# Solid Liquid Gas Worksheet
A Solid Liquid Gas Worksheet is a spreadsheet used to record the viscosity of gases. The gas worksheets are used to test different types of gas; the mixtures are then analyzed and plotted on the worksheet.

There are three major types of liquefied gases that can be tested: ethane, ethylene, and butane. The gas worksheets are used to analyze these gases and their relative viscosities.

Solidified gases are much more viscous than their liquid counterparts. They are found in various gaseous forms, including helium, neon, argon, xenon, and krypton. The liquefied gases consist of these same types of gases, but they have been frozen until they are solid or semi-solid. The worksheet can also be used to test other gases, such as methane, ethane, propane, butane, nitrogen, and carbon dioxide.

Liquefied gases are used in industry in the production of chemicals, fuels, lubricants, water, and other products. The worksheet is used in several industries to measure the viscosity of a liquid, and it helps in understanding other types of liquids. An analysis can also be created to determine the rate of heat transfer, based on the temperature and pressure of the liquid. The chart can also be used to compute the temperature a gas would reach at a specific pressure.

Solid gases can be quantified using a calculator, in one of two ways. The first is to take the specific heat capacity of the gas and divide it by its volume; this gives a per-volume measure that is a useful characterization of the gas. The second is to calculate the specific heat capacity using the molecular weight formula, also known as the Gibbs formula.

The gas worksheet contains the measurements of the specific heat capacity and the coefficient of static friction of the solid, liquid, and gas samples. The thermal conductivity and the entropy are also recorded on this worksheet. An example of the entries on the chart would be the difference in specific heat capacity between ethane and water.

The LSA (or electric field) of a gas can be determined by plotting a graph. The LSA for a gas is a function of the atomic density and the molecular weight of the gas. These data can be graphed on the Solid Liquid Gas Worksheet for better understanding.

Before starting to develop a worksheet for gas analysis, you must first know what type of gas is to be tested and what features are to be included in the worksheet. Most users of the Solid Liquid Gas Worksheet take into consideration the viscosity of the gas being tested, its molecular mass, heat capacity, electric field, and specific heat capacity.
# Enhanced Tail Risk, in the Less Risky
By Salil Mehta, Statistical Ideas
There is more risk in less risky asset classes than one may think. This analysis looks at major equity and fixed-income asset classes, both in the U.S. and internationally. The study samples two decades of data, from 1990 to 2010, a period that is fairly representative of the lengthier history of markets through the present.
A higher-order measure of risk, named kurtosis, is designed to look at the relative thickness or thinness of the tail-ends of the distribution. Kurtosis can be used to look at the tail risk of an asset class, versus what we would see if it were normally distributed. Only some market participants know that financial market data do not follow a normal distribution, and even those who do commonly fail to discard the default assumption about the underlying kurtosis of the return distributions.
Kurtosis is calculated by taking the typical (return dispersion)^4. By taking the fourth power, both positive and negative deviations become positive, and larger deviations take on significantly greater weight. So when we see kurtosis levels of, say, four or five for the four risky assets on the right side of the chart below, we know that the distribution has very heavy tails. And while kurtosis, like the standard deviation measure, doesn’t distinguish between the upper tail and the lower tail, it should be noted that skew was negative for all of the asset classes shown here except the non-U.S. bonds (for which skewness was virtually nonexistent). We introduce the term “leptokurtic”, which describes distributions with fatter tails than the normal distribution, such as the risky assets shown.
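As a minimal sketch of this calculation, the C program below computes the sample kurtosis (fourth central moment over squared variance, so a normal distribution scores about 3) for a small, purely hypothetical set of monthly returns; the return values and the figure it prints are illustrative only.

```c
#include <stdio.h>

/* Sample kurtosis: fourth central moment divided by the squared variance.
 * A normal distribution gives about 3; "leptokurtic" series give more. */
static double kurtosis(const double *r, int n)
{
    double mean = 0.0, m2 = 0.0, m4 = 0.0;
    for (int i = 0; i < n; i++)
        mean += r[i];
    mean /= n;
    for (int i = 0; i < n; i++) {
        double d = r[i] - mean;
        m2 += d * d;
        m4 += d * d * d * d;
    }
    m2 /= n;   /* variance (biased estimator) */
    m4 /= n;   /* fourth central moment */
    return m4 / (m2 * m2);
}

int main(void)
{
    /* hypothetical monthly returns, for illustration only */
    double returns[] = { 0.010, -0.020, 0.015, 0.030, -0.080, 0.005,
                         0.020, -0.010, 0.012, 0.070, -0.015, 0.010 };
    int n = (int)(sizeof returns / sizeof returns[0]);

    printf("sample kurtosis = %.2f\n", kurtosis(returns, n));
    return 0;
}
```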
Since we see risky assets having this excess kurtosis in their return distributions, how does this relate to what we see in less risky asset classes (on the left of the chart above)? Here we look at bonds, both in the U.S. as well as internationally. And we see that the typical risk measure of standard deviation is about one-third that of risky assets (~5% versus ~17%). We might say it makes sense for bonds to have this lower risk, by the standard deviation measure. But what happens to those bonds on the higher-order kurtosis statistic?
To be sure, kurtosis is lower for bonds than for stocks, regardless of geography, though not by a lot. Bonds still have a higher degree of kurtosis than would be proportionally assumed from either the normal distribution or the reduction in standard-deviation risk of a non-normal distribution. In other words, there is greater tail risk in these “less risky” instruments than most investors appreciate until after a downturn. This is likely further evidence that statistical aberrations in the markets created simultaneous, correlated inefficiencies across multiple asset classes.
### Salil Mehta
Salil Mehta is an applied statistician with more than 16 years of professional experience, including a dozen years on Wall Street performing proprietary trading and economic research for firms such as Salomon/Citigroup and Morgan Stanley. He served for two years as the Director of Analytics in the U.S. Department of the Treasury for the $700 billion TARP program, and is the former Director of the Policy, Research, and Analysis Department in the Pension Benefit Guaranty Corporation. He completed a graduate degree in mathematical statistics from Harvard University as well as the Chartered Financial Analyst exams. More information about Salil and his research is available on his blog, Statistical Ideas.
# Calculate factorial using recursion
Write a C program to calculate the factorial of a number using recursion. Here’s a simple program to find the factorial of a number using a recursive method in the C programming language.

This program prompts the user to enter an integer, finds the factorial of the input number, and displays the output on screen.
## Factorial of a Number : :
The factorial of a number x is defined as the product of x and all positive integers below x, i.e. the product of all the numbers from 1 to the user-specified number.
The factorial of a positive number n is given by ::
factorial of n (n!) = 1*2*3*4….n
The factorial of a negative number doesn’t exist, and the factorial of 0 is 1. For example, 5! = 1*2*3*4*5 = 120. You will learn to find the factorial of a number using the recursion method in this example.
### Using Recursion : :
We will use a recursive user-defined function to perform the task. Here we have a function `fact()` that calls itself recursively to find the factorial of the input number.

Below is the source code for the C program to calculate a factorial using recursion. It compiles and runs successfully on a Windows system and produces the output shown below:
### SOURCE CODE : :
```/* C program to calculate factorial using recursion */
#include<stdio.h>
long int fact(int n);
int main( )
{
int num;
printf("Enter any number :: ");
scanf("%d", &num);
printf("\nUsing Recursion ---> \n");
if(num<0)
printf("\nError! No factorial for negative number...\n");
else
printf("\nFactorial of [ %d! ] is :: %ld\n", num, fact(num) );
return 0;
}/*End of main()*/
/*Recursive*/
long int fact(int n)
{
if(n == 0)
return(1);
return(n * fact(n-1));
}/*End of fact()*/```
### OUTPUT : :
```/* C program to calculate factorial using recursion */
Enter any number :: 7
Using Recursion --->
Factorial of [ 7! ] is :: 5040
Process returned 0```
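As a brief illustration of how `fact()` unwinds (shown here for a smaller argument than the run above), the recursive calls expand as follows:

```
fact(4)
= 4 * fact(3)
= 4 * (3 * fact(2))
= 4 * (3 * (2 * fact(1)))
= 4 * (3 * (2 * (1 * fact(0))))
= 4 * (3 * (2 * (1 * 1)))   /* base case: fact(0) returns 1 */
= 24
```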
If you find any errors in the above program, or have any queries, questions, or reviews you want to ask us about, you may contact us through our Contact page, or you can also comment below in the comment section. We will try our best to get back to you shortly.
# Method of matched asymptotic expansions

In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt.
## Method overview
In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series[1] found by treating the problem as a regular perturbation (i.e. by setting a relatively small parameter to zero). The other subdomains consist of one or more small areas in which that approximation is inaccurate, generally because the perturbation terms in the problem are not negligible there. These areas are referred to as transition layers, and as boundary or interior layers depending on whether they occur at the domain boundary (as is the usual case in applications) or inside the domain.
An approximation in the form of an asymptotic series is obtained in the transition layer(s) by treating that part of the domain as a separate perturbation problem. This approximation is called the "inner solution," and the other is the "outer solution," named for their relationship to the transition layer(s). The outer and inner solutions are then combined through a process called "matching" in such a way that an approximate solution for the whole domain is obtained.[2][3][4][5]
## A simple example
Consider the boundary value problem
${\displaystyle \varepsilon y''+(1+\varepsilon )y'+y=0,}$
where ${\displaystyle y}$ is a function of independent time variable ${\displaystyle t}$, which ranges from 0 to 1, the boundary conditions are ${\displaystyle y(0)=0}$ and ${\displaystyle y(1)=1}$, and ${\displaystyle \varepsilon }$ is a small parameter, such that ${\displaystyle 0<\varepsilon \ll 1}$.
### Outer solution, valid for t = O(1)
Since ${\displaystyle \varepsilon }$ is very small, our first approach is to treat the equation as a regular perturbation problem, i.e. make the approximation ${\displaystyle \varepsilon =0}$, and hence find the solution to the problem
${\displaystyle y'+y=0.\,}$
Alternatively, consider that when ${\displaystyle y}$ and ${\displaystyle t}$ are both of size O(1), the four terms on the left hand side of the original equation are respectively of sizes O(${\displaystyle \varepsilon }$), O(1), O(${\displaystyle \varepsilon }$) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ${\displaystyle \varepsilon \to 0}$, is therefore given by the second and fourth terms, i.e. ${\displaystyle y'+y=0.\,}$
This has solution
${\displaystyle y=Ae^{-t}\,}$
for some constant ${\displaystyle A}$. Applying the boundary condition ${\displaystyle y(0)=0}$, we would have ${\displaystyle A=0}$; applying the boundary condition ${\displaystyle y(1)=1}$, we would have ${\displaystyle A=e}$. It is therefore impossible to satisfy both boundary conditions, so ${\displaystyle \varepsilon =0}$ is not a valid approximation to make across the whole of the domain (i.e. this is a singular perturbation problem). From this we infer that there must be a boundary layer at one of the endpoints of the domain where ${\displaystyle \varepsilon }$ needs to be included. This region will be where ${\displaystyle \varepsilon }$ is no longer negligible compared to the independent variable ${\displaystyle t}$, i.e. ${\displaystyle t}$ and ${\displaystyle \varepsilon }$ are of comparable size, i.e. the boundary layer is adjacent to ${\displaystyle t=0}$. Therefore, the other boundary condition ${\displaystyle y(1)=1}$ applies in this outer region, so ${\displaystyle A=e}$, i.e. ${\displaystyle y_{\mathrm {O} }=e^{1-t}\,}$ is an accurate approximate solution to the original boundary value problem in this outer region. It is the leading-order solution.
### Inner solution, valid for t = O(ε)
In the inner region, ${\displaystyle t}$ and ${\displaystyle \varepsilon }$ are both tiny, but of comparable size, so define the new O(1) time variable ${\displaystyle \tau =t/\varepsilon }$. Rescale the original boundary value problem by replacing ${\displaystyle t}$ with ${\displaystyle \tau \varepsilon }$, and the problem becomes
${\displaystyle {\frac {1}{\varepsilon }}y''(\tau )+\left({1+\varepsilon }\right){\frac {1}{\varepsilon }}y'(\tau )+y(\tau )=0,\,}$
which, after multiplying by ${\displaystyle \varepsilon }$ and taking ${\displaystyle \varepsilon =0}$, is
${\displaystyle y''+y'=0.\,}$
Alternatively, consider that when ${\displaystyle t}$ has reduced to size O(${\displaystyle \varepsilon }$), then ${\displaystyle y}$ is still of size O(1) (using the expression for ${\displaystyle y_{\mathrm {O} }}$), and so the four terms on the left hand side of the original equation are respectively of sizes O(${\displaystyle \varepsilon ^{-1}}$), O(${\displaystyle \varepsilon ^{-1}}$), O(1) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ${\displaystyle \varepsilon \to 0}$, is therefore given by the first and second terms, i.e. ${\displaystyle y''+y'=0.\,}$
This has solution
${\displaystyle y=B-Ce^{-\tau }\,}$
for some constants ${\displaystyle B}$ and ${\displaystyle C}$. Since ${\displaystyle y(0)=0}$ applies in this inner region, this gives ${\displaystyle B=C}$, so an accurate approximate solution to the original boundary value problem in this inner region (it is the leading-order solution) is
${\displaystyle y_{\mathrm {I} }=B\left({1-e^{-\tau }}\right)=B\left({1-e^{-t/\varepsilon }}\right).\,}$
### Matching
We use matching to find the value of the constant ${\displaystyle B}$. The idea of matching is that the inner and outer solutions should agree for values of ${\displaystyle t}$ in an intermediate (or overlap) region, i.e. where ${\displaystyle \varepsilon \ll t\ll 1}$. We need the outer limit of the inner solution to match the inner limit of the outer solution, i.e. ${\displaystyle \lim _{\tau \rightarrow \infty }y_{\mathrm {I} }=\lim _{t\to 0}y_{\mathrm {O} },\,}$ which gives ${\displaystyle B=e}$.
The above problem is the simplest of the simple problems dealing with matched asymptotic expansions. One can immediately calculate that ${\displaystyle e^{1-t}}$ is the entire asymptotic series for the outer region whereas the ${\displaystyle {\mathcal {O}}(\epsilon )}$ correction to the inner solution ${\displaystyle y_{\mathrm {I} }}$ is ${\displaystyle B_{1}(1-e^{-t})-{\underline {et}}}$ and the constant of integration ${\displaystyle B_{1}}$ must be obtained from inner-outer matching.
Notice that the intuitive idea for matching of taking the limits, i.e. ${\displaystyle \lim _{\tau \rightarrow \infty }y_{\mathrm {I} }=\lim _{t\to 0}y_{\mathrm {O} },\,}$ doesn't apply at this level. This is simply because the underlined term doesn't converge to a limit. The methods to follow in these types of cases are either (a) the method of an intermediate variable or (b) the Van-Dyke matching rule. The former method is cumbersome but always works, whereas the Van-Dyke matching rule is easy to implement but of limited applicability. A concrete boundary value problem having all the essential ingredients is the following.
Consider the boundary value problem
${\displaystyle \varepsilon y''-x^{2}y'-y=1,\quad y(0)=y(1)=1}$
The conventional outer expansion ${\displaystyle y_{\mathrm {O} }=y_{0}+\varepsilon y_{1}+...,}$ gives
${\displaystyle y_{0}=\alpha e^{1/x}-1}$, where ${\displaystyle \alpha }$ must be obtained from matching.
The problem has boundary layers both on the left and on the right. The left boundary layer near ${\displaystyle 0}$ has a thickness ${\displaystyle \varepsilon ^{1/2}}$ whereas the right boundary layer near ${\displaystyle 1}$ has thickness ${\displaystyle \varepsilon }$. Let us first calculate the solution on the left boundary layer by rescaling ${\displaystyle X=x/\varepsilon ^{1/2},\;Y=y}$, then the differential equation to satisfy on the left is
${\displaystyle Y''-\varepsilon ^{1/2}X^{2}Y'-Y=1,\quad Y(0)=1}$
and accordingly, we assume an expansion ${\displaystyle Y^{l}=Y_{0}^{l}+\varepsilon ^{1/2}Y_{1/2}^{l}+...,}$.
The ${\displaystyle {\mathcal {O}}(1)}$ inhomogeneous condition on the left provides us the reason to start the expansion at ${\displaystyle {\mathcal {O}}(1)}$. The leading order solution is ${\displaystyle Y_{0}^{l}=2e^{-X}-1}$.
This with ${\displaystyle 1-1}$ van-Dyke matching gives ${\displaystyle \alpha =0}$.
Let us now calculate the solution on the right by rescaling ${\displaystyle X=(1-x)/\varepsilon ,\;Y=y}$; then the differential equation to satisfy on the right is
${\displaystyle Y''+(1-2\varepsilon X+\varepsilon ^{2}X^{2})Y'-\varepsilon Y=\varepsilon ,\quad Y(1)=1}$,
and accordingly, we assume an expansion
${\displaystyle Y^{r}=Y_{0}^{r}+\varepsilon ^{1/2}Y_{1/2}^{r}+...}$.
The ${\displaystyle {\mathcal {O}}(1)}$ inhomogeneous condition on the right provides us the reason to start the expansion at ${\displaystyle {\mathcal {O}}(1)}$. The leading order solution is ${\displaystyle Y_{0}^{r}=(1-B)+Be^{-X}}$. This with ${\displaystyle 1-1}$ van-Dyke matching gives ${\displaystyle B=2}$. Proceeding in a similar fashion, if we calculate the higher-order corrections we get the solutions as
${\displaystyle Y^{l}=2e^{-X}-1+\varepsilon ^{1/2}e^{-X}({\frac {X^{3}}{3}}+{\frac {X^{2}}{2}}+{\frac {X}{2}})+{\mathcal {O}}(\varepsilon )...,\quad X={\frac {x}{\varepsilon ^{1/2}}}}$.
${\displaystyle y\equiv -1}$.
${\displaystyle Y^{r}=2e^{-X}-1+2\varepsilon e^{-X}(X+X^{2})+{\mathcal {O}}(\varepsilon ^{2})...,\quad X={\frac {1-x}{\varepsilon }}}$.
### Composite solution
To obtain our final, matched, composite solution, valid on the whole domain, one popular method is the uniform method. In this method, we add the inner and outer approximations and subtract their overlapping value, ${\displaystyle \,y_{\mathrm {overlap} }}$, which would otherwise be counted twice. The overlapping value is the outer limit of the inner boundary layer solution, and the inner limit of the outer solution; these limits were above found to equal ${\displaystyle e}$. Therefore, the final approximate solution to this boundary value problem is,
${\displaystyle y(t)=y_{\mathrm {I} }+y_{\mathrm {O} }-y_{\mathrm {overlap} }=e\left({1-e^{-t/\varepsilon }}\right)+e^{1-t}-e=e\left({e^{-t}-e^{-t/\varepsilon }}\right).\,}$
Note that this expression correctly reduces to the expressions for ${\displaystyle y_{\mathrm {I} }}$ and ${\displaystyle y_{\mathrm {O} }}$ when ${\displaystyle t}$ is O(${\displaystyle \varepsilon }$) and O(1), respectively.
### Accuracy
Convergence of approximations. Approximations and exact solutions, which are indistinguishable at this scale, are shown for various ${\displaystyle \varepsilon }$. The outer solution is also shown. Note that since the boundary layer becomes narrower with decreasing ${\displaystyle \varepsilon }$, the approximations converge to the outer solution pointwise, but not uniformly, almost everywhere.
This final solution satisfies the problem's original differential equation (shown by substituting it and its derivatives into the original equation). Also, the boundary conditions produced by this final solution match the values given in the problem, up to a constant multiple. This implies, due to the uniqueness of the solution, that the matched asymptotic solution is identical to the exact solution up to a constant multiple. This is not necessarily always the case; any remaining terms should go to zero uniformly as ${\displaystyle \varepsilon \rightarrow 0}$.
Not only does our solution successfully approximately solve the problem at hand, it closely approximates the problem's exact solution. It happens that this particular problem is easily found to have exact solution
${\displaystyle y(t)={\frac {e^{-t}-e^{-t/\varepsilon }}{e^{-1}-e^{-1/\varepsilon }}},\,}$
which has the same form as the approximate solution, up to the multiplicative constant. The approximate solution is the first term in a binomial expansion of the exact solution in powers of ${\displaystyle e^{1-1/\varepsilon }}$.
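As a minimal numerical sketch, the composite approximation above can be checked against this exact solution pointwise; the short C program below evaluates both for ε = 0.1 (the value of ε and the sampling points are chosen only for illustration).

```c
#include <stdio.h>
#include <math.h>

/* Composite (matched) approximation: y ≈ e * (exp(-t) - exp(-t/eps)) */
static double y_composite(double t, double eps)
{
    return exp(1.0) * (exp(-t) - exp(-t / eps));
}

/* Exact solution of eps*y'' + (1+eps)*y' + y = 0, y(0) = 0, y(1) = 1 */
static double y_exact(double t, double eps)
{
    return (exp(-t) - exp(-t / eps)) / (exp(-1.0) - exp(-1.0 / eps));
}

int main(void)
{
    double eps = 0.1;
    printf("  t      exact    composite\n");
    for (double t = 0.0; t <= 1.0001; t += 0.1)
        printf("%5.2f  %8.5f  %8.5f\n", t, y_exact(t, eps), y_composite(t, eps));
    return 0;
}
```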
### Location of boundary layer
Conveniently, we can see that the boundary layer, where ${\displaystyle y'}$ and ${\displaystyle y''}$ are large, is near ${\displaystyle t=0}$, as we supposed earlier. If we had supposed it to be at the other endpoint and proceeded by making the rescaling ${\displaystyle \tau =(1-t)/\varepsilon }$, we would have found it impossible to satisfy the resulting matching condition. For many problems, this kind of trial and error is the only way to determine the true location of the boundary layer.[2]
## Harder problems
The problem above is a simple example because it is a single equation with only one dependent variable, and there is one boundary layer in the solution. Harder problems may contain several co-dependent variables in a system of several equations, and/or with several boundary and/or interior layers in the solution.
It is often desirable to find more terms in the asymptotic expansions of both the outer and the inner solutions. The appropriate form of these expansions is not always clear: while a power-series expansion in ${\displaystyle \varepsilon }$ may work, sometimes the appropriate form involves fractional powers of ${\displaystyle \varepsilon }$, functions such as ${\displaystyle \varepsilon \log \varepsilon }$, et cetera. As in the above example, we will obtain outer and inner expansions with some coefficients which must be determined by matching.[6]
## Second-order differential equations
### Schrödinger-like second-order differential equations
A method of matched asymptotic expansions - with matching of solutions in the common domain of validity - has been developed and used extensively by Dingle and Müller-Kirsten for the derivation of asymptotic expansions of the solutions and characteristic numbers (band boundaries) of Schrödinger-like second-order differential equations with periodic potentials - in particular for the Mathieu equation[7] (best example), Lamé and ellipsoidal wave equations,[8] oblate[9] and prolate[10] spheroidal wave equations, and equations with anharmonic potentials.[11]
### Convection-diffusion equations
Methods of matched asymptotic expansions have been developed to find approximate solutions to the Smoluchowski convection-diffusion equation, which is a singularly perturbed second-order differential equation. The problem has been studied particularly in the context of colloid particles in linear flow fields, where the variable is given by the pair distribution function around a test particle. In the limit of low Péclet number, the convection-diffusion equation also presents a singularity at infinite distance (where normally the far-field boundary condition should be placed) due to the flow field being linear in the interparticle separation. This problem can be circumvented with a spatial Fourier transform as shown by Jan Dhont.[12] A different approach to solving this problem was developed by Alessio Zaccone and coworkers and consists in placing the boundary condition right at the boundary layer distance, upon assuming (in a first-order approximation) a constant value of the pair distribution function in the outer layer due to convection being dominant there. This leads to an approximate theory for the encounter rate of two interacting colloid particles in a linear flow field in good agreement with the full numerical solution.[13] When the Péclet number is significantly larger than one, the singularity at infinite separation no longer occurs and the method of matched asymptotics can be applied to construct the full solution for the pair distribution function across the entire domain.[14][15]
## References
1. ^ R.B. Dingle (1973), Asymptotic Expansions: Their Derivation and Interpretation, Academic Press.
2. ^ a b Verhulst, F. (2005). Methods and Applications of Singular Perturbations: Boundary Layers and Multiple Timescale Dynamics. Springer. ISBN 0-387-22966-3.
3. ^ Nayfeh, A. H. (2000). Perturbation Methods. Wiley Classics Library. Wiley-Interscience. ISBN 978-0-471-39917-9.
4. ^ Kevorkian, J.; Cole, J. D. (1996). Multiple Scale and Singular Perturbation Methods. Springer. ISBN 0-387-94202-5.
5. ^ Bender, C. M.; Orszag, S. A. (1999). Advanced Mathematical Methods for Scientists and Engineers. Springer. ISBN 978-0-387-98931-0.
6. ^ Hinch, John (1991). Perturbation Methods. Cambridge University Press.
7. ^ R.B. Dingle and H. J. W. Müller, J. reine angew. Math. 211 (1962) 11-32 and 216 (1964) 123-133; H.J.W. Müller, J. reine angew. Math. 211 (1962) 179-190.
8. ^ H.J.W. Müller, Mathematische Nachrichten 31 (1966) 89-101, 32 (1966) 49-62, 32 (1966) 157-172.
9. ^ H.J.W. Müller, J. reine angew. Math. 211 (1962) 33-47.
10. ^ H.J.W. Müller, J. reine angew. Math. 212 (1963) 26-48.
11. ^ H.J.W. Müller-Kirsten (2012), Introduction to Quantum Mechanics: Schrödinger Equation and Path Integral, 2nd ed., World Scientific, ISBN 978-9814397742. Chapter 18 on Anharmonic potentials.
12. ^ Dhont, J. K. G. An Introduction to the Dynamics of Colloids.
13. ^ Zaccone, A.; Gentili, D.; Wu, H.; Morbidelli, M. (2009). "Theory of activated-rate processes under shear with application to shear-induced aggregation of colloids". Physical Review E. 80: 051404. doi:10.1103/PhysRevE.80.051404. hdl:2434/653702.
14. ^ Banetta, L.; Zaccone, A. (2019). "Radial distribution function of Lennard-Jones fluids in shear flows from intermediate asymptotics". Physical Review E. 99: 052606. arXiv:1901.05175. doi:10.1103/PhysRevE.99.052606.
15. ^ Banetta, L.; Zaccone, A. (2020). "Pair correlation function of charge-stabilized colloidal systems under sheared conditions". Colloid and Polymer Science. 298 (7): 761–771. doi:10.1007/s00396-020-04609-4. | 4,778 | 18,375 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 112, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.28125 | 4 | CC-MAIN-2021-49 | latest | en | 0.956054 |
# Comparing numbers
In this lesson, you will compare numbers using mathematical language.
Quiz:

# Intro quiz - Recap from previous lesson

Before we start this lesson, let’s see what you can remember from this topic. Here’s a quick quiz!

Q1. Is there a difference of one or two between the different sets of cubes?

Q2. True or false: there is a difference of 1 between the rows of carrots.

Q3. True or false: there is a difference of 2 between these circled numbers.
# Video
Click on the play button to start the video. If your teacher asks you to pause the video and look at the worksheet you should:
• Click "Close Video"
• Click "Next" to view the activity
Your video will re-appear on the next page, and will stay paused in the right place.
# Worksheet
These slides will take you through some tasks for the lesson. If you need to re-play the video, click the ‘Resume Video’ icon. If you are asked to add answers to the slides, first download or print out the worksheet. Once you have finished all the tasks, click ‘Next’ below.
Quiz:

# U9 L4 End of Lesson Quiz

Q1. What is the difference between 4 and 6?

Q2. True or false: there is a difference of 5 between 4 and 9.

Q3. What might the missing number be? Select one of the following.
# Lesson summary: Comparing numbers
### Time to move!
Did you know that exercise helps your concentration and ability to learn?
For 5 minutes: move around (jog), or stay on the spot (dance).
# TARGET-NET
## Important definitions of sets in Discrete Mathematics for UGC NET Computer Science
Sets:
SET: A set is an unordered collection of objects, called elements or members of the set. A set is said to contain its elements. We write a ∈ A to denote that a is an element of the set A. The notation a ∉ A denotes that a is not an element of the set A.
N = {0, 1, 2, 3, . . .}, the set of natural numbers
Z = {. . . , −2, −1, 0, 1, 2, . . .}, the set of integers
Z+ = {1, 2, 3, . . .}, the set of positive integers
Q = {p/q | p ∈ Z, q ∈ Z, and q ≠ 0}, the set of rational numbers
R, the set of real numbers
R+, the set of positive real numbers
C, the set of complex numbers
Two sets are equal if and only if they have the same elements. Therefore, if A and B are sets, then A and B are equal if and only if ∀x(x ∈ A ↔ x ∈ B). We write A = B if A and B are equal sets.

There is a special set that has no elements. This set is called the empty set, or null set, and is denoted by ∅. The empty set can also be denoted by { }.

The set A is a subset of B if and only if every element of A is also an element of B. We use the notation A ⊆ B to indicate that A is a subset of the set B.

For every set S, (i) ∅ ⊆ S and (ii) S ⊆ S.

Two sets A and B are equal if A ⊆ B and B ⊆ A.

Let S be a set. If there are exactly n distinct elements in S, where n is a nonnegative integer, we say that S is a finite set and that n is the cardinality of S. The cardinality of S is denoted by |S|.

A set is said to be infinite if it is not finite.
Given a set S, the power set of S is the set of all subsets of the set S. The power set of S is denoted by P(S).
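As a minimal sketch, each subset of an n-element set can be identified with an n-bit mask, which makes |P(S)| = 2^|S| easy to see; the C program below (with a small example set {a, b, c} chosen only for illustration) lists the power set.

```c
#include <stdio.h>

int main(void)
{
    /* example set S = {a, b, c}; each subset corresponds to a 3-bit mask */
    const char *elems[] = { "a", "b", "c" };
    int n = 3;

    for (unsigned mask = 0; mask < (1u << n); mask++) {
        printf("{ ");
        for (int i = 0; i < n; i++)
            if (mask & (1u << i))   /* bit i set -> element i is in the subset */
                printf("%s ", elems[i]);
        printf("}\n");
    }
    /* prints 2^3 = 8 subsets, i.e. |P(S)| = 2^|S| */
    return 0;
}
```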
Let A and B be sets. The Cartesian product of A and B, denoted by A × B, is the set of all ordered pairs (a, b), where a ∈ A and b ∈ B. Hence, A × B = {(a, b) | a ∈ A ∧ b ∈ B}.
Let A and B be sets. The union of the sets A and B, denoted by A ∪ B, is the set that contains those elements that are either in A or in B, or in both. An element x belongs to the union of the sets A and B if and only if x belongs to A or x belongs to B. This tells us that A ∪ B = {x | x ∈ A ∨ x ∈ B}.
Let A and B be sets. The intersection of the sets A and B, denoted by A ∩ B, is the set containing those elements in both A and B. An element x belongs to the intersection of the sets A and B if and only if x belongs to A and x belongs to B. This tells us that A ∩ B = {x | x ∈ A ∧ x ∈ B}.
Two sets are called disjoint if their intersection is the empty set. Let A and B be sets. The difference of A and B, denoted by A − B, is the set containing those elements that are in A but not in B. The difference of A and B is also called the complement of B with respect to A. An element x belongs to the difference of A and B if and only if x ∈ A and x ∉ B. This tells us that A − B = {x | x ∈ A ∧ x ∉ B}.
Let U be the universal set. The complement of the set A, denoted by Aᶜ, is the complement of A with respect to U. Therefore, the complement of the set A is Aᶜ = U − A.
Set Identities (writing the complement of A as Aᶜ):

Identity laws: A ∩ U = A; A ∪ ∅ = A
Domination laws: A ∪ U = U; A ∩ ∅ = ∅
Idempotent laws: A ∪ A = A; A ∩ A = A
Complementation law: (Aᶜ)ᶜ = A
Commutative laws: A ∪ B = B ∪ A; A ∩ B = B ∩ A
Associative laws: A ∪ (B ∪ C) = (A ∪ B) ∪ C; A ∩ (B ∩ C) = (A ∩ B) ∩ C
Distributive laws: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C); A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C)
De Morgan’s laws: (A ∩ B)ᶜ = Aᶜ ∪ Bᶜ; (A ∪ B)ᶜ = Aᶜ ∩ Bᶜ
Absorption laws: A ∪ (A ∩ B) = A; A ∩ (A ∪ B) = A
Complement laws: A ∪ Aᶜ = U; A ∩ Aᶜ = ∅
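As a minimal sketch of these identities in practice, subsets of a small universal set can be represented as bit masks (union = OR, intersection = AND, complement = NOT masked to the universe); the C program below, with two example sets chosen only for illustration, checks De Morgan’s laws and the complement laws.

```c
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uint16_t U = 0xFFFF;           /* universal set with 16 elements */
    uint16_t A = 0x00FF, B = 0x0F0F;     /* two example subsets */

    /* De Morgan's laws on bit sets */
    uint16_t lhs1 = (uint16_t)(~(A & B) & U);
    uint16_t rhs1 = (uint16_t)((~A & U) | (~B & U));
    uint16_t lhs2 = (uint16_t)(~(A | B) & U);
    uint16_t rhs2 = (uint16_t)((~A & U) & (~B & U));

    printf("(A ∩ B)^c = A^c ∪ B^c : %s\n", lhs1 == rhs1 ? "yes" : "no");
    printf("(A ∪ B)^c = A^c ∩ B^c : %s\n", lhs2 == rhs2 ? "yes" : "no");

    /* Complement laws */
    printf("A ∪ A^c = U : %s\n", ((A | (uint16_t)(~A & U)) == U) ? "yes" : "no");
    printf("A ∩ A^c = ∅ : %s\n", ((A & (uint16_t)(~A & U)) == 0) ? "yes" : "no");
    return 0;
}
```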
# Gear (Sprocket) Types

Key to the dimension letters used in the tables:
A = pilot bore
B = maximum recommended bore
D = hub diameter
L = length

Example: sprocket 2.60.35 BT5
2 = two strands
60 = pitch 19.05 mm (3/4")
35 = number of teeth
B = DIN standard (ASA = A, DIN = B)
T5 = type 5

Hub types: Type 1, Type 2, Type 3, Type 4 and Type 5 (the catalogue drawings for each type mark the dimensions L, t, A and B).

NUMBER OF STRANDS
Simple = 1
Double = 2
Triple = 3
Quadruple = 4

PITCH CODES
9.525 mm = 35
12.700 mm = 40
15.875 mm = 50
19.050 mm = 60
25.400 mm = 80
31.750 mm = 100
38.100 mm = 120
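The dimension tables that follow give, for each tooth count Z, the sprocket's pitch diameter and outside diameter. As a minimal sketch, the tabulated pitch diameters are consistent with the standard roller-chain sprocket relation PD = p / sin(180°/Z); the C program below (with example tooth counts chosen only for illustration) evaluates it for the 9.525 mm (3/8") pitch.

```c
#include <stdio.h>
#include <math.h>

#define PI 3.14159265358979323846

/* Standard roller-chain sprocket relation: pitch diameter = p / sin(pi / z),
   where p is the chain pitch and z the number of teeth. */
static double pitch_diameter(double pitch_mm, int teeth)
{
    return pitch_mm / sin(PI / teeth);
}

int main(void)
{
    double p = 9.525;               /* 3/8" chain pitch, in mm */
    for (int z = 10; z <= 15; z++)  /* a few example tooth counts */
        printf("z = %2d  pitch diameter = %6.2f mm\n", z, pitch_diameter(p, z));
    return 0;
}
```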
## Sprockets, pitch 9.525 mm (3/8")

Chain data, ASA standard: inner width 4.77 mm, roller diameter 5.08 mm. Chain data, DIN standard: inner width 5.72 mm, roller diameter 6.35 mm.

The dimension table for this pitch covers tooth counts Z from 10 to 114 and lists, for single- and double-strand sprockets of each size: pitch diameter, outside diameter, the bore and hub dimensions A, B and D, the length L, and the hub type.
## Sprockets, pitch 12.70 mm (1/2")

Chain data, ASA standard: inner width 7.95 mm, roller diameter 7.94 mm. Chain data, DIN standard: inner width 7.75 mm, roller diameter 8.51 mm.

The dimension table for this pitch covers tooth counts Z from 9 to 114 and lists, for single- and double-strand sprockets of each size: pitch diameter, outside diameter, the bore and hub dimensions A, B and D, the length L, and the hub type.
## Sprockets, pitch 15.875 mm (5/8")

Chain data, ASA standard: inner width 9.53 mm, roller diameter 10.16 mm. Chain data, DIN standard: inner width 9.65 mm, roller diameter 10.16 mm.

The dimension table for this pitch covers tooth counts Z from 8 to 114 and lists, for single- and double-strand sprockets of each size: pitch diameter, outside diameter, the bore and hub dimensions A, B and D, the length L, and the hub type.
## Sprockets, pitch 19.05 mm (3/4")

Chain data, ASA standard: inner width 12.70 mm, roller diameter 11.90 mm. Chain data, DIN standard: inner width 11.68 mm, roller diameter 12.07 mm.

The dimension table for this pitch covers tooth counts Z from 8 to 114 and lists, for single- and double-strand sprockets of each size: pitch diameter, outside diameter, the bore and hub dimensions A, B and D, the length L, and the hub type.
## Sprockets, pitch 25.40 mm (1")

Chain data, ASA standard: inner width 15.88 mm, roller diameter 15.87 mm. Chain data, DIN standard: inner width 17.02 mm, roller diameter 15.88 mm.

The dimension table for this pitch covers tooth counts Z from 8 to 95 and lists, for single- and double-strand sprockets of each size: pitch diameter, outside diameter, the bore and hub dimensions A, B and D, the length L, and the hub type.
## Engrenagens Passo 31,75mm - 1 1/4
Norma ASA
Corrente:
Largura interna:
Dimetro do rolo:
Norma DIN
Corrente:
Largura interna:
Dimetro do rolo:
19,05
19,05
Engrenagens:
19,56
19,05
Engrenagens:
53,0
17,2
e p
A B
Z
008
009
010
011
012
013
014
015
016
017
018
019
020
021
022
023
024
025
026
027
028
029
030
031
032
033
034
035
036
037
93,0
103,0
113,0
123,0
133,0
145,0
155,0
165,0
175,0
185,0
195,0
205,0
215,0
225,0
235,0
245,0
255,0
265,0
277,0
287,0
297,0
307,0
317,0
328,0
338,0
348,0
358,0
368,0
378,0
388,0
20
20
25
25
25
30
30
30
30
30
30
30
30
30
30
35
35
35
35
35
35
35
35
35
35
35
35
35
35
35
30
36
44
51
58
65
71
77
65
65
73
73
73
73
73
73
73
73
73
73
83
83
83
83
83
83
83
83
83
83
45
55
66
77
87
98
106
116
98
98
110
110
110
110
110
110
110
110
110
110
124
124
124
124
124
124
124
124
124
124
Double
L Types L Types
35
35
40
40
40
40
40
40
45
45
45
45
45
45
45
50
50
50
50
50
50
50
56
56
56
56
56
56
56
56
A B
p e
B A
Single
p e A B D
17,6
e p
Pitch diameter
Outside diameter
82,96
92,83
102,74
112,70
122,67
132,67
142,68
152,71
162,74
172,79
182,84
192,90
202,96
213,03
223,10
233,17
243,25
253,33
263,40
273,49
283,57
293,66
303,75
313,83
323,93
334,01
344,10
354,20
364,29
374,38
17,6
p e
B A
No. of teeth
54,1
17,2
2-3
2-3
2-3
2-3
2-3
2-3
2-3
2-3
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
60
60
70
70
70
70
70
70
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
75
2-3
2-3
2-3
2-3
2-3
2-3
2-3
2-3
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
No. of teeth
Z
038
039
040
041
042
043
044
045
046
047
048
049
050
051
052
053
054
055
056
057
058
059
060
061
076
-
Pitch diameter
Outside diameter
Single
p e A B D
384,48
394,57
404,67
414,76
424,86
434,96
445,06
455,15
465,25
475,35
485,45
495,55
505,65
515,75
525,85
535,95
546,05
556,15
566,25
576,35
586,45
596,55
606,66
616,76
768,30
-
398,0
408,0
418,0
429,0
439,0
449,0
459,0
469,0
479,0
489,0
499,0
509,0
519,0
530,0
540,0
550,0
560,0
570,0
580,0
590,0
600,0
610,0
620,0
630,0
782,0
-
35
35
35
35
35
35
35
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
-
83
83
83
83
83
83
83
83
83
83
90
90
90
90
90
90
90
90
90
90
90
90
90
90
100
-
124
124
124
124
124
124
124
124
124
124
136
136
136
136
136
136
136
136
136
136
136
136
136
136
150
-
Double
L Types L Types
56
56
56
56
56
56
56
56
56
56
56
56
56
56
56
56
65
65
65
65
65
65
65
65
65
-
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
-
75
75
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
80
-
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
-
## Sprockets - Pitch 38,10 mm (1 1/2")
ASA standard
Chain:
Inner width:
Roller diameter:
DIN standard
Chain:
Inner width:
Roller diameter:
25,40
22,22
Sprockets:
25,40
25,40
Sprockets:
71,3
68,3
22,9
e p
A B
Z
008
009
010
011
012
013
013
014
015
016
017
018
019
020
021
022
023
024
025
026
027
028
029
030
031
Single
p e A B D
99,56
111,40
123,29
135,23
147,21
159,20
159,20
171,22
183,25
195,29
207,35
219,41
231,48
243,55
255,63
267,72
279,80
291,90
303,99
316,09
328,19
340,29
352,39
364,49
376,60
113,0
125,0
136,0
148,0
160,0
175,0
175,0
187,0
199,0
211,0
223,0
235,0
247,0
259,0
271,0
283,0
295,0
308,0
320,0
334,0
346,0
358,0
370,0
383,0
395,0
25
25
30
35
35
35
35
35
35
35
35
35
35
35
35
35
35
35
40
40
40
40
40
40
40
35
43
52
61
69
65
77
65
73
73
73
73
73
73
73
83
83
83
83
83
83
83
83
83
83
53
65
78
91
103
98
115
98
110
110
110
110
110
110
110
124
124
124
124
124
124
124
124
124
124
Double
L Types L Types
45
45
50
50
50
50
50
50
50
55
55
55
55
55
55
55
55
55
55
55
55
55
55
55
A B
p e
B A
Pitch diameter
Outside diameter
22,9
e p
p e
B A
No. of teeth
22,9
22,9
2-3
2-3
2-3
2-3
2-3
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
80
80
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
2-3
2-3
2-3
2-3
2-3
2-3
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
No. of teeth
Z
032
033
034
035
036
037
038
039
040
041
042
043
044
045
046
047
048
049
050
054
057
060
-
Pitch diameter
Outside diameter
Single
p e A B D
388,71
400,82
412,93
425,03
437,15
449,26
461,37
473,49
485,60
497,72
509,83
521,95
534,07
546,16
558,30
570,42
582,54
594,66
606,78
655,26
691,62
727,99
-
407,0
419,0
431,0
443,0
455,0
467,0
479,0
492,0
504,0
516,0
528,0
540,0
552,0
564,0
576,0
589,0
601,0
613,0
625,0
673,0
710,0
746,0
-
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
40
-
83
83
83
83
83
83
90
90
90
90
90
90
90
90
90
90
90
90
90
90
90
100
-
124
124
124
124
124
124
136
136
136
136
136
136
136
136
136
136
136
136
136
136
136
150
-
Double
L Types L Types
55
55
55
55
55
55
55
55
60
60
60
60
60
60
60
60
60
60
60
60
60
60
-
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
-
90
90
90
90
90
90
95
95
95
95
95
95
95
95
95
95
95
95
95
95
95
95
-
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
4-5
-
## ISO tolerances for holes
H7 - sliding fit
G7 - semi-rotating fit
F7 - rotating (running) fit
E7 - light running fit
ISO h7 (shaft)
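The deviations tabulated below are given in microns (µm) and are combined with the shaft deviations to judge a fit. A minimal Python sketch of that arithmetic (the deviation values in the example are assumptions for illustration; read the actual figures for your size range from the table):

```python
def fit_clearance(hole_dev_um, shaft_dev_um):
    """Min/max clearance (µm) of a hole/shaft fit from (lower, upper) deviations."""
    hole_lo, hole_hi = hole_dev_um
    shaft_lo, shaft_hi = shaft_dev_um
    return hole_lo - shaft_hi, hole_hi - shaft_lo  # (tightest case, loosest case)

# Hypothetical example: an H7 hole (0 / +25 µm) with an h7 shaft (-25 / 0 µm)
# in the 30-50 mm range gives a sliding fit with 0 to 50 µm of clearance.
print(fit_clearance((0, 25), (-25, 0)))
```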
in mm
up to 3
G6
H6
J6
K6
M6
N6
E7
F7
G7
H7
J7
K7
M7
N7
P7
+3
-4
-7
-11
+14
+7
+3
-6
-9
-13
-16
+10
+7
+3
-4
+23
+16
+12
+9
+3
-4
-7
above 3
+4
-4
-9
-13
+20
+10
+4
-7
-12
-16
-20
up to 6
+12
+8
+4
-1
-5
+32
+22
+16
+12
+5
-4
-8
above 6
+5
-4
-7
-12
-16
+25
+13
+5
-7
-10
-15
-19
-24
up to 10
+14
+9
+5
+2
-3
-7
+40
+28
+20
+15
+8
+5
-4
-9
above 10
+6
-5
-9
-15
-20
+32
+16
+6
-8
-12
-18
-23
-29
up to 18
+17
+11
+6
+2
-4
-9
+50
+34
+24
+18
+10
+6
-5
-11
above 18
+7
-5
-11
-17
-24
+40
+20
+7
-9
-15
-21
-28
-35
up to 30
+20
+13
+8
+2
-4
-11
+61
+41
+28
+21
+12
+6
-7
-14
above 30
+9
-6
-13
-20
-28
+50
+25
+9
-11
-18
-25
-33
-42
up to 50
+25
+16
+10
+3
-4
-12
+75
+50
+34
+25
+14
+7
-8
-17
above 50
+10
-6
-15
-24
-33
+60
+30
+10
-12
-21
-30
-39
-51
up to 80
+29
+19
+13
+4
-5
-14
+90
+60
+40
+30
+18
+9
-9
-21
above 80
+12
-6
-18
-28
-38
+72
+36
+12
-13
-25
-35
-45
-59
up to 120
+34
+22
+16
+4
-6
-16
+107
+71
+47
+35
+22
+10
-10
-24
above 120
+14
-7
-21
-33
-45
+85
+43
+14
-14
-28
-40
-52
-68
up to 180
+39
+25
+18
+4
-8
-20
+125
+83
+54
+40
+26
+12
-12
-28
in mm
up to 3
above 3
up to 6
above 6
up to 10
above 10
up to 18
above 18
up to 30
above 30
up to 50
above 50
up to 80
above 80
up to 120
D8
E8
+20 +14
F8
H8
J8
+7
-7
K8
M8
N8
D9
E9
-15
+20 +14
H9
J9
D 10
H 10
J 10
D 11
H 11
J 11
-13
+20
-20
+20
-30
+34 +28 +21 +14
+7
-1
+45 +39 +25 +12 +60 +40 +20 +80 +60 +30
-9
-20
+30 +20
-15
+30
-24
+30
-38
+48 +38 +28 +18
+9
-2
+60 +50 +30 +15 +78 +48 +24 +105 +75 +37
-10
-16
-21
-25
+40 +25
+62 +47 +35 +22 +12
+6
+1
-3
+76 +61 +36 +18 +98 +58 +29 +130 +90 +45
+50 +32 +16
0
0
+50 +32
-3
+93 +75 +43 +21 +120 +70 +35 +160 +110 +55
+65 +40 +20
-23
-29
-36
+65 +40
+4
-3
+117 +92 +52 +26 +149 +84 +42 +195 +130 +65
+80 +50 +25
-34
-42
+80 +50
+5
-3
+142 +112 +62 +31 +180 +100 +50 +240 +160 +80
-32
-41
+5
-4
-38
-48
+6
-4
-43
-55
+8
-4
0
0
-18
-20
-22
-31
+80
-37 +100
0
0
-50
+80
-60 +100
-65
-27
0
-15
+65
-55
0
-42
-45
-30
+50
+2
+65
-35
+40
-25
-26
-29
+8
+50
-19
-13
-22
+40
-12
-18
above 120 +145 +85 +43
up to 180
-80
-95
+174 +134 +74 +37 +220 +120 +60 +290 +190 +95
0
-44 +120
-70 +120
-110
+207 +159 +87 +43 +260 +140 +70 +340 +220 +110
0
-50 +145
-80 +145
-125
+245 +185 +100 +50 +305 +160 +80 +395 +250 +125
Tolerances in microns (µm)
b
h
t
Shaft diameter
Width
Height
Gear
10 - 12
2,5
1,7
13 - 17
2,2
18 - 22
3,5
2,7
23 - 30
3,2
31 - 38
10
4,5
3,7
39 - 44
12
4,5
3,7
45 - 50
14
4,2
51 - 58
16
10
5,2
59 - 68
18
11
5,3
69 - 78
20
12
6,3
79 - 92
24
14
7,3
93 - 110
28
16
8,3
111 - 130
32
18
9,3
131 - 150
36
20
10
10,3 | 17,282 | 26,875 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.546875 | 3 | CC-MAIN-2019-35 | latest | en | 0.220494 |
https://www.cardschat.com/poker/strategy/odds/pot-odds-implied/ | 1,725,751,374,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700650926.21/warc/CC-MAIN-20240907225010-20240908015010-00331.warc.gz | 670,267,587 | 32,498 | # Poker Pot Odds & Implied Odds
• Reviewed by WSOP Winner Chris ‘Fox’ Wallace
## An Introduction to Pot Odds
When you bet (or call a bet) you are, of course, trying to win the chips that are already in the pot. How often do you have to win to make this profitable? Clearly not every time – if it costs you 10 to call and there is 100 in the pot, then you’d be able to lose 9 times out of 10 and still break even.
This is the essence of pot odds: you’re paying a fraction to win a larger sum. If you’re more likely to win than you have to pay, then your bet/call is a winning move in the long run.
Or, put another way, if the probability ratio of losing to winning is lower than the ratio of chips in the pot to chips you must put in, it’s a sound play.
Let’s try one of the standard examples for pot odds in poker:
### The Flush Draw
First you need to consider your poker odds of hitting the winning hand. In the case of a flush draw on the turn in Texas Hold’em, you’re getting about 4-1 (actually 37-9, since there are 37 cards that will ‘miss’ you, and nine that will give you the flush, but 4-1 is a close enough approximation) that the flush will be the best hand.
This means that the pot odds need to be 4-1 or longer in order to make your draw profitable. For instance, if your pot odds are shorter, let’s say 3-1 (e.g. 30 in the pot to be won, with 10 to call), you would get this expected value calculation:
(-\$10 x 37/46) + (\$30 x 9/46) = -\$8.04 + \$5.87 = -\$2.17
What does this mean?
It means that, with the hand above, if there’s only \$30 in the pot and you have to pay \$10 to win it, you’ll lose on average a little over \$2 every time you do it. Not a good thing.
What if the pot was \$50?
(-\$10 x 37/46) + (\$50 x 9/46) = -\$8.04 + \$9.78 = \$1.74
Here, you average over \$1 profit for every call you make.
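The two expected-value calculations above are easy to reproduce; here is a minimal Python sketch (not from the original article) using the same flush-draw numbers:

```python
def call_ev(pot, call, outs, unseen=46):
    """Expected value of calling `call` chips into `pot` with `outs` winning cards left."""
    win_prob = outs / unseen
    return pot * win_prob - call * (1 - win_prob)

print(call_ev(pot=30, call=10, outs=9))  # about -2.17: a losing call
print(call_ev(pot=50, call=10, outs=9))  # about +1.74: a profitable call
```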
Understanding the concept of pot odds is essential in order to play winning poker. Poker – especially limit poker – is taking a relatively small edge and repeating it relentlessly, over and over again, and making a profit from it. Making plays that don’t pay off in the long run will instead turn that profit into a loss.
Having said that, let’s look at that first calculation again. Is it really a \$2.17 loss? Always? Well, that depends a whole lot on what happens after you actually hit the flush. And this moves us into the next concept: poker and implied odds.
## Implied Odds
Where ‘pot odds’ takes into consideration the money that’s in the pot at the time, ‘implied odds’ is an estimation of how much money you CAN win if you hit one of your outs. For instance, with 100 in the pot, and a bet of 20, is your gain really only 100 if you win? Could you not squeeze out an extra few bucks from your opponent if you hit your flush? You probably can – and so as the pot will get bigger, your implied odds go up.
A good example of when implied odds in poker come into play is when you limp in with a small or medium pair before the flop in hold ’em. Your chance of hitting a set (which is typically the only way a small or medium pair will win) is around 7.5-1, which means the pot needs to have 6 or 7 other limpers to make it worthwhile.
But, of course, that’s presuming that everyone will fold if you hit your set, which is rarely the case. Let’s say instead that you get four other limpers and your bets will narrow the field down by 50% on the flop, and another 50% on the turn – what are your implied odds? We’ll use limit poker for this example, so the figures relate to the number of small bets you can win.
• FOUR LIMPERS TO THE FLOP = 4 small bets
• TWO CALLERS TO THE TURN = 2 small bets
• ONE CALLER TO THE RIVER = 1 big bet (2 small bets)
Here, you stand to win 8 small bets, for the initial price of 1. By this count, your implied odds are good to make this pre-flop call with a weak pair because of the money you’ll figure to win if you do hit your set, rather than the amount you’re ‘guaranteed’ to win.
Here’s the downside to implied odds in poker though: They’re an estimation, and as it so happens, people tend to be way too optimistic in calculating them. You have to really consider whether your opponents will contribute more chips to the pot if you do hit your winning card.
### Implied Odds – Will They Pay You Off If You Hit?
Let’s say you have:
On a board of:
This gives you 9 outs to a flush, which is a 4-1 shot – just like the flush draw situation at the top of this page. Now let’s say that there were only two of you in the pot, one limper and you in the big blind.
The flop was checked around and your opponent bet at the turn after you checked, so the pot is around 2 big bets. You’re getting pot odds of 2-1 to see the last card, which could give you the nut flush – but do you call? Pot odds say no – the pots odds (2-1) are shorter than your odds of making the winning hand (4-1). Implied odds likely don’t give you the numbers you’re looking for either, but this is where people can get overly optimistic!
If your opponent paired their ace and has no hearts, would they really bet into a four-suited board after the river? Would they call your bet? Probably not. You can hardly figure to win more than the money that’s already in the pot at the turn, because if you make your hand on the river, they’re not going to ‘pay you off’.
Even if they call an extra bet on the river (maybe they have the J), you’re still not getting good enough odds. At that point, your call on the turn will have cost you 1 big bet (BB), and you’re looking at a profit of 3BBs, which gives you pot odds of 3-1. You’d have to successfully check-raise them (and they’d have to call your check-raise) for it to be near profitable, and you’d have to succeed at that every time you hit your flush. Hardly likely.
Some players may think this is mathematical mumbo-jumbo and has no place in a gambler’s heart, but this is really the principle that separates winning players from losing players: being able to tell a profitable bet from a non-profitable one. In the example above, there’s a non-profitable bet being offered. Don’t take it. Learn your poker pot odds and implied odds thoroughly so you know which is the right choice.
Improve your game by checking out even more poker strategies and guides or bring it back to basics with the poker rules for other poker variants.
## FAQ
### What are Implied Odds in poker?
Implied Odds describe how much you may win later in the hand, in relation to the amount you need to bet or call at the time. Using Implied Odds is a way of figuring out whether calling or making a bet against your opponent is a good idea or not. Calculating Implied Odds in poker works in the same way as calculating Pot Odds, but Implied Odds take into consideration any future betting.
### What does Pot Odds in poker mean?
Pot Odds is the ratio of the current size of the pot to the cost of a call. For example, if there is \$4 in the pot and your opponent bets \$1, this will mean that you'll have to pay one fifth, or 20% of the pot to stay in. If your chances of winning are less than 20%, it's not a good move in the long term.
### What does Fold Equity mean?
Fold Equity refers to the equity you can expect to gain based on your opponent folding, so fold equity applies in cases where you are betting or raising. The formula to work out fold equity is:
• Fold equity = likelihood that opponent will fold x gain in equity if opponent folds.
### What is Implied Probabilty?
Implied Probability is an extension of Implied Odds. It is a conversion of traditional odds, in a ratio format, into a probability percentage.
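For example, a quick converter (a small sketch, not part of the original FAQ):

```python
def implied_probability(odds_against_to_1):
    """Convert 'X-to-1 against' odds into a probability percentage."""
    return 100 / (odds_against_to_1 + 1)

print(implied_probability(4))   # a 4-1 draw is about a 20% chance
print(implied_probability(10))  # 10-1 pot odds need only about a 9.1% chance to break even
```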
### How do you calculate Pot Odds?
To calculate pot odds, count the chips in the pot and compare that with the amount of chips you must pay to stay in the hand. E.g. if there are 100 chips in the pot and you must pay 10 to call, your pot odds are 10 to 1.
To use this knowledge to your advantage, establish the ratio of cards in the deck that you don’t need vs. the cards that you do need. Then, compare this ratio with the pot odds. If the pot odds are longer, or bigger, than the card odds, it′s a good idea to call. | 2,083 | 8,323 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.765625 | 4 | CC-MAIN-2024-38 | latest | en | 0.921634 |
https://propulsion2016.com/distance-from-earth/how-far-from-earth-is-venus.html | 1,656,689,070,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103941562.52/warc/CC-MAIN-20220701125452-20220701155452-00246.warc.gz | 511,888,985 | 12,533 | How Far From Earth Is Venus?
Venus is approximately 38 million miles (nearly 61 million kilometers) away from the Earth when it is at its closest. Although the two planets are frequently in close proximity to one another, Mercury, the innermost planet, actually spends more time in close proximity to Earth than Venus.
How many years would it take to get to Venus?
109 days, or 3.5 months, is the shortest time a spaceship has taken to go from Earth to Venus. The voyage that took the longest time was 198 days, or 6.5 months. The majority of voyages take between 120 and 130 days, or around 4 months.
How far is Venus from Earth right now?
The distance between Venus and the Earth: at this moment, the distance between Venus and Earth is 74,658,038 kilometers, which is equivalent to 0.499058 astronomical units.
How many light years does it take to get to Venus from Earth?
As a result, Venus is currently only about 0.0000079 light years away from Earth (a little over four light-minutes).
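The conversions behind these figures are straightforward; a small Python sketch using the distance quoted above:

```python
KM_PER_AU = 149_597_870.7
KM_PER_LIGHT_YEAR = 9.4607e12
LIGHT_SPEED_KM_S = 299_792.458

distance_km = 74_658_038  # Earth-Venus distance quoted in this article

print(distance_km / KM_PER_AU)               # ~0.499 astronomical units
print(distance_km / KM_PER_LIGHT_YEAR)       # ~7.9e-6 light years
print(distance_km / LIGHT_SPEED_KM_S / 60)   # ~4.2 light-minutes
```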
How long is a day on Venus?
A day on Venus is extraordinarily long: the planet takes about 243 Earth days to spin once on its axis, which is longer than its year of roughly 225 Earth days (a full solar day, from one sunrise to the next, works out to about 117 Earth days). Travel times across the solar system vary just as widely: the whole trip from Earth to Mars, for example, takes between 150 and 300 days, depending on the speed with which the spacecraft is launched, the alignment of the Earth and Mars, and the length of the voyage the spacecraft must complete to reach its destination. It all boils down to how much gasoline you're willing to burn to get there in the first place. More gasoline means less time on the road.
Can you walk on Venus?
Venus is a beautiful planet to walk around on. Because Venus is quite close in size to Earth in terms of surface area, walking on Venus would be very similar to walking on Earth in terms of sensation. Venus is the hottest planet in the Solar System, however, because heat is trapped in its dense atmosphere by the greenhouse effect.
What planet is 11 million miles from Earth?
Venus is the planet that is nearest to Earth (and also the one that is the most comparable in size). However, the distance between it and our planet is determined by the orbits of both bodies.
How hot is it on Venus?
Although Mercury is closer to the Sun than any other planet in our solar system, Venus is the warmest planet in our solar system. It is possible to melt lead at the surface temperatures of Venus, which are around 900 degrees Fahrenheit (475 degrees Celsius). The land is reddish in hue, and it is dotted with mountains that have been violently compressed and thousands of enormous volcanoes.
What is the closest planet to Earth?
Venus is not always Earth's nearest neighbor, however. Calculations and simulations have confirmed that Mercury is, on average, the planet that is closest to the Earth—and to every other planet in the solar system as well.
How long would it take to get to Pluto?
After taking off from Earth in January 2006 at a record-breaking 36,400 mph (58,580 km/h), the \$720 million New Horizons spacecraft is now on its journey to Pluto. But even if the probe moved at a breakneck rate, it would take 9.5 years to reach Pluto, which was almost 3 billion miles (5 billion kilometers) away from Earth on the day of its approach.
Do people age in space?
We all have a distinct way of measuring our experiences in terms of space and time. Due to the fact that space-time is not flat, but instead is curved, and it can be twisted by matter and energy, this is true. And for astronauts aboard the International Space Station, this implies that they will age only a smidgeon slower than folks on the ground. Because of the consequences of time dilation, this is the case.
Do planets get closer to the Sun?
Our planet was around 50,000 kilometers closer to the Sun approximately 4.5 billion years ago than it is today, and it will continue to grow farther distant from the Sun as the Sun continues to mature. During each and every orbit around our Sun, the planets grow ever less firmly connected to our Sun.
Related
Often asked: How Far Is Next Sun From Earth?
The Earth’s closest approach to the sun, known as perihelion, occurs in early January and is around 91 million miles (146 million km) away from the sun, or just shy of one astronomical unit. Aphelion is the distance between Earth and the sun at which it is at its farthest distant. It arrives in early […]
Hey Google How Far Away Is The Sun From The Earth?
Science fiction writers have referred to our region of space as the “Goldilocks Zone” for the reason that it looks to be just suitable for life. As previously stated, the average distance between the Earth and the Sun is around 93 million miles (150 million kilometers). That’s equal to one AU. Contents1 How long would […] | 1,059 | 4,808 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.578125 | 3 | CC-MAIN-2022-27 | latest | en | 0.955547 |
https://www.enotes.com/homework-help/a-survey-asks-if-the-husband-in-a-family-wants-2037852 | 1,586,221,320,000,000,000 | text/html | crawl-data/CC-MAIN-2020-16/segments/1585371662966.69/warc/CC-MAIN-20200406231617-20200407022117-00508.warc.gz | 909,147,613 | 11,347 | # A survey asks, "If the husband in a family wants children, but the wife decides that she does not want any children, is it all right for the wife to refuse to have children?" Of 743 subjects, 569 said yes. Find a 99% confidence interval for the population proportion who would say yes using a TI 83 calculator.
Eric Bizzell | Certified Educator
A survey question posed to 743 people gets 569 positive answers. We are asked to find a 99% confidence interval of the true population proportion.
(1) In a TI-83/84 calculator:
Hit Stat -> Tests -> A (1-PropZInt)
x=569 (number of "successes")
n=743 (sample size)
C-Level: 99 (given in problem)
<Enter> (Calculate)
Output:
1-PropZInt (type of test)
(.7258,.80583) (interval you seek)
`hat(p)=.7658142665` (p-hat is the sample proportion...
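The same interval can be reproduced without the calculator; a minimal Python sketch of the one-proportion z-interval:

```python
import math

x, n = 569, 743
z = 2.5758  # critical value for 99% confidence

p_hat = x / n
margin = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(p_hat)                            # ~0.7658
print(p_hat - margin, p_hat + margin)   # ~(0.7258, 0.8058), matching the TI-83 output
```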
(The entire section contains 206 words.) | 255 | 945 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.15625 | 3 | CC-MAIN-2020-16 | longest | en | 0.8709 |
https://cbsemathssolutions.in/page/9/ | 1,560,827,686,000,000,000 | text/html | crawl-data/CC-MAIN-2019-26/segments/1560627998605.33/warc/CC-MAIN-20190618023245-20190618045245-00122.warc.gz | 379,123,771 | 12,928 | Learn the important laws of exponents formulas. Exponents are also called as indices and also called as power. In exponents …
Learn about the alternate interior angles. When comes to meaning of these three words I think you know what is …
Learn the important properties of square. Everyone should know the square properties. In quadrilaterals this is the most important topic …
Learn about the perimeter of triangle and also learn the formula to find it. For any polygon the meaning of …
Learn the meaning and definition of sector of a circle. In circle, we have so many definitions to learn in …
Learn the tables from 21 to 30. Generally, we won’t learn 21 to 30 tables. We will learn the tables …
Did you learn the prime numbers from 100 to 200? I mean everyone knows prime numbers from 1 to 100. …
Learn and know about the corresponding angles. First, we will revise about an angle? The union of two rays with …
Do you know what are the irrational numbers and how to give the irrational numbers definition? Irrational numbers are one of …
Have you learnt the Pythagorean Theorem proof? Which is considered as an important theorem in mathematics. This theorem works only … | 261 | 1,174 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.0625 | 3 | CC-MAIN-2019-26 | latest | en | 0.931107 |
https://numbas.mathcentre.ac.uk/question/35021/praneetha-s-copy-sathasivampillai-s-copy-of-integration-calculating-the-area-under-a-curve/ | 1,722,987,991,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640523737.31/warc/CC-MAIN-20240806224232-20240807014232-00523.warc.gz | 345,565,409 | 220,455 | Try to answer the following questions here:
• What is this question used for?
• What does this question assess?
• What does the student have to do?
• How is the question randomised?
• Are there any implementation details that editors should be aware of?
### History
#### Checkpoint description
Describe what's changed since the last checkpoint.
#### Praneetha Singh6 years, 1 month ago
Created this as a copy of Kamila's copy of sathasivampillai's copy of Integration: Calculating the area under a curve.
Integration: Setting up the integral to calculate the area under a graph draft Lovkush Agarwal 13/02/2020 15:06
Integration: Calculating the area under a curve Ready to use Lovkush Agarwal 24/05/2023 05:38
Integration: Calculating the area under a curve. Needs integration by parts. draft Lovkush Agarwal 26/02/2020 09:06
Integration: Calculating the area under a curve. Need to re-write the numerator draft Lovkush Agarwal 23/04/2020 10:19
sathasivampillai's copy of Integration: Calculating the area under a curve draft sathasivampillai umakanthan 28/06/2018 01:08
sathasivampillai's copy of sathasivampillai's copy of Integration: Calculating the area under a curve draft sathasivampillai umakanthan 28/06/2018 01:19
Kamila's copy of sathasivampillai's copy of Integration: Calculating the area under a curve draft Kamila Yusufu 28/06/2018 05:17
Praneetha's copy sathasivampillai's copy of Integration: Calculating the area under a curve draft Praneetha Singh 28/06/2018 05:43
Integration: Calculating the area under a curve [L10] draft Abbi Mullins 24/07/2018 14:20
Graph2: Integration draft Joshua Calcutt 17/09/2020 00:22
Maria's copy of Integration: Setting up the integral to calculate the area under a graph draft Maria Aneiros 27/05/2019 07:20
Maria's copy of Integration: Calculating the area under a curve draft Maria Aneiros 27/05/2019 07:22
Maria's copy of Integration: Calculating the area under a curve. Needs integration by parts. draft Maria Aneiros 27/05/2019 07:21
Integration: Calculating the area under a curve draft Kevin Bohan 05/06/2019 11:34
Anna's copy of Maria's copy of Integration: Calculating the area under a curve draft Anna Strzelecka 11/07/2019 20:11
Anna's copy of Integration: Calculating the area under a curve draft Anna Strzelecka 12/07/2019 14:54
Definite integrals draft Anna Strzelecka 23/03/2020 15:18
Integration - Area under a curve 1 draft Kevin Bohan 19/09/2019 12:10
Musa's copy of 3 Definite integrals - 4 draft Musa Mammadov 27/06/2023 01:41
There are 53 other versions that you do not have access to.
This question is not used in any exams. | 2,187 | 9,785 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.6875 | 3 | CC-MAIN-2024-33 | latest | en | 0.813753 |
https://id.scribd.com/document/169081840/Pseudovector-Wikipedia-The-Free-Encyclopedia | 1,563,843,849,000,000,000 | text/html | crawl-data/CC-MAIN-2019-30/segments/1563195528635.94/warc/CC-MAIN-20190723002417-20190723024417-00279.warc.gz | 428,536,346 | 73,359 | Anda di halaman 1dari 6
# Pseudovector
In physics and mathematics, a pseudovector (or axial vector) is a quantity that transforms like a vector under a proper rotation, but gains an additional sign flip under an improper rotation such as a reflection. Geometrically it is the opposite, of equal magnitude but in the opposite direction, of its mirror image. This is as opposed to a true or polar vector (more formally, a contravariant vector), which on reflection matches its mirror image. In three dimensions the pseudovector p is associated with the cross product of two polar vectors a and b:[2]
p = a × b.
A loop of wire (black), carrying a current, creates a magnetic field (blue). If the position and current of the wire are reflected across the dotted line, the magnetic field it generates would not be reflected: Instead, it would be reflected and reversed. The position of the wire and its current are vectors, but the magnetic field is a pseudovector. [1]
The vector p calculated this way is a pseudovector. One example is the normal to an oriented plane. An oriented plane can be defined by two non-parallel vectors, a and b,[3] which can be said to span the plane. The vector a × b is a normal to the plane (there are two normals, one on each side; the right-hand rule will determine which), and is a pseudovector. This has consequences in computer graphics where it has to be considered when transforming surface normals.
A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and angular velocity. In mathematics pseudovectors are equivalent to three dimensional bivectors, from which the transformation rules of pseudovectors can be derived. More generally in n-dimensional geometric algebra pseudovectors are the elements of the algebra with dimension n − 1, written $\bigwedge^{n-1}\mathbb{R}^n$. The label 'pseudo' can be further generalized to pseudoscalars and pseudotensors, both of which gain an extra sign flip under improper rotations compared to a true scalar or tensor.
Contents
1 Physical examples 2 Details 2.1 Behavior under addition, subtraction, scalar multiplication 2.2 Behavior under cross products 2.3 Examples 3 The right-hand rule 4 Geometric algebra 4.1 Transformations in three dimensions 4.2 Note on usage 5 Notes 6 General references 7 See also
Physical examples
Physical examples of pseudovectors include the magnetic field, torque, vorticity, and the angular momentum. Often, the distinction between vectors and pseudovectors is overlooked, but it becomes important in understanding and exploiting the effect of symmetry on the solution to physical systems. For example, consider the case of an electrical current loop in the z = 0 plane, which has a magnetic field at z = 0 that is oriented in the z direction. This system is symmetric (invariant) under mirror reflections through the plane (an improper rotation), so the magnetic field should be unchanged by the
reflection. But reflecting the magnetic field through that plane naively appears to change its sign if it is viewed as a vector field; this contradiction is resolved by realizing that the mirror reflection of the field induces an extra sign flip because of its pseudovector nature, so the mirror flip in the end leaves the magnetic field unchanged as expected. As another example, consider the pseudovector angular momentum L = r × p. Driving in a car, and looking forward, each of the wheels has an angular momentum vector pointing to the left. If the world is reflected in a mirror which switches the left and right side of the car, the "reflection" of this angular momentum "vector" (viewed as an ordinary vector) points to the right, but the actual angular momentum vector of the wheel still points to the left, corresponding to the extra minus sign in the reflection of a pseudovector. This reflects the fact that the wheels are still turning forward. In comparison, the behaviour of a regular vector, such as the position of the car, is quite different. To the extent that physical laws would be the same if the universe were reflected in a mirror (equivalently, invariant under parity), the sum of a vector and a pseudovector is not meaningful. However, the weak force, which governs beta decay, does depend on the chirality of the universe, and in this case pseudovectors and vectors are added.
Each wheel of a car driving away from an observer has an angular momentum pseudovector pointing left. The same is true for the mirror image of the car.
Details
See also: Covariance and contravariance of vectors and Euclidean vector The definition of a "vector" in physics (including both polar vectors and pseudovectors) is more specific than the mathematical definition of "vector" (namely, any element of an abstract vector space). Under the physics definition, a "vector" is required to have components that "transform" in a certain way under a proper rotation: In particular, if everything in the universe were rotated, the vector would rotate in exactly the same way. (The coordinate system is fixed in this discussion; in other words this is the perspective of active transformations.) Mathematically, if everything in the universe undergoes a rotation described by a rotation matrix R, so that a displacement vector x is transformed to x′ = Rx, then any "vector" v must be similarly transformed to v′ = Rv. This important requirement is what distinguishes a vector (which might be composed of, for example, the x, y, and z-components of velocity) from any other triplet of physical quantities (For example, the length, width, and height of a rectangular box cannot be considered the three components of a vector, since rotating the box does not appropriately transform these three components.) (In the language of differential geometry, this requirement is equivalent to defining a vector to be a tensor of contravariant rank one.) The discussion so far only relates to proper rotations, i.e. rotations about an axis. However, one can also consider improper rotations, i.e. a mirror-reflection possibly followed by a proper rotation. (One example of an improper rotation is inversion.) Suppose everything in the universe undergoes an improper rotation described by the rotation matrix R, so that a position vector x is transformed to x′ = Rx. If the vector v is a polar vector, it will be transformed to v′ = Rv. If it is a pseudovector, it will be transformed to v′ = −Rv. The transformation rules for polar vectors and pseudovectors can be compactly stated as v′ = Rv (polar vector), v′ = (det R)(Rv) (pseudovector), where the symbols are as described above, and the rotation matrix R can be either proper or improper. The symbol det denotes determinant; this formula works because the determinant of proper and improper rotation matrices are +1 and -1, respectively.
## Behavior under addition, subtraction, scalar multiplication
Suppose v1 and v2 are known pseudovectors, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to
v3′ = v1′ + v2′ = (det R)(Rv1) + (det R)(Rv2) = (det R) R(v1 + v2) = (det R)(Rv3).
So v3 is also a pseudovector. Similarly one can show that the difference between two pseudovectors is a pseudovector, that the sum or difference of two polar vectors is a polar vector, that multiplying a polar vector by any real number yields another polar vector, and that multiplying a pseudovector by any real number yields another pseudovector. On the other hand, suppose v1 is known to be a polar vector, v2 is known to be a pseudovector, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to
v3′ = Rv1 + (det R)(Rv2).
Therefore, v3 is neither a polar vector nor a pseudovector. For an improper rotation, v3 does not in general even keep the same magnitude: |v3| = |v1 + v2|, but |v3′| = |v1 − v2|.
If the magnitude of v3 were to describe a measurable physical quantity, that would mean that the laws of physics would not appear the same if the universe was viewed in a mirror. In fact, this is exactly what happens in the weak interaction: Certain radioactive decays treat "left" and "right" differently, a phenomenon which can be traced to the summation of a polar vector with a pseudovector in the underlying theory. (See parity violation.)
## Behavior under cross products
For a rotation matrix R, either proper or improper, the following mathematical equation is always true:
(Rv1) × (Rv2) = (det R) R(v1 × v2),
where v1 and v2 are any three-dimensional vectors. (This equation can be proven either through a geometric argument or through an algebraic calculation, and is well known.) Suppose v1 and v2 are known polar vectors, and v3 is defined to be their cross product, v3 = v1 × v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to
v3′ = (Rv1) × (Rv2) = (det R) R(v1 × v2) = (det R)(Rv3).
Under inversion the two vectors change sign, but their cross product is invariant [black are the two original vectors, grey are the inverted vectors, and red is their mutual cross product].
So v3 is a pseudovector. Similarly, one can show:
polar vector × polar vector = pseudovector
pseudovector × pseudovector = pseudovector
polar vector × pseudovector = polar vector
pseudovector × polar vector = polar vector
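These sign rules are easy to check numerically. A small sketch (assuming NumPy is available) verifying that the cross product of two polar vectors picks up the det(R) factor under a reflection:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([-2.0, 0.5, 4.0])
R = np.diag([1.0, 1.0, -1.0])  # reflection through the z = 0 plane, det(R) = -1

lhs = np.cross(R @ a, R @ b)                   # transform first, then take the cross product
rhs = np.linalg.det(R) * (R @ np.cross(a, b))  # pseudovector rule: det(R) R (a x b)
print(np.allclose(lhs, rhs))                   # True
```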
Examples
From the definition, it is clear that a displacement vector is a polar vector. The velocity vector is a displacement vector (a polar vector) divided by time (a scalar), so is also a polar vector. Likewise, the momentum vector is the velocity vector (a polar vector) times mass (a scalar), so is a polar vector. Angular momentum is the cross product of a displacement (a polar vector) and momentum (a polar vector), and is therefore a pseudovector. Continuing this way, it is straightforward to classify any vector as either a pseudovector or polar vector.
## The right-hand rule
Above, pseudovectors have been discussed using active transformations. An alternate approach, more along the lines of passive transformations, is to keep the universe fixed, but switch "right-hand rule" with "left-hand rule" and vice-versa everywhere in physics, in particular in the definition of the cross product. Any polar vector (e.g., a translation vector) would be unchanged, but pseudovectors (e.g., the magnetic field vector at a point) would switch signs. Nevertheless, there would be no physical consequences, apart from in the parity-violating phenomena such as certain radioactive decays.[4]
Geometric algebra
In geometric algebra the basic elements are vectors, and these are used to build a hierarchy of elements using the definitions of products in this algebra. In particular, the algebra builds pseudovectors from vectors. The basic multiplication in the geometric algebra is the geometric product, denoted by simply juxtaposing two vectors as in ab. This product is expressed as:
ab = a · b + a ∧ b,
where the leading term is the customary vector dot product and the second term is called the wedge product. Using the postulates of the algebra, all combinations of dot and wedge products can be evaluated. A terminology to describe the various combinations is provided. For example, a multivector is a summation of k -fold wedge products of various k -values. A k -fold wedge product also is referred to as a k -blade. In the present context the pseudovector is one of these combinations. This term is attached to a different mulitvector depending upon the dimensions of the space (that is, the number of linearly independent vectors in the space). In three dimensions, the most general 2-blade or bivector can be expressed as a single wedge product and is a pseudovector.[5] In four dimensions, however, the pseudovectors are trivectors.[6] In general, it is a (n - 1)-blade, where n is the dimension of the space and algebra.[7] An n-dimensional space has n vectors and also n pseudovectors. Each pseudovector is formed from the outer (wedge) product of all but one of the n vectors. For instance, in four dimensions where the vectors are: {e 1, e 2, e 3, e 4}, the pseudovectors can be written as: {e 234, e 134, e 124, e 123}.
## Transformations in three dimensions
The transformation properties of the pseudovector in three dimensions have been compared to those of the vector cross product by Baylis.[8] He says: "The terms axial vector and pseudovector are often treated as synonymous, but it is quite useful to be able to distinguish a bivector from its dual." To paraphrase Baylis: Given two polar vectors (that is, true vectors) a and b in three dimensions, the cross product composed from a and b is the vector normal to their plane given by c = a × b. Given a set of right-handed orthonormal basis vectors { e }, the cross product is expressed in terms of its components as:
a × b = (a²b³ − a³b²)e₁ + (a³b¹ − a¹b³)e₂ + (a¹b² − a²b¹)e₃,
where superscripts label vector components. On the other hand, the plane of the two vectors is represented by the exterior product or wedge product, denoted by a ∧ b. In this context of geometric algebra, this bivector is called a pseudovector, and is the dual of the cross product.[9] The dual of e₁ is introduced as e₂₃ ≡ e₂e₃ = e₂ ∧ e₃, and so forth. That is, the dual of e₁ is the subspace perpendicular to e₁, namely the subspace spanned by e₂ and e₃. With this understanding,[10]
a ∧ b = (a²b³ − a³b²)e₂₃ + (a³b¹ − a¹b³)e₃₁ + (a¹b² − a²b¹)e₁₂.
For details see Hodge dual. Comparison shows that the cross product and wedge product are related by:
a ∧ b = i (a × b),
where i = e₁e₂e₃ is called the unit pseudoscalar.[11][12] It has the property:[13]
i² = −1.
Using the above relations, it is seen that if the vectors a and b are inverted by changing the signs of their components while leaving the basis vectors fixed, both the pseudovector and the cross product are invariant. On the other hand, if the components are fixed and the basis vectors e are inverted, then the pseudovector is invariant, but the cross product changes sign. This behavior of cross products is consistent with their definition as vector-like elements that change sign under transformation from a right-handed to a left-handed coordinate system, unlike polar vectors.
Note on usage
As an aside, it may be noted that not all authors in the field of geometric algebra use the term pseudovector, and some authors follow the terminology that does not distinguish between the pseudovector and the cross product.[14] However, because the cross product does not generalize beyond three dimensions,[15] the notion of pseudovector based upon the cross product also cannot be extended to higher dimensions. The pseudovector as the (n1)-blade of an n-dimensional space is not so restricted. Another important note is that pseudovectors, despite their name, are "vectors" in the common mathematical sense, i.e. elements of a vector space. The idea that "a pseudovector is different from a vector" is only true with a different and more specific definition of the term "vector" as discussed above.
Notes | 3,164 | 14,644 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.875 | 4 | CC-MAIN-2019-30 | latest | en | 0.928585 |
https://calcforme.com/electricity-cost | 1,702,324,858,000,000,000 | text/html | crawl-data/CC-MAIN-2023-50/segments/1700679516047.98/warc/CC-MAIN-20231211174901-20231211204901-00745.warc.gz | 181,104,426 | 5,040 | # Electricity Cost Calculator
## Calculate the electric cost of items or machines you run.
### Electrical Cost of Many Items
### Electricity Cost
• Calculate the electric cost of an item by entering the item's power consumption, its usage time per day, and the tariff per kWh (the cost of one kilowatt-hour "kWh").
• Formulas:
Converting watts and kilowatts
Kilowatts to Watts formula:
Watts (W) = KW x 1000
1 KW = 1 x 1000 = 1000 Watts (W)
Watts to Kilowatts formula:
Kilowatts (KW) = W ÷ 1000
100 W = 100 ÷ 1000 = 0.1 Kilowatts (KW)
KWh consumption formula:
KWh consumption (energy) = Power x Time
Example:
Power = 2 KW
Time = 3 hours
kWh = 2 x 3 = 6 KWh
KWh Cost formula:
Cost = KWh used x Tariff per KWh
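A minimal Python sketch combining the consumption and cost formulas (the worked example that follows can be used as a check):

```python
def daily_cost(power_kw: float, hours_per_day: float, tariff_per_kwh: float) -> float:
    """Daily running cost: energy (kWh) = power x time, cost = energy x tariff."""
    kwh = power_kw * hours_per_day
    return kwh * tariff_per_kwh

print(daily_cost(2, 3, 0.10))  # 2 kW for 3 h at $0.10/kWh -> $0.60
```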
Example:
Power = 2 KW
Time = 3 hours
So the kWh = 2 x 3 = 6 KWh
Tariff = \$0.10 per KWh
Cost = 6 KWh x \$0.10/KWh = \$0.60 | 300 | 895 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.859375 | 4 | CC-MAIN-2023-50 | latest | en | 0.680092 |
https://www.askiitians.com/forums/Algebra/22/55958/permutation-and-combinations.htm | 1,660,961,000,000,000,000 | text/html | crawl-data/CC-MAIN-2022-33/segments/1659882573876.92/warc/CC-MAIN-20220820012448-20220820042448-00751.warc.gz | 551,890,980 | 34,792 | # id a,b,c are natural numbers and are in arithmetic progressions and a+b+c=21.then find the possible values for a,b,c
58 Points
9 years ago
Since a, b, c are in AP, b = (a+c)/2,
so putting this in we get
a + c = 2b
so 2b + b = 21, which means 3b = 21
b = 7
Now simply solve for a and c; for example a = 6 and c = 8.
all the best!!!!!!!
souvik sonu roy
34 Points
9 years ago
a+c=2b
3b=21
b=7
a+c=14
a can be 1,2,3,4,5,6
c can be 13,12,11,10,9,8
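A quick brute-force check (a small Python sketch, not part of the original thread) confirms the full list of possibilities:

```python
triples = [(a, b, c)
           for a in range(1, 21)
           for b in range(1, 21)
           for c in range(1, 21)
           if a + b + c == 21 and 2 * b == a + c]  # natural numbers in arithmetic progression

print(len(triples))  # 13
print(triples)       # (1, 7, 13), (2, 7, 12), ..., (13, 7, 1)
```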
Manas Satish Bedmutha
22 Points
9 years ago
let a=b-d and c=b+d. Thus b=7. To find a+c=14, solutions, consider a=1 to 14 and crrespondingly c= 13 to 0. but 0 is not natural. So consider a= 1 to 13. Thus ther are 13 solutions, viz. (1,7,13)(2,7,12)(3,7,11)(4,7,10)(5,7,9)(6,7,8)(7,7,7)(8,7,6)(9,7,5)(10,7,4)(11,7,3)(12,7,2)(13,7,1) | 396 | 877 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.125 | 4 | CC-MAIN-2022-33 | latest | en | 0.646441 |
https://mathsolver.microsoft.com/en/solve-problem/0.5%20(%205%20-%207%20x%20)%20%3D%208%20-%20(%204%20x%20%2B%206%20) | 1,695,409,775,000,000,000 | text/html | crawl-data/CC-MAIN-2023-40/segments/1695233506421.14/warc/CC-MAIN-20230922170343-20230922200343-00388.warc.gz | 423,496,745 | 163,086 | 0, point, 5, left parenthesis, 5, minus, 7, x, right parenthesis, equals, 8, minus, left parenthesis, 4, x, plus, 6, right parenthesis
Solve for x
2.5-3.5x=8-\left(4x+6\right)
Use the distributive property to multiply 0.5 by 5-7x.
2.5-3.5x=8-4x-6
To find the opposite of 4x+6, find the opposite of each term.
2.5-3.5x=2-4x
Subtract 6 from 8 to get 2.
2.5-3.5x+4x=2 | 168 | 381 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.0625 | 4 | CC-MAIN-2023-40 | latest | en | 0.62848 |
https://www.makeoverfitness.com/daily-calories/3363-harris-benedict-formula | 1,722,820,239,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640417235.15/warc/CC-MAIN-20240804230158-20240805020158-00399.warc.gz | 696,174,040 | 6,262 | # Harris Benedict Formula Example
Here's a Harris Benedict Formula example of how to determine the amount of calories (BMR rate) you burn while resting.
WARNING: Use this equation only as an estimate.
Example 1
Susan is 45 years old; she weighs 245 pounds and is 5 foot 5 inches tall. She works a job where she sits most of the day. She never exercises. Based on these statistics, how many calories does Susan burn daily?
Step 1
Plug Susan's weight (245lbs), height (5 foot 5 inches), and age (45 yrs) into the formula below for women to find her BMR.
English BMR Formula
Women: BMR = 655 + (4.35 x weight in pounds) + (4.7 x height in inches) - (4.7 x age in years)
Men: BMR = 66 + (6.23 x weight in pounds) + (12.7 x height in inches) - (6.8 x age in years)
BMR=655 + (4.35 x weight in pounds) + (4.7 x height in inches) - (4.7 x age in years).
BMR=655 + (4.35 x 245) + (4.7 x 65) - (4.7 x 45)
BMR=655 + 1065.75 + 305.5 - 211.5
BMR= 1814.75
Step 2
Determine Susan's activity level using the chart below
Harris Benedict Formula
To determine your total daily calorie needs, multiply your BMR by the appropriate activity factor, as follows:
* If you are sedentary (little or no exercise): Calorie-Calculation = BMR x 1.2
* If you are lightly active (light exercise/sports 1-3 days/week): Calorie-Calculation = BMR x 1.375
* If you are moderately active (moderate exercise/sports 3-5 days/week): Calorie-Calculation = BMR x 1.55
* If you are very active (hard exercise/sports 6-7 days a week): Calorie-Calculation = BMR x 1.725
* If you are extra active (very hard exercise/sports & physical job or 2x training): Calorie-Calculation = BMR x 1.9
Step 3
Plug numbers into the formula
Since Susan doesn't exercise and is inactive, she is sedentary according to the chart above. Based on that information, multiply her BMR x 1.2.
Plug in the appropriate numbers
Susan's calories burned while at rest = SUSAN'S BMR X 1.2
Susan's calories burned while at rest = 1814.75 x 1.2 = 2177.7
Susan burns approximately 2177.7 calories a day at rest | 627 | 2,056 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.5 | 4 | CC-MAIN-2024-33 | latest | en | 0.837827 |
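A minimal Python sketch of the calculation walked through above (the constants and activity factors are the ones quoted in this example):

```python
def bmr_harris_benedict(weight_lb, height_in, age_yr, sex):
    """BMR via the English-unit Harris Benedict formula quoted above."""
    if sex == "female":
        return 655 + 4.35 * weight_lb + 4.7 * height_in - 4.7 * age_yr
    return 66 + 6.23 * weight_lb + 12.7 * height_in - 6.8 * age_yr

ACTIVITY_FACTORS = {
    "sedentary": 1.2, "light": 1.375, "moderate": 1.55,
    "very": 1.725, "extra": 1.9,
}

def daily_calories(weight_lb, height_in, age_yr, sex, activity):
    """BMR scaled by the activity factor from the chart above."""
    return bmr_harris_benedict(weight_lb, height_in, age_yr, sex) * ACTIVITY_FACTORS[activity]

# Susan: 245 lb, 5 ft 5 in (65 in), 45 years old, sedentary
print(daily_calories(245, 65, 45, "female", "sedentary"))  # ≈ 2177.7
```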
https://nrich.maths.org/public/topic.php?code=12&cl=2&cldcmpid=2793 | 1,571,089,576,000,000,000 | text/html | crawl-data/CC-MAIN-2019-43/segments/1570986655310.17/warc/CC-MAIN-20191014200522-20191014224022-00269.warc.gz | 628,077,930 | 9,501 | # Search by Topic
#### Resources tagged with Factors and multiples similar to Rabbit Run:
### There are 143 results
### Multiples Grid
##### Age 7 to 11 Challenge Level:
What do the numbers shaded in blue on this hundred square have in common? What do you notice about the pink numbers? How about the shaded numbers in the other squares?
### Got it for Two
##### Age 7 to 14 Challenge Level:
Got It game for an adult and child. How can you play so that you know you will always win?
### Have You Got It?
##### Age 11 to 14 Challenge Level:
Can you explain the strategy for winning this game with any target?
### Got It
##### Age 7 to 14 Challenge Level:
A game for two people, or play online. Given a target number, say 23, and a range of numbers to choose from, say 1-4, players take it in turns to add to the running total to hit their target.
### The Moons of Vuvv
##### Age 7 to 11 Challenge Level:
The planet of Vuvv has seven moons. Can you work out how long it is between each super-eclipse?
### Tiling
##### Age 7 to 11 Challenge Level:
An investigation that gives you the opportunity to make and justify predictions.
### It Figures
##### Age 7 to 11 Challenge Level:
Suppose we allow ourselves to use three numbers less than 10 and multiply them together. How many different products can you find? How do you know you've got them all?
### Seven Flipped
##### Age 7 to 11 Challenge Level:
Investigate the smallest number of moves it takes to turn these mats upside-down if you can only turn exactly three at a time.
### Factor Lines
##### Age 7 to 14 Challenge Level:
Arrange the four number cards on the grid, according to the rules, to make a diagonal, vertical or horizontal line.
##### Age 7 to 11 Challenge Level:
If you have only four weights, where could you place them in order to balance this equaliser?
### Multiplication Squares
##### Age 7 to 11 Challenge Level:
Can you work out the arrangement of the digits in the square so that the given products are correct? The numbers 1 - 9 may be used once and once only.
### Odds and Threes
##### Age 7 to 11 Challenge Level:
A game for 2 people using a pack of cards Turn over 2 cards and try to make an odd number or a multiple of 3.
### Number Tracks
##### Age 7 to 11 Challenge Level:
Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
### Multiplication Series: Number Arrays
##### Age 5 to 11
This article for teachers describes how number arrays can be a useful representation for many number concepts.
### Money Measure
##### Age 7 to 11 Challenge Level:
How can you use just one weighing to find out which box contains the lighter ten coins out of the ten boxes?
### Path to the Stars
##### Age 7 to 11 Challenge Level:
Is it possible to draw a 5-pointed star without taking your pencil off the paper? Is it possible to draw a 6-pointed star in the same way without taking your pen off?
### Down to Nothing
##### Age 7 to 11 Challenge Level:
A game for 2 or more people. Starting with 100, subtract a number from 1 to 9 from the total. You score for making an odd number, a number ending in 0 or a multiple of 6.
### What's in the Box?
##### Age 7 to 11 Challenge Level:
This big box multiplies anything that goes inside it by the same number. If you know the numbers that come out, what multiplication might be going on in the box?
### Tom's Number
##### Age 7 to 11 Challenge Level:
Work out Tom's number from the answers he gives his friend. He will only answer 'yes' or 'no'.
### Colour Wheels
##### Age 7 to 11 Challenge Level:
Imagine a wheel with different markings painted on it at regular intervals. Can you predict the colour of the 18th mark? The 100th mark?
### Flashing Lights
##### Age 7 to 11 Challenge Level:
Norrie sees two lights flash at the same time, then one of them flashes every 4th second, and the other flashes every 5th second. How many times do they flash together during a whole minute?
### Sweets in a Box
##### Age 7 to 11 Challenge Level:
How many different shaped boxes can you design for 36 sweets in one layer? Can you arrange the sweets so that no sweets of the same colour are next to each other in any direction?
### Multiplication Square Jigsaw
##### Age 7 to 11 Challenge Level:
Can you complete this jigsaw of the multiplication square?
### Round and Round the Circle
##### Age 7 to 11 Challenge Level:
What happens if you join every second point on this circle? How about every third point? Try with different steps and see if you can predict what will happen.
### American Billions
##### Age 11 to 14 Challenge Level:
Play the divisibility game to create numbers in which the first two digits make a number divisible by 2, the first three digits make a number divisible by 3...
### Mystery Matrix
##### Age 7 to 11 Challenge Level:
Can you fill in this table square? The numbers 2 -12 were used to generate it with just one number used twice.
### Fractions in a Box
##### Age 7 to 11 Challenge Level:
The discs for this game are kept in a flat square box with a square hole for each. Use the information to find out how many discs of each colour there are in the box.
### Divide it Out
##### Age 7 to 11 Challenge Level:
What is the lowest number which always leaves a remainder of 1 when divided by each of the numbers from 2 to 10?
### Ducking and Dividing
##### Age 7 to 11 Challenge Level:
Your vessel, the Starship Diophantus, has become damaged in deep space. Can you use your knowledge of times tables and some lightning reflexes to survive?
### Factor-multiple Chains
##### Age 7 to 11 Challenge Level:
Can you see how these factor-multiple chains work? Find the chain which contains the smallest possible numbers. How about the largest possible numbers?
### What Is Ziffle?
##### Age 7 to 11 Challenge Level:
Can you work out what a ziffle is on the planet Zargon?
### A First Product Sudoku
##### Age 11 to 14 Challenge Level:
Given the products of adjacent cells, can you complete this Sudoku?
### Ben's Game
##### Age 11 to 14 Challenge Level:
Ben passed a third of his counters to Jack, Jack passed a quarter of his counters to Emma and Emma passed a fifth of her counters to Ben. After this they all had the same number of counters.
### Surprising Split
##### Age 7 to 11 Challenge Level:
Does this 'trick' for calculating multiples of 11 always work? Why or why not?
### Music to My Ears
##### Age 7 to 11 Challenge Level:
Can you predict when you'll be clapping and when you'll be clicking if you start this rhythm? How about when a friend begins a new rhythm at the same time?
### Give Me Four Clues
##### Age 7 to 11 Challenge Level:
Four of these clues are needed to find the chosen number on this grid and four are true but do nothing to help in finding the number. Can you sort out the clues and find the number?
### Factors and Multiples Game for Two
##### Age 7 to 14 Challenge Level:
Factors and Multiples game for an adult and child. How can you make sure you win this game?
### A Square Deal
##### Age 7 to 11 Challenge Level:
Complete the magic square using the numbers 1 to 25 once each. Each row, column and diagonal adds up to 65.
### Multiply Multiples 3
##### Age 7 to 11 Challenge Level:
Have a go at balancing this equation. Can you find different ways of doing it?
### Multiply Multiples 2
##### Age 7 to 11 Challenge Level:
Can you work out some different ways to balance this equation?
### Multiply Multiples 1
##### Age 7 to 11 Challenge Level:
Can you complete this calculation by filling in the missing numbers? In how many different ways can you do it?
### Curious Number
##### Age 7 to 11 Challenge Level:
Can you order the digits from 1-3 to make a number which is divisible by 3 so when the last digit is removed it becomes a 2-figure number divisible by 2, and so on?
### Three Dice
##### Age 7 to 11 Challenge Level:
Investigate the sum of the numbers on the top and bottom faces of a line of three dice. What do you notice?
### Light the Lights Again
##### Age 7 to 11 Challenge Level:
Each light in this interactivity turns on according to a rule. What happens when you enter different numbers? Can you find the smallest number that lights up all four lights?
### What Two ...?
##### Age 7 to 11 Short Challenge Level:
56 406 is the product of two consecutive numbers. What are these two numbers?
### Times Tables Shifts
##### Age 7 to 11 Challenge Level:
In this activity, the computer chooses a times table and shifts it. Can you work out the table and the shift each time?
### Venn Diagrams
##### Age 5 to 11 Challenge Level:
Use the interactivities to complete these Venn diagrams.
### Crossings
##### Age 7 to 11 Challenge Level:
In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest?
### Product Sudoku
##### Age 11 to 14 Challenge Level:
The clues for this Sudoku are the product of the numbers in adjacent squares.
### How Old Are the Children?
##### Age 11 to 14 Challenge Level:
A student in a maths class was trying to get some information from her teacher. She was given some clues and then the teacher ended by saying, "Well, how old are they?" | 2,223 | 9,485 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.09375 | 4 | CC-MAIN-2019-43 | latest | en | 0.87451 |
http://www.ck12.org/arithmetic/Decimals-as-Mixed-Numbers/lesson/Decimals-as-Mixed-Numbers/ | 1,448,468,720,000,000,000 | text/html | crawl-data/CC-MAIN-2015-48/segments/1448398445219.14/warc/CC-MAIN-20151124205405-00239-ip-10-71-132-137.ec2.internal.warc.gz | 346,485,134 | 33,027 |
# Decimals as Mixed Numbers
## Whole number and fraction with denominator of 10, 100 or 1000
Credit: Mike Mozart
Source: https://www.flickr.com/photos/jeepersmedia/15905877477/
Henry is working on building a dog house. He needs to get lumber that is 6 inches wide and 1.5 inches thick. He goes to the hardware store and sees that they have 6 by $1\frac{1}{4}$, 6 by $1\frac{1}{2}$, and 6 by $\frac{3}{4}$. Which one does Henry need?
In this concept, you will learn to convert decimals to mixed numbers.
### Guidance
Some decimal numbers represent both a part and a whole. These decimal numbers can be written as mixed numbers. The decimal number must have both a whole and a part to be written as a mixed number. The mixed number and the decimal are equal because they both have the same value.
Here is a decimal number.
4.5
Let’s write this decimal in a place value chart.
| Tens | Ones | Decimal Point | Tenths | Hundredths | Thousandths | Ten-Thousandths |
|------|------|---------------|--------|------------|-------------|-----------------|
|      | 4    | .             | 5      |            |             |                 |
This decimal number has 4 ones and 5 tenths. The 4 represents the wholes. The 5 tenths represents the fraction. The five is the numerator and the tenths is the denominator.
Next, check and see if the fraction can be simplified. In this case, five-tenths can be simplified to one-half.
$4.5$ can be written as $4\frac{1}{2}$.
A decimal value can only be expressed one way. However, many fractions can be written to express the same value.
$0.75$ can be written as $\frac{75}{100}$. You can make an equivalent fraction that has the same value.
Simplify $\frac{75}{100}$: it reduces to $\frac{3}{4}$.
You can keep on creating equivalent fractions that have the same value as $0.75$.
Finding equivalent fractions for mixed numbers is similar. The whole number stays the same, but the fraction can vary.
Here is a decimal number.
4.56
Convert the decimal number to a mixed number and find equivalent fractions.
First, convert the decimal to a mixed number. $4.56$ is read as four and fifty-six hundredths: the four is the whole number, the fifty-six is the numerator, and the denominator is hundredths.
Then, simplify the fraction part of this mixed number to get another mixed number that is equivalent to the one above. The greatest common factor of 56 and 100 is 4, so $4\frac{56}{100}$ simplifies to $4\frac{14}{25}$.
### Guided Practice
Convert the following decimal to a mixed number in simplest form.
6.55
First, convert the decimal to a mixed number. $6.55$ is read as six and fifty-five hundredths. 6 is the whole number, 55 is the numerator, and 100 is the denominator.
Then, write the fraction in simplest form. The GCF of 55 and 100 is 5.
$6.55$ is written as $6\frac{11}{20}$ in simplest form.
### Examples
Convert each decimal to a mixed number in simplest form.
#### Example 1
7.8
First, convert the decimal to a mixed number. $7.8$ is 7 and 8 tenths.
Then, write the fraction in simplest form. The GCF of 8 and 10 is 2.
$7.8$ is written as $7\frac{4}{5}$ in simplest form.
#### Example 2
4.45
First, convert the decimal to a mixed number. $4.45$ is 4 and 45 hundredths.
Then, write the fraction in simplest form. The GCF of 45 and 100 is 5.
$4.45$ is written as $4\frac{9}{20}$ in simplest form.
#### Example 3
2.25
First, convert the decimal to a mixed number. $2.25$ is 2 and 25 hundredths.
Then, write the fraction in simplest form. The GCF of 25 and 100 is 25.
$2.25$ is written as $2\frac{1}{4}$ in simplest form.
Credit: Cello Pics
Source: https://www.flickr.com/photos/mtip/4848641881/
Remember Henry building a dog house?
He needs lumber that is 6 inches wide and $1.5$ inches thick. Convert $1.5$ inches into a fraction to find the lumber he needs.
First, convert the decimal to a fraction. $1.5$ is 1 and 5 tenths.
Then, write the fraction in simplest form. The GCF of 5 and 10 is 5, so $1\frac{5}{10}$ simplifies to $1\frac{1}{2}$.
Henry needs to buy the 6 by $1\frac{1}{2}$ inch pieces of wood.
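A small Python sketch of the same conversion using the standard library (the example values are the ones from this lesson):

```python
from fractions import Fraction

def decimal_to_mixed_number(value):
    """Return (whole, numerator, denominator) for a decimal written as a mixed number in simplest form."""
    frac = Fraction(str(value))                      # exact fraction, e.g. 6.55 -> 131/20
    whole, remainder = divmod(frac.numerator, frac.denominator)
    return whole, remainder, frac.denominator

print(decimal_to_mixed_number(4.5))    # (4, 1, 2)   -> 4 1/2
print(decimal_to_mixed_number(6.55))   # (6, 11, 20) -> 6 11/20
print(decimal_to_mixed_number(1.5))    # (1, 1, 2)   -> 1 1/2
```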
### Explore More
Convert each decimal to a mixed number in simplest form.
1. 3.5
2. 2.4
3. 13.2
4. 25.6
5. 3.45
6. 7.17
7. 18.18
8. 9.20
9. 7.65
10. 13.11
11. 7.25
12. 9.75
13. 10.10
14. 4.33
15. 8.22
### Vocabulary Language: English
Decimal
In common use, a decimal refers to part of a whole number. The numbers to the left of a decimal point represent whole numbers, and each number to the right of a decimal point represents a fractional part of a power of one-tenth. For instance: The decimal value 1.24 indicates 1 whole unit, 2 tenths, and 4 hundredths (commonly described as 24 hundredths).
fraction
A fraction is a part of a whole. A fraction is written mathematically as one value on top of another, separated by a fraction bar. It is also called a rational number.
Mixed Number
A mixed number is a number made up of a whole number and a fraction, such as $4\frac{3}{5}$. | 1,738 | 5,724 | {"found_math": true, "script_math_tex": 32, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 1, "texerror": 0} | 4.6875 | 5 | CC-MAIN-2015-48 | longest | en | 0.836959 |
http://www.caclubindia.com/experts/calculation-of-esi-pf-271885.asp | 1,477,254,298,000,000,000 | text/html | crawl-data/CC-MAIN-2016-44/segments/1476988719416.57/warc/CC-MAIN-20161020183839-00277-ip-10-171-6-4.ec2.internal.warc.gz | 360,452,761 | 19,396 | # Calculation of ESI & PF
This query is : Open
Querist : Anonymous (Querist) 06 November 2009
Dear Sir
I want to know Calculation and Accounting Entries of PF and ESI
Thanking You:-
Ram Avtar Singh (Expert)
07 November 2009
PF is deducted on basic salary, i.e. Basic + DA, where it is Rs. 15,000/- or less; above that limit, PF deduction is not mandatory.
1) Employee – 12 % (of Basic + DA & Food concession allowance & retaining allowance, if any)
2) Employer – 13.61 % (of Basic + DA & Food concession allowance & retaining allowance, if any)
[ 13.61 % = 3.67 % PF + 8.33 % Pension Scheme + 1.10 % Admin. Charges of PF + 0.5 % EDLI + 0.01 % Admin Charges of EDLI ]
EDLI - Employee deposit link insurance
The maximum ceiling limit of PF - Rs.15000/-from 01/09/2014
If the basic + DA exceeds 15000/- than the contributions is optional. Some company may have their own company policies.
The provident fund contribution towards the employer's side is 13.61%:
1. Employers Contribution
2.EPF A/c No.1 - 3.67%
3.EPF - Admn Charges - 1.1%
4.Pension Fund A/c No.10 - 8.33%
5.EDLI A/c No.21 - 0.5%
6.EDLI - Admn Charges - 0.01%
ESIC calculation:
-------------------
In this ESIC, it includes the medical benefit both for the employee and employer.
It has been calculated on the basic of gross pay per month and maximum limit is upto Rs.15000/- p.m
Employee side - 1.75% and Employer side - 4.75%.
So if the gross of an employee is 8000/- p.m his contribution would be 8000*1.75% = 140/-
Employer 8000*4.75% = 380/-
Therefore Net pay = Gross pay - Total deductions
1. Those who are getting 15000/- gross per month will not be applicable under ESIC act.
2. 20 eligible employees to get registered in ESIC
3. Eligible employees means those who are getting gross pay upto 15000/- or less per month.
Apart from that there is a tax deduction., it includes the Income & professional tax.
CTC means cost to the company.i.e .what are all the expenses incurred by the Company for any of its employee for a particular period(monthly/yearly)
gross pay + employers pf+employers ESI + bonus = CTC
i.e. the salary payable and other statutory benefits payable by the company.
CTC
-----
CTC is cost to company and the components are
Basic
+HRA
+CONVEYANCE
+MOBILE REIMBURSHMENT
+MEDICAL reimburshment
+All allowances
+LTA
+employer cotri of PF
+Employer Cotri towards ESI
+Total variable incentives
+Perks & benefits
+ insurance Premium (in case of Group insurance)
Gratuity calculation
It is deposited @ 4.81% of Basic per month.
After completing 5 years of service one may claim gratuity at the time of separation from the organisation, and it is paid @ 15 days of salary per year of service...
For example, for 6 years of service one's gratuity will be calculated with this formula -
EPFO has cut the administrative fee charged from employers effective from 1st January 2015.
EPF administrative charges:
- Existing rate: 1.10 % of total EPF salary (minimum Rs 5 in case of a non-contributory member)
- New rate: 0.85 % of total EPF salary (minimum Rs 75 per month for a non-functional establishment having no contributory member; minimum Rs 500 for contributory members)
EDLI administrative charges:
- Existing rate: 0.01 % of total EDLI salary (minimum Rs 2 in case of a non-contributory member)
- New rate: 0.01 % of total EDLI salary (minimum Rs 25 per month for a non-functional establishment having no contributory member; minimum Rs 200 for contributory members)
Ki\$hor B (Expert)
19 July 2012
bookmarked!
satpal (Expert)
14 September 2012
Sh. Ram Avtar Ji Thanks for such a nice information ,
However I would like one correction in your record .i.e .
Max. Limit for ESI is Rs. 15000.00 now instead of Rs. 10000.00
Over all I am fully agreed with the information provided by you.
Thanks & Regards
CA PRAVEEN SINGH (Expert)
20 November 2012
Agree with above..
CA AYUSH AGRAWAL (Expert)
08 December 2012
wow...what a info by ramavatarji.......
PANKAJ KUMAR (Expert)
06 March 2014
Nice information provided
keep it on ....!
Anbuselvam (Expert)
09 September 2015
From 01/09/2014, If Basic exceeds 15000, then PF contribution is optional. In this case, whether EDLI benefit will be applicable for those who we are paying PF (optional) on more than 15000 basic+DA.
Eg: One employee basic pay is 25,000 and joined in Dec 2014, company is deducting PF 3,000 on monthly basis. Will he gets EDLI benefit?
Articles Forum News Experts Exams Share Files Income Tax Accounts Career Corporate Law Service Tax Video Judgements Rewards Top Members Events Albums Find Friends Featured Feed Scorecard Bookmarks Mock Test Poll Notification Knowledge Finder Coaching Institutes Trainee Corner Jobs | 1,378 | 5,130 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.78125 | 3 | CC-MAIN-2016-44 | longest | en | 0.890635 |
https://nl.mathworks.com/matlabcentral/cody/problems/9-who-has-the-most-change/solutions/194231 | 1,610,813,148,000,000,000 | text/html | crawl-data/CC-MAIN-2021-04/segments/1610703506697.14/warc/CC-MAIN-20210116135004-20210116165004-00278.warc.gz | 475,361,007 | 17,179 | Cody
# Problem 9. Who Has the Most Change?
Solution 194231
Submitted on 20 Jan 2013 by Vieniava
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
%% a = [1 2 1 15]; b = 1; assert(isequal(most_change(a),b))
i = 1
2 Pass
%% a = [ 1 2 1 15; 0 8 5 9]; b = 2; assert(isequal(most_change(a),b))
i = 2
3 Pass
%% a = [ 1 22 1 15; 12 3 13 7; 10 8 23 99]; b = 3; assert(isequal(most_change(a),b))
i = 3
4 Pass
%% a = [ 1 0 0 0; 0 0 0 24]; b = 1; assert(isequal(most_change(a),b))
i = 1
5 Pass
%% a = [ 0 1 2 1; 0 2 1 1]; c = 1; assert(isequal(most_change(a),c))
i = 1
6 Pass
%% % There is a lot of confusion about this problem. Watch this. a = [0 1 0 0; 0 0 1 0]; c = 2; assert(isequal(most_change(a),c)) % Now go back and read the problem description carefully.
i = 2
7 Pass
%% a = [ 2 1 1 1; 1 2 1 1; 1 1 2 1; 1 1 1 2; 4 0 0 0]; c = 5; assert(isequal(most_change(a),c))
i = 5
Start Hunting! | 470 | 1,141 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.296875 | 3 | CC-MAIN-2021-04 | latest | en | 0.676748 |
http://perplexus.info/show.php?pid=9525&cid=54591 | 1,550,584,141,000,000,000 | text/html | crawl-data/CC-MAIN-2019-09/segments/1550247490107.12/warc/CC-MAIN-20190219122312-20190219144312-00230.warc.gz | 198,028,823 | 4,377 |
perplexus dot info
Possible or not? (Posted on 2015-01-17)
Prove or disprove the following:
For any integer number N there exists at least one integer number M, such that the decimal presentation of M*N needs only two distinct digits.
Warming up — Comment 6 of 6
It helps to have:
N = n0 + 10n1 + 100n2 + ...
M = m0 + 10m1 + 100m2 + ...
where the lowercase ms and ns are integers between 0 and 9. Then, the last two digits of U = NM are given by
u0 = n0m0 mod 10 and
u1 = n0m1 + n1m0 + floor(n0m0/10) mod 10
It is good practice to look into the circumstances under which m0 and m1 can be chosen so that u1 = u0. This implies:
m1n0 = n0m0 - m0n1 - floor(n0m0/10) mod 10
Now, if n0 is relatively prime to 10 (n0 = 1, 3, 7, or 9) then we have the fortuitous circumstance that, by varying m1, we can reach all possible values mod 10, so we are also completely free in how we pick m0.
Posted by FrankM on 2015-02-14 13:15:51
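A quick brute-force check of the claim for a few sample values of N (not part of FrankM's argument, just an illustration in Python):

```python
def two_digit_multiple(n, max_m=10**6):
    """Smallest positive M <= max_m such that M*n uses at most two distinct digits, or None."""
    for m in range(1, max_m + 1):
        if len(set(str(m * n))) <= 2:
            return m
    return None

# Print the smallest M found and the resulting two-digit multiple for each sample N
for n in (7, 13, 123, 2019):
    m = two_digit_multiple(n)
    print(n, m, None if m is None else m * n)
```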
Forums (1) | 357 | 1,123 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.25 | 3 | CC-MAIN-2019-09 | latest | en | 0.842798 |
http://mathhelpforum.com/trigonometry/85219-help-trigonometric-problem.html | 1,481,418,359,000,000,000 | text/html | crawl-data/CC-MAIN-2016-50/segments/1480698543614.1/warc/CC-MAIN-20161202170903-00500-ip-10-31-129-80.ec2.internal.warc.gz | 170,904,386 | 9,845 | # Thread: help for a trigonometric problem
1. ## help for a trigonometric problem
Can anyone please give me some idea for this problem?
The instantaneous power, p, in an electric circuit is given by p = iv,
where v is the voltage and i is the current.
Calculate the maximum value of power in the circuit if v = 0.02 sin(100πt) volts and i = 0.6 sin(100πt + π/4) amp.
Calculate the first time that the power reaches a maximum value.
2. Hi
$p = iv=0.02 \sin(100 \pi t) \:0.6 \sin\left(100 \pi t+ \frac{\pi}{4}\right)$
Using
$\sin a \:\sin b = \frac12\left(\cos(a-b)-\cos(a+b)\right)$
Spoiler:
$p = 0.006 \:\left(\cos\left(\frac{\pi}{4}\right) - \cos\left(200 \pi t+ \frac{\pi}{4}\right)\right)$
$p = 0.003 \:\sqrt{2} - 0.006\:\cos\left(200 \pi t+ \frac{\pi}{4}\right)$
The maximum is obtained when cos = -1
$p_{max} = 0.003 \:\sqrt{2} + 0.006 = 0.010\: W$
It is obtained for $200 \pi t+ \frac{\pi}{4} = \pi \implies t = \frac{3}{800} = 0.00375\:s$ | 371 | 1,001 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 6, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.09375 | 4 | CC-MAIN-2016-50 | longest | en | 0.662223 |
https://mycbseguide.com/blog/ncert-solutions-class-11-maths-exercise-6-2/ | 1,638,384,731,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964360881.12/warc/CC-MAIN-20211201173718-20211201203718-00541.warc.gz | 484,329,188 | 48,898 | # NCERT Solutions class-11 Maths Exercise 6.2
Exercise 6.2
Solve the following inequalities graphically in the two-dimensional plane:
1.
Ans. Given:
Table of values satisfying the equation
1 2 4 3
Putting (0, 0) in the given inequality,
0 < 5
which is true.
Therefore, the half plane of the given inequality is towards the origin.
2.
Ans. Given:
Table of values satisfying the equation
1 2 4 2
Putting (0, 0) in the given inequality,
which is false.
Therefore, the half plane of the given inequality is away from the origin.
3.
Ans. Given:
Table of values satisfying the equation
0 4 3 0
Putting (0, 0) in the given inequality,
which is true.
Therefore, the half plane of the given inequality is towards the origin.
4.
Ans. Given:
Table of values satisfying the equation
5 6 2 4
Putting (0, 0) in the given inequality,
which is true.
Therefore, the half plane of the given inequality is towards the origin.
5.
Ans. Given:
Table of values satisfying the equation
2 3 1 2
Putting (0, 0) in the given inequality,
which is true.
Therefore, the half plane of the given inequality is towards the origin.
6.
Ans. Given:
Table of values satisfying the equation
6 9 2 4
Putting (0, 0) in the given inequality,
which is false.
Therefore, the half plane of the given inequality is away from the origin.
7.
Ans. Given:
Table of values satisfying the equation
2 0 0
Putting (0, 0) in the given inequality,
which is true.
Therefore, the half plane of the given inequality is towards the origin.
8.
Ans. Given:
Table of values satisfying the equation
0 0 10
Putting (0, 0) in the given inequality,
which is true.
Therefore, the half plane of the given inequality is towards the origin.
9.
Ans. Given:
Table of values satisfying the equation
Putting (0, 0) in the given inequality,
which is false.
Therefore, the half plane of the given inequality is away from the origin.
10.
Ans. Given:
Putting (0, 0) in the given inequality,
which is true; therefore, the half plane of the given inequality is towards the origin.
### 4 thoughts on “NCERT Solutions class-11 Maths Exercise 6.2”
1. Nice
Solution
2. Very nice solution | 572 | 2,067 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.875 | 4 | CC-MAIN-2021-49 | longest | en | 0.874959 |
http://support.minitab.com/en-us/minitab/17/topic-library/basic-statistics-and-graphs/introductory-concepts/basic-concepts/central-limit-theorem/ | 1,496,090,880,000,000,000 | text/html | crawl-data/CC-MAIN-2017-22/segments/1495463612553.95/warc/CC-MAIN-20170529203855-20170529223855-00171.warc.gz | 430,756,928 | 3,690 | # The central limit theorem: The means of large, random samples are approximately normal
The central limit theorem is a fundamental theorem of probability and statistics. The theorem states that the distribution of x̄, which is the mean of a random sample from a population with finite variance, is approximately normally distributed when the sample size is large, regardless of the shape of the population's distribution. Many common statistical procedures require data to be approximately normal, but the central limit theorem lets you apply these useful procedures to populations that are strongly nonnormal. How large the sample size must be depends on the shape of the original distribution. If the population's distribution is symmetric, a sample size of 5 could yield a good approximation; if the population's distribution is strongly asymmetric, a larger sample size of 50 or more is necessary. The following graphs show examples of how the distribution affects the sample size that you need.
A population that follows a uniform distribution is symmetric but strongly nonnormal, as the first histogram demonstrates. However, the distribution of 1000 sample means (n=5) from this population is approximately normal because of the central limit theorem, as the second histogram demonstrates. This histogram of sample means includes a superimposed normal curve to illustrate its normality.
A population that follows an exponential distribution is asymmetric and nonnormal, as the first histogram demonstrates. However, the distribution of sample means from 1000 samples of size 50 from this population is approximately normal because of the central limit theorem, as the second histogram demonstrates. This histogram of sample means includes a superimposed normal curve to illustrate its normality.
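A short Python sketch of the same experiment (sample means from a uniform and an exponential population; the sample sizes 5 and 50 mirror the examples above):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(0)

# 1000 sample means of size 5 from a uniform population
uniform_means = rng.uniform(0, 1, size=(1000, 5)).mean(axis=1)

# 1000 sample means of size 50 from an exponential population
exponential_means = rng.exponential(1.0, size=(1000, 50)).mean(axis=1)

# Both collections of means are close to normal; their skewness is much smaller
# than the skewness of the parent populations.
print(skew(uniform_means), skew(exponential_means))
```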
By using this site you agree to the use of cookies for analytics and personalized content. Read our policy | 345 | 1,913 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.53125 | 4 | CC-MAIN-2017-22 | latest | en | 0.942172 |
https://psichologyanswers.com/library/lecture/read/248019-what-is-n-in-a-survey | 1,675,687,880,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764500339.37/warc/CC-MAIN-20230206113934-20230206143934-00606.warc.gz | 486,407,237 | 7,684 | # What is N in a survey?
## What is N in a survey?
So what exactly is "a large number?" For a 95% confidence level (which means that there is only a 5% chance of your sample results differing from the true population average), a good estimate of the margin of error (or confidence interval) is given by 1/√N, where N is the number of participants or sample size (Niles, ...
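As a rough illustration of the 1/√N estimate quoted above (Python, illustrative sample sizes):

```python
import math

def margin_of_error(n):
    """Approximate 95% margin of error for a survey with n respondents, using the 1/sqrt(N) rule of thumb."""
    return 1 / math.sqrt(n)

for n in (100, 400, 1000, 2000):
    print(n, f"{margin_of_error(n):.1%}")   # 10.0%, 5.0%, 3.2%, 2.2%
```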
## Is sample proportion the same as sample mean?
The sample proportion is a random variable: it varies from sample to sample in a way that cannot be predicted with certainty. Viewed as a random variable it will be written ˆP. It has a mean μˆP and a standard deviation σˆP. ... In the same way the sample proportion ˆp is the same as the sample mean ˉx.
## What does a proportion mean?
1 : harmonious relation of parts to each other or to the whole : balance, symmetry. 2a : proper or equal share each did her proportion of the work. b : quota, percentage. 3 : the relation of one part to another or to the whole with respect to magnitude, quantity, or degree : ratio.
p
## What is population proportion in sample size?
p′ = x / n where x represents the number of successes and n represents the sample size. The variable p′ is the sample proportion and serves as the point estimate for the true population proportion. The variable p′ has a binomial distribution that can be approximated with the normal distribution shown here.
## How do you select a sample from a population?
Methods of sampling from a population
1. Simple random sampling. In this case each individual is chosen entirely by chance and each member of the population has an equal chance, or probability, of being selected. ...
2. Systematic sampling. ...
3. Stratified sampling. ...
4. Clustered sampling. ...
5. Convenience sampling. ...
6. Quota sampling. ...
7. Judgement (or Purposive) Sampling. ...
8. Snowball sampling.
## How do you determine if there is a statistically significant difference?
Look up the normal distribution in a statistics table. Statistics tables can be found online or in statistics textbooks. Find the value for the intersection of the correct degrees of freedom and alpha. If this value is less than or equal to the chi-square value, the data is statistically significant.
200 samples
## What is a good sample size for qualitative research?
5 to 50 participants
## Why is sample size important in research?
What is sample size and why is it important? Sample size refers to the number of participants or observations included in a study. ... The size of a sample influences two statistical properties: 1) the precision of our estimates and 2) the power of the study to draw conclusions.
## Does sample size matter in qualitative research?
Qualitative analyses typically require a smaller sample size than quantitative analyses. Qualitative sample sizes should be large enough to obtain enough data to sufficiently describe the phenomenon of interest and address the research questions.
## What sample size is used in quantitative research?
If the research has a relational survey design, the sample size should not be less than 30. Causal-comparative and experimental studies require more than 50 samples. In survey research, 100 samples should be identified for each major sub-group in the population and between 20 to 50 samples for each minor sub-group.
## What is replicable in quantitative research?
Replicability means obtaining consistent results across studies aimed at answering the same scientific question using new data or other new computational methods. One typically expects reproducibility in computational results, but expectations about replicability are more nuanced.
## What is sampling procedure in quantitative research?
A researcher divides a study population into relevant subgroups then draws a sample from each subgroup. ... A researcher begins by sampling groups of population elements and then selects elements from within those groups. A cluster sampling technique in which each cluster is given a chance of selection based on its size.
## How do you determine sample size in quantitative research?
How to Determine the Sample Size in a Quantitative Research Study
1. Choose an appropriate significance level (alpha value). An alpha value of p = . ...
2. Select the power level. Typically a power level of . ...
3. Estimate the effect size. Generally, a moderate to large effect size of 0. | 904 | 4,420 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.65625 | 4 | CC-MAIN-2023-06 | latest | en | 0.919729 |
http://applmathmech.cqjtu.edu.cn/en/article/doi/10.3879/j.issn.1000-0887.2011.01.009 | 1,718,542,690,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861659.47/warc/CC-MAIN-20240616105959-20240616135959-00205.warc.gz | 2,123,475 | 14,089 | ZHANG Can-hui, WANG Dong-dong, LI Tong-shan. Orthogonal Basic Deformation Mode Method for Zero-Energy Mode Suppression of Hybrid Stress Element[J]. Applied Mathematics and Mechanics, 2011, 32(1): 79-92. doi: 10.3879/j.issn.1000-0887.2011.01.009
Citation: ZHANG Can-hui, WANG Dong-dong, LI Tong-shan. Orthogonal Basic Deformation Mode Method for Zero-Energy Mode Suppression of Hybrid Stress Element[J]. Applied Mathematics and Mechanics, 2011, 32(1): 79-92.
# Orthogonal Basic Deformation Mode Method for Zero-Energy Mode Suppression of Hybrid Stress Element
##### doi: 10.3879/j.issn.1000-0887.2011.01.009
• Rev Recd Date: 2010-11-30
• Publish Date: 2011-01-15
• A set of basic deformation modes for hybrid stress finite elements was directly derived from the element displacement field. Subsequently, by employing the so-called united orthogonal conditions, a new orthogonalization method was also proposed. The resulting orthogonal basic deformation modes exhibit simple and clear physical meanings. In addition, they do not involve any material parameters and thus can be efficiently used to examine the element performance and serve as a unified tool to assess different hybrid elements. Thereafter, a convenient approach for identification of spurious zero-energy modes was presented using the positive definiteness property of the flexibility matrix. Moreover, based upon the orthogonality relationship between the given initial stress modes and the orthogonal basic deformation modes, an alternative method of assumed stress modes to formulate a hybrid element free of spurious modes was discussed. It was also found that the orthogonality of the basic deformation modes is the sufficient and necessary condition for suppression of spurious zero-energy modes. Numerical examples of a 2D 4-node quadrilateral element and a 3D 8-node hexahedral element were illustrated in detail to demonstrate the efficacy of the proposed orthogonal basic deformation mode method.
沈阳化工大学材料科学与工程学院 沈阳 110142 | 1,759 | 6,103 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2024-26 | latest | en | 0.834126 |
https://math.stackexchange.com/questions/3600863/does-this-sum-involving-the-central-binomial-coefficient-have-a-closed-form-expr?noredirect=1 | 1,601,357,389,000,000,000 | text/html | crawl-data/CC-MAIN-2020-40/segments/1600401624636.80/warc/CC-MAIN-20200929025239-20200929055239-00078.warc.gz | 462,281,415 | 27,686 | # Does this sum involving the central binomial coefficient have a closed form expression? [duplicate]
Since Lehmer we know that $$\sum_{n\geq1} \binom{2n}{n}^{-1} = \frac{2\pi\sqrt3}{27} + \frac13,$$ which is due to the identity $$\sum_{n\geq1} x^n \binom{2n}{n}^{-1} = \int_0^1 \frac{x(1-t)}{(1-xt(1-t))^2} dt.$$ Substituting different values of $$x$$ above gives a variety of remarkable formulae.
However, I have not found similar expressions for any $$\sum_{n\geq1} \binom{2n}{n}^{-k}$$ with $$k>1$$. In particular, I am interested in $$\sum_{n\geq1} \binom{2n}{n}^{-2}$$. Unfortunately I cannot get a better ''closed form'' expression for this sum than the hypergeometric series $$\frac14{}_3F_2(1,2,2;\frac32,\frac32;\frac{1}{16})$$.
Does a closed form expression exist for $$\sum_{n\geq1} \binom{2n}{n}^{-2}$$?
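A quick numerical sanity check of the quantities involved (not an answer, just Python verification of the partial sums):

```python
from math import comb, pi, sqrt

s1 = sum(1 / comb(2 * n, n) for n in range(1, 60))
s2 = sum(1 / comb(2 * n, n) ** 2 for n in range(1, 60))

print(s1, 1 / 3 + 2 * pi * sqrt(3) / 27)   # both ≈ 0.73640
print(s2)                                   # ≈ 0.28050, matching (1/4)·3F2(1,2,2;3/2,3/2;1/16)
```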
• Seems related math.stackexchange.com/questions/2833496/… – Sil Mar 29 at 22:03
• Function.wolfram.com gives no closed form to that 3F2. – User Mar 30 at 4:27
• @Sil Yes, it does. Thank you. – Klangen Mar 30 at 9:22
• @Sil: wow, I did not remember posting the same qeustion 2 years ago. Strange! – Klangen Mar 30 at 9:25 | 408 | 1,144 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.796875 | 3 | CC-MAIN-2020-40 | latest | en | 0.768671 |
http://lbartman.com/worksheet/printable-division-worksheets-3rd-grade.php | 1,553,656,976,000,000,000 | text/html | crawl-data/CC-MAIN-2019-13/segments/1552912207618.95/warc/CC-MAIN-20190327020750-20190327042750-00172.warc.gz | 122,746,389 | 12,172 | ## lbartman.com - the pro math teacher
# Printable Division Worksheets 3rd Grade
Public on 05 Oct, 2016 by Cyun Lee
### our favorite division worksheets for 3rd grade education
Name : __________________
Seat Num. : __________________
Date : __________________
589 : 3 = ...
489 : 5 = ...
184 : 9 = ...
701 : 8 = ...
700 : 5 = ...
170 : 9 = ...
622 : 7 = ...
541 : 8 = ...
160 : 1 = ...
963 : 1 = ...
832 : 7 = ...
841 : 3 = ...
251 : 5 = ...
809 : 1 = ...
110 : 9 = ...
686 : 7 = ...
793 : 9 = ...
886 : 9 = ...
179 : 8 = ...
719 : 7 = ...
748 : 4 = ...
456 : 8 = ...
515 : 9 = ...
293 : 4 = ...
364 : 6 = ...
883 : 8 = ...
620 : 6 = ...
491 : 7 = ...
543 : 6 = ...
193 : 5 = ...
127 : 7 = ...
530 : 8 = ...
742 : 7 = ...
463 : 8 = ...
484 : 5 = ...
528 : 6 = ...
250 : 4 = ...
986 : 7 = ...
150 : 6 = ...
637 : 1 = ...
735 : 1 = ...
755 : 3 = ...
191 : 3 = ...
408 : 6 = ...
363 : 9 = ...
126 : 8 = ...
490 : 3 = ...
632 : 9 = ...
597 : 3 = ...
629 : 9 = ...
701 : 4 = ...
498 : 6 = ...
549 : 6 = ...
141 : 2 = ...
363 : 6 = ...
718 : 7 = ...
242 : 7 = ...
273 : 7 = ...
553 : 9 = ...
423 : 5 = ...
602 : 6 = ...
851 : 6 = ...
511 : 5 = ...
505 : 6 = ...
266 : 5 = ...
863 : 6 = ...
248 : 2 = ...
629 : 2 = ...
273 : 2 = ...
353 : 4 = ...
498 : 1 = ...
760 : 9 = ...
470 : 5 = ...
234 : 6 = ...
753 : 4 = ...
677 : 5 = ...
338 : 5 = ...
698 : 8 = ...
132 : 8 = ...
620 : 5 = ...
960 : 5 = ...
402 : 4 = ...
557 : 8 = ...
657 : 1 = ...
679 : 9 = ...
119 : 4 = ...
897 : 4 = ...
901 : 9 = ...
271 : 8 = ...
464 : 4 = ...
353 : 3 = ...
531 : 1 = ...
418 : 5 = ...
913 : 3 = ...
175 : 6 = ...
124 : 2 = ...
315 : 2 = ...
469 : 9 = ...
452 : 2 = ...
436 : 6 = ...
345 : 4 = ...
181 : 4 = ...
224 : 4 = ...
611 : 3 = ...
475 : 5 = ...
350 : 6 = ...
838 : 8 = ...
655 : 8 = ...
279 : 9 = ...
789 : 3 = ...
225 : 7 = ...
246 : 9 = ...
344 : 9 = ...
443 : 6 = ...
530 : 2 = ...
317 : 7 = ...
596 : 4 = ...
477 : 1 = ...
133 : 4 = ...
304 : 5 = ...
860 : 4 = ...
544 : 6 = ...
833 : 2 = ...
114 : 4 = ...
990 : 1 = ...
327 : 2 = ...
480 : 9 = ...
171 : 1 = ...
388 : 8 = ...
354 : 5 = ...
191 : 9 = ...
786 : 9 = ...
177 : 1 = ...
286 : 1 = ...
440 : 1 = ...
443 : 9 = ...
216 : 2 = ...
803 : 6 = ...
450 : 6 = ...
135 : 4 = ...
347 : 1 = ...
732 : 5 = ...
765 : 7 = ...
647 : 8 = ...
909 : 3 = ...
803 : 1 = ...
198 : 4 = ...
620 : 7 = ...
881 : 6 = ...
721 : 1 = ...
341 : 3 = ...
726 : 9 = ...
315 : 8 = ...
872 : 6 = ...
746 : 4 = ...
362 : 9 = ...
553 : 8 = ...
717 : 6 = ...
569 : 2 = ...
729 : 7 = ...
928 : 1 = ...
624 : 7 = ...
761 : 6 = ...
103 : 4 = ...
373 : 6 = ...
542 : 6 = ...
119 : 6 = ...
714 : 6 = ...
531 : 4 = ...
620 : 5 = ...
462 : 9 = ...
965 : 8 = ...
562 : 4 = ...
679 : 6 = ...
891 : 3 = ...
690 : 9 = ...
635 : 4 = ...
106 : 2 = ...
794 : 2 = ...
818 : 7 = ...
866 : 7 = ...
248 : 8 = ...
157 : 2 = ...
142 : 1 = ...
485 : 8 = ...
894 : 7 = ...
556 : 1 = ...
771 : 5 = ...
182 : 6 = ...
284 : 5 = ...
579 : 3 = ...
126 : 8 = ...
265 : 3 = ...
530 : 8 = ...
646 : 7 = ...
248 : 6 = ...
563 : 4 = ...
586 : 5 = ...
321 : 1 = ...
283 : 4 = ...
make math worksheets | 1,355 | 3,631 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.703125 | 3 | CC-MAIN-2019-13 | longest | en | 0.175416 |
https://datatofish.com/category/excel-2016/ | 1,571,173,161,000,000,000 | text/html | crawl-data/CC-MAIN-2019-43/segments/1570986660323.32/warc/CC-MAIN-20191015205352-20191015232852-00228.warc.gz | 455,740,430 | 37,423 | ## How to Count Duplicates in Excel using COUNTIF
You can use the COUNTIF function to count duplicates in Excel: =COUNTIF(range, criteria) In this short post, I’ll review a simple example with the steps to count duplicates for a given dataset. Steps to Count Duplicates in Excel using COUNTIF Step 1: Prepare the data that contains the duplicates To start, let’s say that you have the following …
## Guide to an IF Function in Excel 2016
In this guide, I’m going to show you how to apply an IF function in Excel 2016. Note that similar principles apply to previous versions of Excel. In general, IF functions allow you to perform logical tests in Excel. So what type of logical tests are we going to see in this guide? We’ll review the …
## Excel String Functions: LEFT, RIGHT, MID, LEN and FIND
Need to retrieve specific characters from a string in Excel? If so, in this guide, I’ll show you how to use the Excel string functions to obtain your desired characters within a string. Specifically, I will use examples to illustrate how to apply the following Excel string functions: Excel String Functions Used Description of Operation LEFT Retrieve …
## How to use VLOOKUP in Excel 2016 (example included)
VLOOKUP is a powerful function in Excel. In this tutorial, I’ll show you how to use VLOOKUP in Excel 2016. Specifically, I’ll review an example with the steps needed to apply a VLOOKUP. But before we begin, let’s first review the elements of the VLOOKUP function. Elements of the VLOOKUP in Excel 2016 The VLOOKUP …
## How to Create a Column Chart in Excel 2016
Need to create a column chart in Excel? In this post, I’m going to show you the steps needed in order to create a column chart in Excel 2016. Excel 2016 offers additional features that you can use in order to create a fancy column chart. To explore those features, we will review a simple …
## How to Create a Drop-Down List in Excel
Need to create a drop-down list in Excel? If so, in this post I’ll show you the steps to create a drop-down list in Excel 2016. The example To start, let’s review an example where you have 5 simple tasks that you want to track: (1) Place a purchase order to buy a computer (2) Pay …
## How to Create a Pie Chart in Excel (with example)
Need to create a pie chart in Excel? If so, I’ll show you the steps to create a pie chart using a simple example. By the end of this short guide, you’ll be able to create the following chart: Steps to Create a Pie Chart in Excel Step 1: Gather the data for the pie …
## How to Create a Pivot Table in Excel
Pivot table is a powerful tool that can help you summarize and organize your data in an efficient manner. In this guide, I’ll review a simple example with the steps needed to create a pivot table in Excel. Example of a Pivot Table in Excel To start, let’s say that you have a small company … | 666 | 2,850 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.90625 | 3 | CC-MAIN-2019-43 | longest | en | 0.761311 |
https://www.finance-tutoring.fr/blog/ | 1,723,114,487,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640726723.42/warc/CC-MAIN-20240808093647-20240808123647-00682.warc.gz | 592,394,851 | 15,026 | Collateralized Debt Obligations (CDOs) bundle debts into tranches with varying risk levels and returns. Correlation between these tranches is crucial as it affects their performance. Compound correlation assesses tranches independently, which can lead to incomplete risk profiles and mispricing. In contrast, base correlation, centered on the equity tranche which absorbs losses first, provides a more holistic view by considering inter-tranche dependencies, ensuring more accurate risk assessment.
Cholesky decomposition plays a critical role in pricing Collateralized Debt Obligations (CDOs) by transforming independent variables into correlated ones based on a given correlation matrix. This method helps simulate scenarios of correlated defaults, essential for assessing risks and determining expected losses in CDO tranches.
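A minimal numerical sketch of that idea, for illustration only (the 3-name correlation matrix and default probabilities below are invented, not taken from the article): Cholesky-factor the correlation matrix, use the factor to correlate independent normals, and threshold them to obtain correlated default indicators in the Gaussian-copula spirit.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical 3-name correlation matrix and 1-year default probabilities.
corr = np.array([[1.0, 0.3, 0.3],
                 [0.3, 1.0, 0.3],
                 [0.3, 0.3, 1.0]])
pd_marginal = np.array([0.02, 0.05, 0.10])

L = np.linalg.cholesky(corr)            # corr = L @ L.T
z_indep = rng.standard_normal((3, 100_000))
z_corr = L @ z_indep                    # correlated standard normals

# Default when the latent variable falls below the quantile implied by each marginal PD.
thresholds = norm.ppf(pd_marginal)[:, None]
defaults = z_corr < thresholds

print("empirical default rates:", defaults.mean(axis=1))        # ~ pd_marginal
print("joint default rate (all three):", defaults.all(axis=0).mean())
```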
It is essential to understand the risk of simultaneous default by several entities, particularly when it comes to credit derivatives such as basket credit default swaps. The joint probability of default and its correlation are key to understanding this risk. The correlation between two random variables X and Y is given by the formula : correlation(X, Y) = covariance(X, Y) / (σ_X * σ_Y) Where: - covariance(X, Y) is the covariance between X and Y. - sigma_X and sigma_Y are the standard...
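To make the link between joint default probability and correlation concrete, here is a small illustrative calculation; the 10%/15% marginals and 3% joint probability are invented numbers, not from the post. For default indicators, covariance(X, Y) = P(both default) − P(X defaults)·P(Y defaults) and each variance is p(1 − p).

```python
import math

p1, p2 = 0.10, 0.15      # hypothetical marginal default probabilities
p12 = 0.03               # hypothetical joint default probability

cov = p12 - p1 * p2                       # covariance of the two default indicators
sigma1 = math.sqrt(p1 * (1 - p1))
sigma2 = math.sqrt(p2 * (1 - p2))
default_corr = cov / (sigma1 * sigma2)    # correlation(X, Y)

print(round(default_corr, 4))             # ~0.14 for these made-up inputs
```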
If you consider a Wiener process Wₜ and multiply it by its integral ∫ from 0 to t of Wₛ ds, you obtain a product of two stochastic processes: Wₜ ⋅ ∫ from 0 to t of Wₛ ds. The product Wₜ ⋅ ∫ from 0 to t of Wₛ ds is a nonlinear function of the Wiener process. In stochastic calculus, handling nonlinear functions of stochastic processes generally requires tools such as Itô's lemma, which allows the differentiation and integration of...
The Black-Scholes model computes the theoretical value of European-style options, assuming that stock prices follow a log-normal distribution. The model is known for its constant volatility, the absence of dividend payments, and its innovative use of stochastic calculus, which has significantly influenced theoretical finance and trading practice. The formula for a European call option is: C = S₀ * N(d₁) - X * e^(-rT) * N(d₂) Where: -...
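A compact Python version of that call-price formula, added here for illustration (the inputs in the example call are invented; N(·) is the standard normal CDF, written via erf):

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S0, X, r, sigma, T):
    """Black-Scholes price of a European call: C = S0*N(d1) - X*exp(-rT)*N(d2)."""
    d1 = (log(S0 / X) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S0 * norm_cdf(d1) - X * exp(-r * T) * norm_cdf(d2)

# Hypothetical inputs: spot 100, strike 100, 5% rate, 20% volatility, 1 year.
print(round(bs_call(100.0, 100.0, 0.05, 0.20, 1.0), 4))   # ~10.45
```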
Sub-additivity is a principle in risk management which says that combining two or more risky assets should not produce a total risk greater than the sum of the individual risks. This concept rests on the idea that diversification generally reduces risk. The sub-additivity property can be written as: ρ(A + B) ≤ ρ(A) + ρ(B) where "ρ" represents the risk measure, and A and B represent different assets or portfolios....
10 June 2024
Sklar's Theorem, introduced in 1959, revolutionized multivariate analysis by allowing individual distributions and their interdependencies to be modeled separately, thereby redefining probabilistic modeling and risk management. Consider a set of random variables X1, X2, ..., XN. Each variable has its own behavior, modeled by a distribution function, written F_Xi(x) for the i-th variable. These functions, known as...
The Black-Scholes formula for a call option is given by: C = S * N(d1) - K * e^(-rt) * N(d2). In this formula, C represents the price of the call option, S is the current stock price, and K is the option's strike price. The terms N(d1) and N(d2) come from the cumulative distribution function (CDF) of the standard normal distribution, where the CDF gives the probability that a variable is less than or equal to a particular value, summarizing the accumulation...
Many financial models, in particular those dealing with derivatives pricing or risk management, are based on continuous-time processes such as Brownian motion. Discretization helps convert these continuous models into a form that can be computed numerically. To simulate market dynamics for tasks such as option pricing, portfolio optimization or risk assessment, continuous processes are...
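As an illustration of what such a discretization looks like in practice, here is a small Euler-Maruyama sketch of geometric Brownian motion; the drift, volatility and path counts are invented values, not from the post.

```python
import numpy as np

rng = np.random.default_rng(42)

S0, mu, sigma = 100.0, 0.07, 0.20      # hypothetical spot, drift, volatility
T, n_steps, n_paths = 1.0, 252, 10_000
dt = T / n_steps

# Euler-Maruyama step: S_{t+dt} = S_t + mu*S_t*dt + sigma*S_t*sqrt(dt)*Z
S = np.full(n_paths, S0)
for _ in range(n_steps):
    z = rng.standard_normal(n_paths)
    S = S + mu * S * dt + sigma * S * np.sqrt(dt) * z

print("mean terminal price:", S.mean())   # roughly S0*exp(mu*T) ≈ 107
```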
When exploring the probability of events such as bond defaults, it is crucial to understand that knowing the individual probabilities, or marginal distributions, of each bond default does not necessarily tell us how likely it is that several bonds default at the same time. This concept is essential because, even if two sets of bonds have identical marginal probabilities, their joint probabilities...
© 2024 FINANCE TUTORING, Tous Droits Réservés | 1,281 | 4,945 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.5625 | 3 | CC-MAIN-2024-33 | latest | en | 0.511249 |
https://www.airmilescalculator.com/distance/ryk-to-dea/ | 1,653,197,607,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662543797.61/warc/CC-MAIN-20220522032543-20220522062543-00125.warc.gz | 713,276,612 | 12,856 | # Distance between Rahim Yar Khan (RYK) and Dera Ghazi Khan (DEA)
Flight distance from Rahim Yar Khan to Dera Ghazi Khan (Shaikh Zayed International Airport – Dera Ghazi Khan Airport) is 109 miles / 176 kilometers / 95 nautical miles. Estimated flight time is 42 minutes.
Driving distance from Rahim Yar Khan (RYK) to Dera Ghazi Khan (DEA) is 130 miles / 210 kilometers and travel time by car is about 3 hours 25 minutes.
109 miles / 176 kilometers / 95 nautical miles
## How far is Dera Ghazi Khan from Rahim Yar Khan?
There are several ways to calculate the distance from Rahim Yar Khan to Dera Ghazi Khan. Here are two common methods:
Vincenty's formula (applied above)
• 109.331 miles
• 175.952 kilometers
• 95.006 nautical miles
Vincenty's formula calculates the distance between latitude/longitude points on the earth’s surface, using an ellipsoidal model of the earth.
Haversine formula
• 109.676 miles
• 176.506 kilometers
• 95.306 nautical miles
The haversine formula calculates the distance between latitude/longitude points assuming a spherical earth (great-circle distance – the shortest distance between two points).
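For readers who want to reproduce the great-circle figure, here is a small haversine implementation (an illustrative sketch, not from the site; the coordinates are converted to decimal degrees from the airport data listed further down the page):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance in km on a spherical earth of radius R."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlmb = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlmb / 2) ** 2
    return 2 * R * asin(sqrt(a))

# RYK 28°23'2"N 70°16'46"E and DEA 29°57'39"N 70°29'9"E, in decimal degrees.
d = haversine_km(28.3839, 70.2794, 29.9608, 70.4858)
print(round(d, 1), "km")   # ≈ 176.5 km ≈ 109.7 miles, matching the figure above
```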
## How long does it take to fly from Rahim Yar Khan to Dera Ghazi Khan?
Estimated flight time from Shaikh Zayed International Airport to Dera Ghazi Khan Airport is 42 minutes.
## What is the time difference between Rahim Yar Khan and Dera Ghazi Khan?
There is no time difference between Rahim Yar Khan and Dera Ghazi Khan.
## Flight carbon footprint between Shaikh Zayed International Airport (RYK) and Dera Ghazi Khan Airport (DEA)
On average, flying from Rahim Yar Khan to Dera Ghazi Khan generates about 41 kg of CO2 per passenger; 41 kilograms equals 91 pounds (lbs). The figures are estimates and include only the CO2 generated by burning jet fuel.
## Map of flight path and driving directions from Rahim Yar Khan to Dera Ghazi Khan
Shortest flight path between Shaikh Zayed International Airport (RYK) and Dera Ghazi Khan Airport (DEA).
## Airport information
Origin Shaikh Zayed International Airport
City: Rahim Yar Khan
Country: Pakistan
IATA Code: RYK
ICAO Code: OPRK
Coordinates: 28°23′2″N, 70°16′46″E
Destination Dera Ghazi Khan Airport
City: Dera Ghazi Khan
Country: Pakistan
IATA Code: DEA
ICAO Code: OPDG
Coordinates: 29°57′39″N, 70°29′9″E | 594 | 2,293 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.640625 | 3 | CC-MAIN-2022-21 | latest | en | 0.876063 |
https://rapidrefillbelair.com/nc3v7w0v/calculate-the-mass-percent-composition-of-carbon-in-c3h6-8fd975 | 1,653,676,847,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662675072.99/warc/CC-MAIN-20220527174336-20220527204336-00059.warc.gz | 542,995,759 | 8,268 | # calculate the mass percent composition of carbon in c3h6
Several percent-composition questions are gathered on this page. The central one: calculate the mass percent composition of carbon in each of the following carbon-containing compounds: (a) C2H2, (b) C3H6, (c) C2H6, (d) C2H6O. Mass percent composition describes the relative quantities of elements in a chemical compound; it is the mass of an element in one mole of the compound divided by the molar mass of the compound, multiplied by 100%, and is abbreviated w/w%. Atomic weights are taken from the NIST article.

Worked example for C3H8: molar mass = (3 x 12.011) + (8 x 1.008) = 36.033 + 8.064 = 44.097 g/mol, so % C = 36.033 x 100 / 44.097 = 81.7 and % H = 18.3. Similarly, for C3H4: % C = 36.033 x 100 / 40.0641 = 89.9.

A related set (a. CH4, b. C2H6, c. C2H2, d. C2H5Cl) is answered by Ernest Z. (Oct 15, 2016): the percentages of carbon by mass are roughly a. 74.9, b. 79.89, c. 92.26, d. 37.23.

Other snippets from the same page: C3H8O (isopropyl alcohol) has a molar mass of 60.09502 g/mol (molecular weight calculation: 12.0107*3 + 1.00794*8 + 15.9994); 15 g of C3H6 x (1 mol / 42.08 g) = 0.35 mol C3H6; dichloroethane, a compound often used for dry cleaning, contains carbon, hydrogen, and chlorine, and a sample analysis showing 24.3% carbon, 4.1% hydrogen and a molar mass of 99 g/mol points to the molecular formula C2H4Cl2. Further exercises ask for the percentage composition of glucose (C6H12O6), of a sample containing 4.68 g of Si and 5.32 g of O, and for the mass of acrylonitrile that can be produced (assuming 100% yield) when 15.0 grams of C3H6, 10.0 grams of oxygen, and 5.00 grams of NH3 are reacted, together with the mass of the excess reactants remaining. (Atomic masses used in the simpler exercises: C = 12, H = 1, O = 16.)
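A short script for the main exercise above, using standard atomic masses; this is an illustrative addition, not part of the source page.

```python
# Mass percent of carbon: (mass of C in one mole / molar mass of compound) * 100
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

compounds = {
    "C2H2":  {"C": 2, "H": 2},
    "C3H6":  {"C": 3, "H": 6},
    "C2H6":  {"C": 2, "H": 6},
    "C2H6O": {"C": 2, "H": 6, "O": 1},
}

for name, atoms in compounds.items():
    molar_mass = sum(ATOMIC_MASS[el] * n for el, n in atoms.items())
    pct_carbon = ATOMIC_MASS["C"] * atoms["C"] / molar_mass * 100
    print(f"{name}: {pct_carbon:.1f}% C")
# Roughly 92.3%, 85.6%, 79.9% and 52.1% carbon respectively.
```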
https://physics.stackexchange.com/questions/799170/static-friction-vs-kinetic-friction | 1,721,143,701,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763514759.37/warc/CC-MAIN-20240716142214-20240716172214-00040.warc.gz | 412,672,053 | 42,597 | # Static friction vs Kinetic Friction
I might be asking a very elementary question here. How do we identify where static friction is acting or kinetic friction is acting?
For example, Consider this case:
Now, here we are talking about "sliding" a body along the ground with the least possible force. If we are talking about sliding, then that would mean it involves relative motion between the ground and the body, causing kinetic friction. As $\mu_s = \mu_k$, it does not cause much of a problem. However, in the solution it was given as follows:
When the block is about to start sliding, the frictional force acting on the block reaches its limiting value $f = \mu_s N$.
Why do we not directly take the case of sliding friction here? I mean, if $\mu_s$ had not been given as equal to $\mu_k$, why would I go with the limiting case instead of directly considering the case where the object is sliding?
• The reason they set static and kinetic friction the same in the problem, is precisely that they don't want to have to state the problem carefully, in terms of whether and when the block is moving or not (because it doesn't matter.) If the two coefficients were different, then presumably they would have stated the problem more carefully and the question wouldn't come up. Commented Jan 28 at 18:00
• Static friction acts whenever two surfaces are not moving relative to each other. When they are rubbing past on and other, you have kinetic friction, which is significantly lower. For instance, a sliding box experiences kinetic friction, and a ball rolling without slipping experiences static friction. Commented Jan 28 at 21:38
• It is odd to set the coefficient of static friction equal to that of kinetic friction. I believe they are trying to make it so you don't have to consider whether the block was initially moving or not. The solution addresses the case where the block started at rest, and to get it moving you have to overcome the static friction. I believe they use the word "limiting" because the friction experienced by the block decreases in proportion to the normal force (N). Since the pulling force has an upward component, it reduces the normal force: mg - P*sin(45) = N, where g = 9.8 m/s^2 Commented Jan 28 at 21:52
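A small numerical sketch of the calculation described in the comments above (the mass and friction coefficient are made-up values, since the thread does not give them): the pull P at angle θ both drives the block and reduces the normal force, so sliding starts when P·cosθ = μ(mg − P·sinθ).

```python
from math import sin, cos, radians

m, mu, g = 10.0, 0.5, 9.8          # hypothetical mass (kg) and friction coefficient
theta = radians(45)

# Threshold pull: P*cos(theta) = mu * (m*g - P*sin(theta))
P_min = mu * m * g / (cos(theta) + mu * sin(theta))
print(f"minimum pull ≈ {P_min:.1f} N")        # ≈ 46.2 N for these numbers

# Acceleration once it slides (with mu_k = mu_s), for any P above P_min:
P = 1.10 * P_min
N = m * g - P * sin(theta)
a = (P * cos(theta) - mu * N) / m
print(f"acceleration at P = 1.1*P_min ≈ {a:.2f} m/s^2")
```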
In general, static friction is greater than kinetic friction. That can be easily shown by placing an object on a flat surface (ceramic surface), and gradually increasing the angle of incline of the surface. A moment comes when the object starts moving downwards.
If kinetic friction were equal to the static one, we would see velocities as small as we wished, by inclining the surface very slowly. But experience tells us that when the static friction is overcome, the acceleration $$a = g\ \sin(\theta) - f_a/m$$ jumps from zero to a finite value well above zero.
So, $$f_a$$ just after the object starts to move is smaller than it was before.
• Nice explanation, I would also include this chart from wiki. Commented Jan 27 at 23:10
• So how do we identify when static friction is acting and when kinetic friction is acting? The textbooks describe this in terms of "relative sliding" between the two surfaces. Here in this question as well, there must be relative sliding (as we are sliding an object along the ground), so why do we consider static friction to be acting here? Why are we considering the first case only? The rest of the answer was perfect, thanks. Commented Jan 28 at 5:02
In general, the type of friction to calculate with can be determined by assessing which of the following situations the physics problem is about:
• The objects with friction are not sliding, and will continue not sliding.
• The objects with friction are initially not sliding, but transition into sliding.
• The objects with friction are already sliding.
The first and last of those situations are obvious - sliding uses kinetic friction, and not sliding uses static friction. The middle one, with a transition into sliding, is trickier to understand, but once you do understand it is equally simple.
When a physics problem is about the transition from not sliding to sliding, it uses static friction. This is because there is only one question to be asked about such a situation: What force is needed to cause the transition to occur? Or, practically equivalently, is a given force sufficient to cause the transition or not? All other questions about friction (at the level of analysis depth that static and kinetic friction are relevant to) fall into one or the other of the other two situations.
For any force too small to cause a transition to sliding, the objects will continue not sliding, so obviously static friction applies. Therefore, any force less than static friction will not cause a transition to sliding. Accordingly, calculating the necessary force to cause the transition uses static friction.
For your example problem, it is actually two problems combined.
The first one, "Find the least pulling force which [...] will slide", is a textbook example of a transition problem. It is telling you to find how much force is necessary to cause the transition from not sliding to sliding. It therefore uses static friction.
The other one, "find the resulting acceleration", is about sliding. The setup of the problem includes a transition, but this part of it is about what happens after the transition. It instructs you to find acceleration, and non-zero acceleration of a non rolling object relative to a surface it has friction with can only happen with sliding. It therefore uses kinetic friction.
• Thank you!! It answers my question! Commented Jan 29 at 5:41
Your question is more about language and terminology, as you seem to be new to phrases like "just after", "just before", etc.
In mathematics, it's senseless to say something like " a real number just greater than 4" because such a real number does not exist. But in physics , authors usually don't care about these formal things. The concept of limits is not taken with much caution here.
• Um yeah I am having problem with these things only....... Commented Jan 28 at 15:08 | 1,293 | 6,154 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 7, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.84375 | 4 | CC-MAIN-2024-30 | latest | en | 0.966698 |
https://classace.io/learn/math/3rdgrade/place-values-to-millions | 1,623,884,296,000,000,000 | text/html | crawl-data/CC-MAIN-2021-25/segments/1623487626122.27/warc/CC-MAIN-20210616220531-20210617010531-00171.warc.gz | 170,244,507 | 29,090 | Place Values up to Millions
## Learn About Place Values up to Millions
Some numbers have one digit, like 1.
Some number have two digits, like 25.
Some numbers have three, four, or even more digits! 🙉
Every digit in a number has a different place value.
In the last lesson you learned about place values up to the Thousands
But what about bigger numbers? 🤔
Do you know what place value comes after Thousand?
### Ten Thousands
Let’s take a look:
A five-digit number has a digit in the Ten Thousands place!
Let's look at this 5-digit number:
45,678
Do you know which place value each digit is in?
That’s it! 💪
Now, let's make sure you get the idea.
Here’s a new number:
95,615
Which digit is in the Ten Thousands place?
9! That’s it!
Which digit is in the Thousands place?
5! You got it!
### Hundred Thousands
A 6-digit number will have a digit in the Hundred Thousands place.
Take a look:
Can you identify the place values for the digits in this number?
438,591
Let’s put this number in our table!
Great job! Now, you can see the place value of each number.
It has 4 Hundred Thousands!
Let’s look at another number:
816,590
Which digit is in the Hundred Thousands place?
8! That’s it! ✅
What about the Ten Thousands place?
1! You’re right! ✅
Which number is in the Tens place?
9! Very good! ✅
### Millions Place
Now, let’s take a look at a 7-digit number!
The 7th digit is in the Millions place! Take a look at the place value chart here:
Try identifying the place values for this number:
4,580,324
Let’s put it on our place value chart:
Great job!
👍 Tip: always write a comma "," after the millions place.
Fun Fact: People who have over a million dollars are called millionaires. You can be a millionaire too! Just keep doing your best every day and believe in yourself.
Now, let's look at another 7-digit number:
7,561,390
What number is in the Ten Thousands place?
6! That’s it! ✅
What number is in the Millions place?
7! You’re on a roll! ✅
What number is in the Hundreds place?
3! Great work! ✅
Now, you know how to identify numbers up to the Millions place!
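For anyone curious how a computer would label each digit, here is a tiny illustrative script; it is not part of the lesson, just a sketch of the same idea.

```python
PLACES = ["Ones", "Tens", "Hundreds", "Thousands",
          "Ten Thousands", "Hundred Thousands", "Millions"]

number = 4_580_324
for place, digit in zip(PLACES, reversed(str(number))):
    print(f"{digit} is in the {place} place")
# 4 Ones, 2 Tens, 3 Hundreds, 0 Thousands, 8 Ten Thousands,
# 5 Hundred Thousands, 4 Millions, matching the chart above.
```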
### Watch and Learn
You can move on to the practice problems now! 💪
http://nrich.maths.org/public/leg.php?code=6&cl=3&cldcmpid=682 | 1,506,015,846,000,000,000 | text/html | crawl-data/CC-MAIN-2017-39/segments/1505818687834.17/warc/CC-MAIN-20170921172227-20170921192227-00010.warc.gz | 239,630,154 | 10,107 | Search by Topic
Resources tagged with Place value similar to As Easy as 1,2,3:
There are 56 results
Broad Topics > Numbers and the Number System > Place value
Digit Sum
Stage: 3 Challenge Level:
What is the sum of all the digits in all the integers from one to one million?
Arrange the Digits
Stage: 3 Challenge Level:
Can you arrange the digits 1,2,3,4,5,6,7,8,9 into three 3-digit numbers such that their total is close to 1500?
X Marks the Spot
Stage: 3 Challenge Level:
When the number x 1 x x x is multiplied by 417 this gives the answer 9 x x x 0 5 7. Find the missing digits, each of which is represented by an "x" .
Even Up
Stage: 3 Challenge Level:
Consider all of the five digit numbers which we can form using only the digits 2, 4, 6 and 8. If these numbers are arranged in ascending order, what is the 512th number?
Basically
Stage: 3 Challenge Level:
The number 3723(in base 10) is written as 123 in another base. What is that base?
Six Times Five
Stage: 3 Challenge Level:
How many six digit numbers are there which DO NOT contain a 5?
Just Repeat
Stage: 3 Challenge Level:
Think of any three-digit number. Repeat the digits. The 6-digit number that you end up with is divisible by 91. Is this a coincidence?
Stage: 2 and 3 Challenge Level:
Watch our videos of multiplication methods that you may not have met before. Can you make sense of them?
Permute It
Stage: 3 Challenge Level:
Take the numbers 1, 2, 3, 4 and 5 and imagine them written down in every possible order to give 5 digit numbers. Find the sum of the resulting numbers.
Not a Polite Question
Stage: 3 Challenge Level:
When asked how old she was, the teacher replied: My age in years is not prime but odd and when reversed and added to my age you have a perfect square...
Lesser Digits
Stage: 3 Challenge Level:
How many positive integers less than or equal to 4000 can be written down without using the digits 7, 8 or 9?
Reasoned Rounding
Stage: 1, 2 and 3 Challenge Level:
Four strategy dice games to consolidate pupils' understanding of rounding.
Skeleton
Stage: 3 Challenge Level:
Amazing as it may seem the three fives remaining in the following `skeleton' are sufficient to reconstruct the entire long division sum.
Exploring Simple Mappings
Stage: 3 Challenge Level:
Explore the relationship between simple linear functions and their graphs.
Three Times Seven
Stage: 3 Challenge Level:
A three digit number abc is always divisible by 7 when 2a+3b+c is divisible by 7. Why?
Pupils' Recording or Pupils Recording
Stage: 1, 2 and 3
This article, written for teachers, looks at the different kinds of recordings encountered in Primary Mathematics lessons and the importance of not jumping to conclusions!
Back to the Planet of Vuvv
Stage: 3 Challenge Level:
There are two forms of counting on Vuvv - Zios count in base 3 and Zepts count in base 7. One day four of these creatures, two Zios and two Zepts, sat on the summit of a hill to count the legs of. . . .
Stage: 3 Challenge Level:
Powers of numbers behave in surprising ways. Take a look at some of these and try to explain why they are true.
Phew I'm Factored
Stage: 4 Challenge Level:
Explore the factors of the numbers which are written as 10101 in different number bases. Prove that the numbers 10201, 11011 and 10101 are composite in any base.
Repeaters
Stage: 3 Challenge Level:
Choose any 3 digits and make a 6 digit number by repeating the 3 digits in the same order (e.g. 594594). Explain why whatever digits you choose the number will always be divisible by 7, 11 and 13.
What an Odd Fact(or)
Stage: 3 Challenge Level:
Can you show that 1^99 + 2^99 + 3^99 + 4^99 + 5^99 is divisible by 5?
Eleven
Stage: 3 Challenge Level:
Replace each letter with a digit to make this addition correct.
Football Sum
Stage: 3 Challenge Level:
Find the values of the nine letters in the sum: FOOT + BALL = GAME
Tis Unique
Stage: 3 Challenge Level:
This addition sum uses all ten digits 0, 1, 2...9 exactly once. Find the sum and show that the one you give is the only possibility.
Mini-max
Stage: 3 Challenge Level:
Consider all two digit numbers (10, 11, . . . ,99). In writing down all these numbers, which digits occur least often, and which occur most often ? What about three digit numbers, four digit numbers. . . .
Legs Eleven
Stage: 3 Challenge Level:
Take any four digit number. Move the first digit to the 'back of the queue' and move the rest along. Now add your two numbers. What properties do your answers always have?
Latin Numbers
Stage: 4 Challenge Level:
Can you create a Latin Square from multiples of a six digit number?
Multiplication Magic
Stage: 4 Challenge Level:
Given any 3 digit number you can use the given digits and name another number which is divisible by 37 (e.g. given 628 you say 628371 is divisible by 37 because you know that 6+3 = 2+7 = 8+1 = 9). . . .
Big Powers
Stage: 3 and 4 Challenge Level:
Three people chose this as a favourite problem. It is the sort of problem that needs thinking time - but once the connection is made it gives access to many similar ideas.
Seven Up
Stage: 3 Challenge Level:
The number 27 is special because it is three times the sum of its digits 27 = 3 (2 + 7). Find some two digit numbers that are SEVEN times the sum of their digits (seven-up numbers)?
Cayley
Stage: 3 Challenge Level:
The letters in the following addition sum represent the digits 1 ... 9. If A=3 and D=2, what number is represented by "CAYLEY"?
Balance Power
Stage: 3, 4 and 5 Challenge Level:
Using balancing scales what is the least number of weights needed to weigh all integer masses from 1 to 1000? Placing some of the weights in the same pan as the object how many are needed?
Number Rules - OK
Stage: 4 Challenge Level:
Can you convince me of each of the following: If a square number is multiplied by a square number the product is ALWAYS a square number...
What a Joke
Stage: 4 Challenge Level:
Each letter represents a different positive digit AHHAAH / JOKE = HA What are the values of each of the letters?
Really Mr. Bond
Stage: 4 Challenge Level:
115^2 = (110 x 120) + 25, that is 13225 895^2 = (890 x 900) + 25, that is 801025 Can you explain what is happening and generalise?
Nice or Nasty
Stage: 2 and 3 Challenge Level:
There are nasty versions of this dice game but we'll start with the nice ones...
Enriching Experience
Stage: 4 Challenge Level:
Find the five distinct digits N, R, I, C and H in the following nomogram
Reach 100
Stage: 2 and 3 Challenge Level:
Choose four different digits from 1-9 and put one in each box so that the resulting four two-digit numbers add to a total of 100.
Never Prime
Stage: 4 Challenge Level:
If a two digit number has its digits reversed and the smaller of the two numbers is subtracted from the larger, prove the difference can never be prime.
Cycle It
Stage: 3 Challenge Level:
Carry out cyclic permutations of nine digit numbers containing the digits from 1 to 9 (until you get back to the first number). Prove that whatever number you choose, they will add to the same total.
Stage: 3, 4 and 5
We are used to writing numbers in base ten, using 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9. Eg. 75 means 7 tens and five units. This article explains how numbers can be written in any number base.
Two and Two
Stage: 3 Challenge Level:
How many solutions can you find to this sum? Each of the different letters stands for a different number.
Stage: 2, 3, 4 and 5
This article for the young and old talks about the origins of our number system and the important role zero has to play in it.
Novemberish
Stage: 4 Challenge Level:
a) A four digit number (in base 10) aabb is a perfect square. Discuss ways of systematically finding this number. (b) Prove that 11^{10}-1 is divisible by 100.
Back to Basics
Stage: 4 Challenge Level:
Find b where 3723(base 10) = 123(base b).
2-digit Square
Stage: 4 Challenge Level:
A 2-Digit number is squared. When this 2-digit number is reversed and squared, the difference between the squares is also a square. What is the 2-digit number?
Always a Multiple?
Stage: 3 Challenge Level:
Think of a two digit number, reverse the digits, and add the numbers together. Something special happens...
DOTS Division
Stage: 4 Challenge Level:
Take any pair of two digit numbers x=ab and y=cd where, without loss of generality, ab > cd . Form two 4 digit numbers r=abcd and s=cdab and calculate: {r^2 - s^2} /{x^2 - y^2}.
How Many Miles to Go?
Stage: 3 Challenge Level:
How many more miles must the car travel before the numbers on the milometer and the trip meter contain the same digits in the same order? | 2,243 | 8,734 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.0625 | 4 | CC-MAIN-2017-39 | latest | en | 0.900668 |
http://community.wolfram.com/groups/-/m/t/1229918 | 1,537,819,328,000,000,000 | text/html | crawl-data/CC-MAIN-2018-39/segments/1537267160641.81/warc/CC-MAIN-20180924185233-20180924205633-00500.warc.gz | 52,213,839 | 23,584 | # Improve the accuracy of calculations of a quadratic cosine?
Posted 10 months ago | 1075 Views | 9 Replies | 7 Total Likes
There is a formula: ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]]. I made calculations with increased precision: Block[{$MinPrecision = 100000000, $MaxPrecision = 100000000}, Plot[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]], {x, N[Pi, 100000000]/2 - 0.0000001, N[Pi, 100000000]/2 + 0.0000001}]] Can someone calculate with more accuracy? I do not understand whether there is a gap there or not. Sorry for my English.
Posted 10 months ago
Is this what you're after?: Plot[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]], {x, Pi/2 - 0.0000001, Pi/2 + 0.0000001}, WorkingPrecision -> 16] (Note: N[Pi, 100000000]/2 - 0.0000001 is just a long way to compute Pi/2 - 0.0000001, since Mathematica converts N[Pi, 100000000]/2 to machine precision when -0.0000001 is added to it.)
Posted 10 months ago
I tried Plot[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]], {x, Pi/2 - 0.0000001, Pi/2 + 0.0000001}, WorkingPrecision -> 16] but this does not work for me. I do not understand: is the line vertical and there is no gap, or is there a slope and a gap?
Posted 10 months ago
The default symbolic analysis fails to detect the discontinuity. Compare Plot[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]], {x, Pi/2 - 0.0000001, Pi/2 + 0.0000001}, WorkingPrecision -> 16, Exclusions -> ArcCos[Abs[Sin[Abs[x]]]] == 0] with Plot[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]], {x, Pi/2 - 0.0000001, Pi/2 + 0.0000001}, WorkingPrecision -> 16, Exclusions -> Flatten@Values@ Solve[ArcCos[Abs[Sin[Abs[x]]]] == 0 && Pi/2 - 1*^-7 < x < Pi/2 + 1*^-7]] There should be a gap.
Posted 10 months ago
Aleksey,There is a discontinuity there: In[20]:= Limit[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]], x -> Pi/2, Direction -> "FromAbove"] Out[20]= -1 In[21]:= Limit[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]], x -> Pi/2, Direction -> "FromBelow"] Out[21]= 1 You can evaluate the expression with arbitrary accuracy by specifying the accuracy. Michael is correct that as soon as you add the 0.0000001 to Pi/2 you use machine precision. The syntax is this: ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]] /. x -> Pi/2 + .000000000000000000000000000000001`50 to get -1.0000000000000000000000000000000000000000000000000 (Note: in this case I specified 50 digits of precision. You can go out to whatever you want.)
Posted 10 months ago
I just forgot about the limits. Does this prove that at the point Pi/2 the graph has a vertical line and there is no gap, or not? And how would one prove it?
Posted 10 months ago
Aleksey,I am not sure exactly what you are asking. The vertical line should not really be on the plot. It is a discontinuity (gap). The function approaches -1 from the positive side and 1 from the negative side. The limit is your proof of a discontinuity -- if the limit is different from each side, the function has a gap or discontinuity at that point. Plot connects all points so it will show the discontinuity as a vertical line.If I am not understanding your question and if your native language is Russian, please post it in Russian and my daughter will translate it for me.Regards,Neil
Posted 10 months ago
Aleksey,I was able to understand the Russian post (before it was removed -- sorry - -I forgot about the rules). Many of the issues you raised are philosophical so I am not qualified to give you an opinion. I am also an engineer and not a pure mathematician. That being said, I think that the limit calculation proves that the function is not continuous and does not connect with a vertical line. The value at Pi/2 is undefined because it depends on the direction from which you approach Pi/2. Michael's plot with the Exclusions is the correct way to handle this plot as far as I know. I believe that you should exclude that point and the plot should not be connected with a line. Again, this is my opinion and you can ask a mathematician who would know more about the issues you raise. I hope this helps. Regards,Neil
This is not the answer to your question, only another formula for the quadratic cosine: HoldForm[ArcSin[Cos[x]]/ArcCos[Abs[Sin[Abs[x]]]] == HeavisideTheta[x] + 2 Sum[(-1)^k*HeavisideTheta[\[Pi]/2 - k \[Pi] + x], {k, 1, Infinity}] == -1 + (-1)^Floor[1/2 + x/\[Pi]] + HeavisideTheta[x]] // TraditionalForm $$\frac{\sin ^{-1}(\cos (x))}{\cos ^{-1}(\left| \sin (\left| x\right| )\right| )}=\theta (x)+2 \sum _{k=1}^{\infty } (-1)^k \theta \left(\frac{\pi }{2}-k \pi +x\right)=-1+(-1)^{\left\lfloor \frac{1}{2}+\frac{x}{\pi }\right\rfloor }+\theta (x)$$ Regards, Mariusz
http://www.conversion-website.com/volume/bushel-US-to-peck-UK.html | 1,638,176,342,000,000,000 | text/html | crawl-data/CC-MAIN-2021-49/segments/1637964358702.43/warc/CC-MAIN-20211129074202-20211129104202-00303.warc.gz | 93,359,963 | 4,605 | # Bushels (US) to pecks (UK) (bu to pk)
## Convert bushels (US) to pecks (UK)
Bushels (US) to pecks (UK) converter above calculates how many pecks (UK) are in 'X' bushels (US) (where 'X' is the number of bushels (US) to convert to pecks (UK)). In order to convert a value from bushels (US) to pecks (UK) (from bu to pk) just type the number of bu to be converted to pk and then click on the 'convert' button.
## Bushels (US) to pecks (UK) conversion factor
1 bushel (US) is equal to 3.8757558693295 pecks (UK)
## Bushels (US) to pecks (UK) conversion formula
Volume(pk) = Volume (bu) × 3.8757558693295
Example: Assume there are 308 bushels (US). Shown below are the steps to express them in pecks (UK).
Volume(pk) = 308 ( bu ) × 3.8757558693295 ( pk / bu )
Volume(pk) = 1193.7328077535 pk or
308 bu = 1193.7328077535 pk
308 bushels (US) equals 1193.7328077535 pecks (UK)
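The same conversion as a one-line helper, added as an illustrative sketch using the factor quoted above:

```python
BU_TO_PK = 3.8757558693295   # pecks (UK) per bushel (US), factor quoted above

def bushels_us_to_pecks_uk(bushels):
    """Convert bushels (US) to pecks (UK)."""
    return bushels * BU_TO_PK

print(bushels_us_to_pecks_uk(308))   # ≈ 1193.7328077535 pk, matching the example
```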
## Bushels (US) to pecks (UK) conversion table
| bushels (US) (bu) | pecks (UK) (pk) |
| --- | --- |
| 12 | 46.509070431954 |
| 14 | 54.260582170613 |
| 16 | 62.012093909271 |
| 18 | 69.76360564793 |
| 20 | 77.515117386589 |
| 22 | 85.266629125248 |
| 24 | 93.018140863907 |
| 26 | 100.76965260257 |
| 28 | 108.52116434123 |
| 30 | 116.27267607988 |
| 32 | 124.02418781854 |
| 34 | 131.7756995572 |
| 36 | 139.52721129586 |
| 38 | 147.27872303452 |
| 40 | 155.03023477318 |

| bushels (US) (bu) | pecks (UK) (pk) |
| --- | --- |
| 150 | 581.36338039942 |
| 200 | 775.15117386589 |
| 250 | 968.93896733237 |
| 300 | 1162.7267607988 |
| 350 | 1356.5145542653 |
| 400 | 1550.3023477318 |
| 450 | 1744.0901411983 |
| 500 | 1937.8779346647 |
| 550 | 2131.6657281312 |
| 600 | 2325.4535215977 |
| 650 | 2519.2413150642 |
| 700 | 2713.0291085306 |
| 750 | 2906.8169019971 |
| 800 | 3100.6046954636 |
| 850 | 3294.39248893 |
Versions of the bushels (US) to pecks (UK) conversion table. To create a bushels (US) to pecks (UK) conversion table for different values, click on the "Create a customized volume conversion table" button.
http://math.stackexchange.com/questions/341545/math-question-partial-derivatives-help-please | 1,469,272,393,000,000,000 | text/html | crawl-data/CC-MAIN-2016-30/segments/1469257822172.7/warc/CC-MAIN-20160723071022-00086-ip-10-185-27-174.ec2.internal.warc.gz | 148,277,200 | 15,205 | # Math question partial derivatives help please? [closed]
I have the function $f(x,y)=(x^3)*y - (y^3)*x$.
I have to find $[(df/dx) +(df/dy)]/[(df/dx) *(df/dy)]$.
So, what I don't get is how to find $df/dx$ or $df/dy$.
## closed as off-topic by Jonas, Edward Jiang, Luiz Cordeiro, choco_addicted, RamiroMay 4 at 2:35
This question appears to be off-topic. The users who voted to close gave this specific reason:
• "This question is missing context or other details: Please improve the question by providing additional context, which ideally includes your thoughts on the problem and any attempts you have made to solve it. This information helps others identify where you have difficulties and helps them write answers appropriate to your experience level." – Jonas, Edward Jiang, Luiz Cordeiro, choco_addicted, Ramiro
If this question can be reworded to fit the rules in the help center, please edit the question.
The function $f$ depends on two variables $x$ and $y$ and when you want to derive $f$ relative to $x$ you must treat $y$ as a constant so you have: $$\frac{\partial}{\partial x}f(x,y)=3yx^2-y^3$$ Now I think you can calculate $\frac{\partial}{\partial y}f(x,y)$ by the same method. | 327 | 1,204 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.4375 | 3 | CC-MAIN-2016-30 | latest | en | 0.919965 |
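If it helps to check the algebra, here is a short SymPy verification of both partial derivatives and the requested expression; this is an illustrative addition, not part of the original answer.

```python
import sympy as sp

x, y = sp.symbols("x y")
f = x**3 * y - y**3 * x

fx = sp.diff(f, x)        # 3*x**2*y - y**3
fy = sp.diff(f, y)        # x**3 - 3*x*y**2
expr = sp.simplify((fx + fy) / (fx * fy))   # the ratio asked for in the question

print(fx, fy, expr, sep="\n")
```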
https://www.physicsforums.com/threads/bucklig-deflection-at-euler-load.800053/ | 1,627,330,644,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046152144.92/warc/CC-MAIN-20210726183622-20210726213622-00553.warc.gz | 964,439,333 | 15,757 | # Bucklig: deflection at Euler load
Hello
I was trying to calculate the horizontal deflection of the free end of a vertical clamped beam. The beam would be loaded at the free end with a horizontal force $H$ and a vertical force $P$. My idea was to calculate an initial deflection due to the force $H$, then calculate the additional deflection due to the previous deflection and the force $P$. I'd expect that when the force $P$ is bigger than the Euler buckling load the deflection would diverge, but that's not happening when I try to calculate this.
I tried it on a beam with a length $L$ of 6 m and stiffness $EI = 1476600\ \mathrm{Nm^2}$ ($E = 69\ \mathrm{GPa}$, $I = 2140 \times 10^4\ \mathrm{mm^4}$), with $H = 1\ \mathrm{kN}$. I calculated that the Euler buckling load is $P_{cr} = \pi^2 EI/(2L)^2 = 101.20\ \mathrm{kN}$ and used a much bigger $P = 5000\ \mathrm{kN}$.
For the initial deflection I used $v_0 = (1/3EI)HL^3 = 4.87607 \times 10^{-5}\ \mathrm{m}$. For the next iteration steps I used $v_i = v_0 + (1/3EI)L^2 P \, v_{i-1}$, which eventually converges to $v = 5.08259 \times 10^{-5}\ \mathrm{m}$ instead of diverging.
I know 5000 kN isn't a realistic value and that the beam would probably yield with such a high load but shouldn't this diverge?
Also for loads smaller than the Euler buckling load, is this the right way to calculate the deflection? If not what would be a good way then?
SteamKing
Staff Emeritus
Homework Helper
I'm just taking a quick glance at the situation you are trying to analyze here.
The usual formulas for the Euler buckling load are derived assuming that the only force applied is applied in the axial direction. I think the situation you are describing here is for what is called the buckling of a beam-column, since lateral and axial loads are being applied simultaneously to the tip of the cantilever. The Euler critical load must be modified in this case over that calculated for a simple column with no loads applied in the lateral direction.
There are methods for analyzing such beam-columns, but it will take a little research to confirm what is required.
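As a quick numerical check of the iteration described in the original post (values copied from the thread; an illustrative sketch, not a structural analysis): the recurrence v_i = v_0 + (P·L²/3EI)·v_{i−1} is a geometric series, so it can only converge when P·L²/(3EI) < 1. With P = 5000 kN that ratio is about 40, so the iteration should in fact diverge; a converged value close to v_0 only appears when P is small (of the order of a few kN), which suggests a units slip somewhere in the original calculation.

```python
E_I = 1_476_600.0      # N*m^2, stiffness quoted in the thread
L = 6.0                # m
H = 1_000.0            # N  (1 kN horizontal tip load)
P = 5_000_000.0        # N  (5000 kN axial load, as in the post)

v0 = H * L**3 / (3 * E_I)          # initial tip deflection from H alone
ratio = P * L**2 / (3 * E_I)       # amplification factor per iteration
print(f"v0 = {v0:.4e} m, ratio = {ratio:.2f}")

v = v0
for i in range(1, 8):
    v = v0 + ratio * v             # v_i = v_0 + (P L^2 / 3EI) * v_{i-1}
    print(f"iteration {i}: v = {v:.3e} m")
# With ratio ≈ 40 the deflection blows up; the series converges to v0/(1 - ratio)
# only when ratio < 1, i.e. P < 3EI/L^2 ≈ 123 kN for these numbers.
```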
https://mathoverflow.net/questions/258586/when-is-a-ring-or-algebra-a-ring-algebra-of-functions | 1,580,136,044,000,000,000 | text/html | crawl-data/CC-MAIN-2020-05/segments/1579251700988.64/warc/CC-MAIN-20200127143516-20200127173516-00516.warc.gz | 544,972,540 | 26,661 | # When is a ring or algebra a ring/algebra of functions?
Note: For the record, exterior algebras and derivations are irrelevant to my question. However, I have a hard time assessing what I want to ask and I find it is the easiest to do so using a direct example. Hence exterior algebras.
When studying derivations of the exterior algebra over a smooth manifold $M$, $\Omega(M)$, one often encounters statements such as
• A derivation is local, eg. for $\omega\in\Omega(M)$, $(D\omega)|_U=D(\omega|_U)$, where $D$ is a derivation of the exterior algebra, and $U$ is an open set of $M$.
• A derivation is algebraic if it vanishes on the grade 0 subspace. An algebraic derivation is ultralocal, in the sense $D\omega|_p$ depends only on $\omega_p$ if $D$ is an algebraic derivation.
Many of these statements reference or use the fact that $\Omega(M)$ is an associative, anticommutative $\mathbb{R}$-, and $C^\infty(M)$-algebra, that is also $\mathbb{N}$-graded, however for example the two I singled out rely on the fact that an element of $\Omega(M)$ is a map $M\rightarrow\wedge T^*M$ (that is also a section).
If I wanted to abstract studying derivations of the exterior algebra into studying derivations of an $R$-algebra ($R$ is a commutative ring) that is associative, anticommutative and $\mathbb{N}$-graded, I could do it, but then statements such as "$D$ is local" or "$D$ is ultralocal" would make no sense, since if $\omega$ is an element of this algebra, things such as $\omega|_U$ or $\omega_p$ don't make sense.
I realize that the exterior algebra has two additional operations in this case, a "restriction", which maps a pair $(U,\omega)$ ($U$ is open and $\omega$ is a differential form on $M$, but it can also be a differential form defined on any region that contains $U$) to $\omega|_U\in\Omega(U)$ and an "evaluation", which maps the pair $(p,\omega)$ (where $p\in M$ and $\omega$ is once again either a form on $M$ or a form defined on a region that contains $U$) to $\omega_p\in\wedge T_p^*M$, but it is even hard for me to come up with a rigorous description of these maps. I'll attempt:
Evaluation is the following map: $$\text{Eval}:M\times\Omega(M)\rightarrow\wedge T^ *M$$ such that $\text{Pr}_1(M\times\Omega(M))=\pi_{\wedge T^*M}\circ\text{Eval}$, where $\text{Pr}_1(M\times\Omega(M))$ is the projection to the first factor.
Let the "space of local differential forms" be $\Omega_{\text{loc}}(M)=\bigsqcup_{U\in\tau_M}\Omega(U)$, so an element of $\Omega_{\text{loc}}(M)$ is a pair $(U,\omega)$ where $U\in\tau_M$ is open and $\omega$ is defined on $U$ only. The restriction is then a map $$\text{Res}:\tau_M\times\Omega(M)\rightarrow\omega_{\text{loc}}(M)$$ such that $\text{Pr}_1(\tau_M\times\Omega(M))=\text{Dom}\circ\text{Res}$ and $\text{Eval}(p,\cdot)\circ\text{Res}=\text{Eval}(p,.)\circ\text{Pr}_2(\tau_M\times\Omega(M))$ where $\text{Eval}$ has been extended to a partial function on $\Omega_{\text{loc}}(M)$ and $p$ is an arbitrary point of $M$ for which "Eval" as a partial function is defined.
I have no idea whether my "Res" and "Eval" functions capture the "locality" of the exterior algebra well, and although no references are made to elements of the exterior algebra being functions, I have referenced $M$, even though for a general $R$-algebra, I'd have no access to $M$.
So basically, my question is, how can I tell if an $R$-algebra/ring I am studying in the abstract can be realized as some kind of algebra of maps? Is there a name for such algebras/rings?
You have an algebra $A$ over a field $k$ and you want to know whether it is isomorphic to an algebra of functions on some set $M$, right? But functions into what? For instance, $A$ is trivially isomorphic to the set of functions from a one-element set into $A$.
If you want it to be isomorphic to an algebra of functions from $M$ into $k$, then I can give a more meaningful answer (though it still may not be what you want). Any algebra $A$ of functions from some set $M$ into $k$ comes with evaluation homomorphisms $\hat{p}: A \to k$ given by $\hat{p}(f) = f(p)$, for all $p \in M$. So a necessary condition is that there is a separating family of homomorphisms from $A$ into $k$. "Separating" means that the intersection of their kernels is $\{0\}$.
In fact this condition is sufficient, too. Given such a family, suppose it is indexed by a set $M$, i.e., the homomorphisms are $\{\phi_p: p \in M\}$. Then define a map $\Gamma: A \to k^M$ (where $k^M$ is the algebra of functions from $M$ into $k$) by $(\Gamma a)(p) = \phi_p(a)$. If the family is separating this will be a monomorphism.
• I did not totally think this through, but I'd say, I want to know when is an $R$-algebra ($R$ is a commutative ring) isomorphic to a $C^ k(M)$-algebra, where $C^k(M)$ is the ring of $C^k$-functions over a $C^k$-manifold $M$, where $k$ might be $0,1,...,\infty,\omega$. But I guess we can also have $M$ be a topological space without it having to be a topological manifold. (I guess isomorphism of $R$ and $C^k(M)$ is also needed for that.) – Bence Racskó Jan 2 '17 at 22:01 | 1,492 | 5,082 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.046875 | 3 | CC-MAIN-2020-05 | latest | en | 0.910524 |
http://geeksquad.fixya.com/support/t27289991-capacity_frigidaire_washer_gltf1240a | 1,558,549,209,000,000,000 | text/html | crawl-data/CC-MAIN-2019-22/segments/1558232256887.36/warc/CC-MAIN-20190522163302-20190522185302-00482.warc.gz | 83,587,879 | 33,944 | # What is the capacity of Frigidaire washer GLTF1240A
Posted by on
• Level 2:
An expert who has achieved level 2 by getting 100 points
MVP:
An expert that got 5 achievements.
Governor:
An expert whose answer got voted for 20 times.
Scholar:
An expert who has written 20 answers of more than 400 characters.
• Expert
The Frigidaire GLTF1240A clothes washer has a tub capacity of 2.65 cubic feet. Under typical conditions, based on a yearly average of 392 loads of laundry, the Frigidaire GLTF1240A is estimated to use 341 kilowatts of energy per year. Obviously, this number will vary depending on things like the size of loads that you wash, and temperature at which you wash your clothes. Also, this figure is calculated according to a test done by the US Department of Energy which takes into account the energy consumed by the washer as well as the energy needed to heat the water with an electric water heater. If you use a gas water heater you will probably use fewer kilowatt hours but you will consume gas to heat the water.
Its estimated water use is 9,806 of gallons per year. It is estimated to use 9.4 gallons per cycle, per cubic foot of tub volume. You can use this "per cycle, per cubic foot" to compare washers against each other. The lower the number, the more efficiently the washing machine uses water, assuming your clothes are equally clean after being washed in either washing machine.
Posted on Apr 16, 2019
• iain thomson Apr 16, 2019
The Frigidaire GLTF1240A clothes washer has a tub capacity of 2.65 cubic feet. Under typical conditions, based on a yearly average of 392 loads of laundry, the Frigidaire GLTF1240A is estimated to use 341 kilowatts of energy per year. Obviously, this number will vary depending on things like the size of loads that you wash, and temperature at which you wash your clothes. Also, this figure is calculated according to a test done by the US Department of Energy which takes into account the energy consumed by the washer as well as the energy needed to heat the water with an electric water heater. If you use a gas water heater you will probably use fewer kilowatt hours but you will consume gas to heat the water.
Its estimated water use is 9,806 of gallons per year. It is estimated to use 9.4 gallons per cycle, per cubic foot of tub volume. You can use this "per cycle, per cubic foot" to compare washers against each other. The lower the number, the more efficiently the washing machine uses water, assuming your clothes are equally clean after being washed in either washing machine.
• adrian Brody May 08, 2019
Customer helpline phone number is +1 8OO 648 162O
×
• Level 1:
An expert who has achieved level 1.
Corporal:
An expert that has over 10 points.
Problem Solver:
An expert who has answered 5 questions.
• Contributor
Conatct customer help,line phone number is +1 8OO 648 162O ,,.....
Posted on May 08, 2019
SOURCE:
Hi there,
Save hours of searching online or wasting money on unnecessary repairs by talking to a 6YA Expert who can help you resolve this issue over the phone in a minute or two.
Best thing about this new service is that you are never placed on hold and get to talk to real repairmen in the US.
Here's a link to this great service
Good luck!
Posted on Jan 02, 2017
It's in the model number, and it's 4.4 cft. Which is one of the largest capacity washers available.
Posted on May 27, 2009
×
my-video-file.mp4
×
## Related Questions:
### The fabric softner tray isn't draining
Hi,
Here is a tip that I wrote about what I found out about the fabric softener not dispensing into the washer...
Fridgidaire Washing Machine Fabric Softner not going into washer
heatman101
Feb 21, 2011 | Frigidaire GLTF1240A Front Load Washer
### LOOSE DRUM, BANGING
Hi,
When my washer did that, I found that the shocks that hold the tub were bad...
Check out this tip I wrote about that...
http://www.fixya.com/support/r3663122-washer_problems_washer_noise_when
heatman101
Apr 25, 2010 | Frigidaire GLTF1240A Front Load Washer
### Water under washing machine
well u definitly have a leak may be just a hose or a pump problem u need to flip it over to check where the leak is then post again
Oct 28, 2009 | Frigidaire GLTF1240A Front Load Washer
### Washer will not go into fast spin cycle
SOUNDS LIKE YOU MAY HAVE TTO REPLACE YOUR BELT ON YOUR WASHER. I AM NOT A MECHANIC, BUT I THINK THIS WILL HELP.
May 13, 2009 | Frigidaire Washing Machines
### Water leak from front load washer - Frigidare GLTF1240A
just remove bolts, open bit enough to fill with silicone, close back wait to dry, ready to go for couple more years.
Apr 25, 2009 | Frigidaire GLTF1240A Front Load Washer
### Blinking lights and failure to start
Suggestion: 1 Press and hold down 'spin' & 'dry' buttons to unlock controls. 2 Have u overloaded the machine with clothes? Reduce load. 3 Try 'spin' only after putting the clothes. Then put off machine & continue with main cycle. Three suggestions here........ What is the clothes capacity of the machine; try to limit within capacity. Try the "spin only" function aft putting the clothes. Then put the machine off & continue with the main cycle.
Feb 07, 2007 | Frigidaire GLTF1240A Front Load Washer
## Open Questions:
#### Related Topics:
232 people viewed this question
Level 3 Expert
Level 2 Expert | 1,331 | 5,353 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.671875 | 3 | CC-MAIN-2019-22 | latest | en | 0.952602 |
http://journal.seu.edu.cn/oa/DArticle.aspx?type=view&id=200605013 | 1,603,318,943,000,000,000 | text/html | crawl-data/CC-MAIN-2020-45/segments/1603107878633.8/warc/CC-MAIN-20201021205955-20201021235955-00658.warc.gz | 53,800,763 | 11,487 | # [1]曾庆化,刘建业,赵伟,等.激光陀螺惯导系统硬件增强角速率输入圆锥算法[J].东南大学学报(自然科学版),2006,36(5):746-750.[doi:10.3969/j.issn.1001-0505.2006.05.013] Zeng Qinghua,Liu Jianye,Zhao Wei,et al.Coning algorithm with hardware-enhanced angle rate of RLG SINS[J].Journal of Southeast University (Natural Science Edition),2006,36(5):746-750.[doi:10.3969/j.issn.1001-0505.2006.05.013] 点击复制 激光陀螺惯导系统硬件增强角速率输入圆锥算法() 分享到: var jiathis_config = { data_track_clickback: true };
36
2006年第5期
746-750
2006-09-20
## 文章信息/Info
Title:
Coning algorithm with hardware-enhanced angle rate of RLG SINS
Author(s):
College of Automation Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Keywords:
V448
DOI:
10.3969/j.issn.1001-0505.2006.05.013
Abstract:
A hardware-enhanced angle rate algorithm is put forward. It is a new coning algorithm with ring laser gyro(RLG)angle rate and hardware-integrated angle increment according to the operation principle of mechanically dithered RLG angle rate and angle increment. Associated with the analysis of the new algorithm in pure coning environment, general expression of rotating vector in the new algorithm is given, and the basic and optimized algorithm of the new coning compensation algorithm are deduced. The analysis of residual error coefficients, cross-product coefficients’ regularity and the further simulation result indicate that the hardware-enhanced angle rate algorithm has higher precision and can be a new idea to improve the attitude accuracy of the strapdown inertial navigation system.
## 参考文献/References:
[1] Miller R B.A new strap-down attitude algorithm[J].Journal of Guidance,Control,and Dynamics,1983,6(4):287-291.
[2] Lee J G,Yoon Y J,Mark J G.Extension of strap-down attitude algorithm for high-frequency base motion[J]. Journal of Guidance,Control,and Dynamics,1990,13(4):738-743.
[3] 林雪原,刘建业,赵伟.一种改进的旋转矢量姿态算法[J].东南大学学报:自然科学版,2003,33(2):182-185.
Lin Xueyuan,Liu Jianye,Zhao Wei.Improved rotation vector attitude algorithm[J].Journal of Southeast University:Natural Science Edition,2003,33(2):182-185.(in Chinese)
[4] Ignagni M B.Efficient class of optimized coning compensation algorithms [J].Journal of Guidance,Control and Dynamics,1996,19(2):424-429.
[5] Park Chan Gook,Kim Kwang Jin,Lee Jang Gyu,et al.Formalized approach to obtaining optimal coefficients for coning algorithms [J].Journal of Guidance,Control and Dynamics,1999,22(1):165-168.
[6] 刘危,解旭辉,李圣怡.捷联惯性导航系统的姿态算法[J].北京航空航天大学学报,2005,31(1):54-50.
Liu Wei,Xie Xuhui,Li Shengyi.Attitude algorithm for strap-down inertial navigation system[J].Journal of Beijing University of Aeronautics and Astronautics,2005,31(1):54-50.(in Chinese)
[7] Savage Paul G.Strapdown inertial navigation integration algorithm design,part 1:attitude algorithms[J]. Journal of Guidance,Control,and Dynamics,1998,21(1):19-28.
[8] Mark J G,Tazartes D A.Tuning of coning algorithms to gyro date frequency response characteristics[J].Journal of Guidance,Control,and Dynamics,2001,24(4):641-647.
[9] 黄昊,邓正隆.旋转矢量航姿算法的一种新的表达[J].宇航学报,2001,22(3):92-98.
Huang Hao,Deng Zhenglong.A new expression for rotation vector attitude algorithm[J]. Journal of Astronautics,2001,22(3):92-98.(in Chinese)
[10] Kelly M.Roscoe equivalency between strap-down inertial navigation coning and sculling integrals algorithms[J]. Journal of Guidance,Control and Dynamics,2001,24(2):201-205.
## 相似文献/References:
[1]林雪原,刘建业,赵伟.一种改进的旋转矢量姿态算法[J].东南大学学报(自然科学版),2003,33(2):182.[doi:10.3969/j.issn.1001-0505.2003.02.016]
Lin Xueyuan,Liu Jianye,Zhao Wei.Improved rotation vector attitude algorithm[J].Journal of Southeast University (Natural Science Edition),2003,33(5):182.[doi:10.3969/j.issn.1001-0505.2003.02.016] | 1,188 | 3,676 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2020-45 | latest | en | 0.650799 |
https://askfilo.com/math-question-answers/anand-and-bimal-are-first-semester-b-tech-students-of-iit-in-city-x-they-had | 1,726,025,548,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651343.80/warc/CC-MAIN-20240911020451-20240911050451-00190.warc.gz | 87,083,097 | 34,802 | Question
Hard
Solving time: 4 mins
# Anand and Bimal are first semester B Tech students of IIT in city X. They had their schooling in two different schools located in cities and . Anand has friends in city . (who studied with him in the same school). Bimal has friends in city Z (who studied in the same school with him). In the IIT where they study now, they have common friends. Altogether, there are 12 who are friends of Anand or of Bimal. One day Anand and Bimal arranged a party for their friends and organized some games. Assume that no game is played between two friends who have studied/studying in the same institution. The maximum number of games that could be played, is
A
23
B
66
C
46
D
45
## Text solutionVerified
Let be the number of friends of Anand and Bimal who are studying in the IIT. Given that Total number of games played between any two friends Number of games played between two friends who studied/studying in the same institution is Number of games played if no game is played between friends of same institution From and Also for to be defined
96
Share
Report
Found 6 tutors discussing this question
Discuss this question LIVE
12 mins ago
One destination to cover all your homework and assignment needs
Learn Practice Revision Succeed
Instant 1:1 help, 24x7
60, 000+ Expert tutors
Textbook solutions
Big idea maths, McGraw-Hill Education etc
Essay review
Get expert feedback on your essay
Schedule classes
High dosage tutoring from Dedicated 3 experts
Trusted by 4 million+ students
Stuck on the question or explanation?
Connect with our Mathematics tutors online and get step by step solution of this question.
231 students are taking LIVE classes
Question Text Anand and Bimal are first semester B Tech students of IIT in city X. They had their schooling in two different schools located in cities and . Anand has friends in city . (who studied with him in the same school). Bimal has friends in city Z (who studied in the same school with him). In the IIT where they study now, they have common friends. Altogether, there are 12 who are friends of Anand or of Bimal. One day Anand and Bimal arranged a party for their friends and organized some games. Assume that no game is played between two friends who have studied/studying in the same institution. The maximum number of games that could be played, is Topic Permutations and Combinations Subject Mathematics Class Class 11 Answer Type Text solution:1 Upvotes 96 | 559 | 2,458 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.171875 | 3 | CC-MAIN-2024-38 | latest | en | 0.975734 |
http://edu.epfl.ch/coursebook/en/numerical-analysis-MATH-251-A | 1,560,702,914,000,000,000 | text/html | crawl-data/CC-MAIN-2019-26/segments/1560627998288.34/warc/CC-MAIN-20190616162745-20190616184745-00312.warc.gz | 51,000,338 | 9,443 | # Coursebooks
## Numerical analysis
#### Lecturer(s) :
Antolin Sanchez Pablo
English
#### Summary
This course offers an introduction to numerical methods for the solution of mathematical problems as: solution of systems of linear and non-linear equations, functions approximation, integration and differentiation and solution of differential equations.
#### Content
• Iterative methods for solving non-linear equations.
• Polynomial approximation: interpolation and least square methods.
• Numerical integration and differentiation.
• Solution of systems of linear equations: direct and iterative methods.
• Numerical approximation of differential equations.
• Introduction to MATLAB/OCTAVE software.
#### Keywords
Numerical algorithms; polynomial interpolation; numerical integration; numerical linear algebra; numerical solution of ODEs; iterative methods.
#### Learning Prerequisites
##### Required courses
Analyse, Algèbre linéaire
##### Important concepts to start the course
Analysis, linear algebra and programming.
#### Learning Outcomes
By the end of the course, the student must be able to:
• Choose a numerical method for solving a specific problem.
• Interpret obtained numerical results from a theoretical perspective.
• Estimate numerical errors.
• Prove theoretical properties of numerical methods.
• Implement numerical algorithms.
• Apply numerical algorithms to specific problems.
• Describe numerical methods.
• State theoretical properties of mathematical problems and numerical methods.
#### Transversal skills
• Use a work methodology appropriate to the task.
• Use both general and domain specific IT resources and tools
• Access and evaluate appropriate sources of information.
#### Teaching methods
Ex cathedra lectures; exercises in class and with computer using MATLAB/OCTAVE sofware.
#### Expected student activities
• Class attendance.
• Solution of exercises.
• Solution of problems using MATLAB/OCTAVE sofware.
#### Assessment methods
The exam may require to use a computer and MATLAB/OCTAVE software.
#### Supervision
Office hours Yes Assistants Yes Forum No
#### Resources
Yes
##### Bibliography
In English:
• Lecturer notes.
• A. Quarteroni et F. Saleri et P. Gervasio: « Scientific Computing with MATLAB and OCTAVE », Springer, 2014, ISBN 978-3-642-45367-0.
• A. Quarteroni, R. Sacco et F. Saleri : « Numerical Mathematics », Springer, 2007, ISBN 978-3-540-49809-4.
In French:
• Lecture notes.
• A. Quarteroni, P. Gervasio et F. Saleri : « Calcul Scientifique : Cours, exercices corrigés et illustrations en MATLAB et OCTAVE », Springer, 2010, ISBN 978-88-470-1676-7.
• A. Quarteroni, R. Sacco et F. Saleri : « Méthodes Numériques - Algorithmes, analyse et applications », Springer, 2007, ISBN 978-88-470-0495-5.
• J. Rappaz et M. Picasso: "Introduction à l'analyse numérique", PPUR - Collection: Enseignement des mathématiques - 2em édition - 2011
##### Notes/Handbook
Lecture notes will be provided.
### In the programs
• Semester
Fall
• Exam form
Written
• Credits
3
• Subject examined
Numerical analysis
• Lecture
2 Hour(s) per week x 14 weeks
• Exercises
1 Hour(s) per week x 14 weeks
• Passerelle HES - SIE, 2018-2019, Autumn semester
• Semester
Fall
• Exam form
Written
• Credits
3
• Subject examined
Numerical analysis
• Lecture
2 Hour(s) per week x 14 weeks
• Exercises
1 Hour(s) per week x 14 weeks
• Semester
Fall
• Exam form
Written
• Credits
3
• Subject examined
Numerical analysis
• Lecture
2 Hour(s) per week x 14 weeks
• Exercises
1 Hour(s) per week x 14 weeks
### Reference week
MoTuWeThFr
8-9
9-10
10-11
11-12
12-13
13-14
14-15
15-16
16-17
17-18
18-19
19-20
20-21
21-22
Under construction
Lecture
Exercise, TP
Project, other
### legend
• Autumn semester
• Winter sessions
• Spring semester
• Summer sessions
• Lecture in French
• Lecture in English
• Lecture in German | 964 | 3,865 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.96875 | 3 | CC-MAIN-2019-26 | latest | en | 0.810045 |
https://www.leonieclaire.com/the-best-writing-tips/who-solved-the-two-unsolvable-math-problems/ | 1,660,839,194,000,000,000 | text/html | crawl-data/CC-MAIN-2022-33/segments/1659882573242.55/warc/CC-MAIN-20220818154820-20220818184820-00682.warc.gz | 726,296,751 | 13,497 | ## Who Solved the two unsolvable math problems?
Dantzig
In statistics, Dantzig solved two open problems in statistical theory, which he had mistaken for homework after arriving late to a lecture by Jerzy Neyman….
George Dantzig
Dantzig with President Gerald Ford in 1976
Born George Bernard DantzigNovember 8, 1914 Portland, Oregon, US
Is there a math problem that Cannot be solved?
The Collatz conjecture is one of the most famous unsolved mathematical problems, because it’s so simple, you can explain it to a primary-school-aged kid, and they’ll probably be intrigued enough to try and find the answer for themselves. So here’s how it goes: pick a number, any number. If it’s even, divide it by 2.
What are the unsolvable math problems?
These Are the 10 Toughest Math Problems Ever Solved
• The Collatz Conjecture. Dave Linkletter.
• Goldbach’s Conjecture Creative Commons.
• The Twin Prime Conjecture.
• The Riemann Hypothesis.
• The Birch and Swinnerton-Dyer Conjecture.
• The Kissing Number Problem.
• The Unknotting Problem.
• The Large Cardinal Project.
### Who Solved the impossible equation?
Mathematics professor Andrew Wiles has won a prize for solving Fermat’s Last Theorem. He’s seen here with the problem written on a chalkboard in his Princeton, N.J., office, back in 1998. The mathematics problem he solved had been lingering since 1637 — and he first read about it when he was just 10 years old.
Who was the real Will Hunting?
Try the story of Evariste Galois. Born in 1811, he set down the foundations of mathematical group theory before he got himself killed at the age of twenty. Galois was a perfect real-life model for the fictional Will Hunting. William James Sidis, born in 1898, could read at 18 months.
Who is known as father of linear programming?
His algorithm is called the simplex method. Dantzig is known throughout the world as the father of linear programming. He received countless honors and awards in his life, including the National Medal of Science. But he was passed over by the Nobel Prize committee, even though linear programming was not.
## What is the IQ of Will Hunting?
Will Hunting = Will Sidis Sidis (with his adulthood estimated IQ=250-300) Could clear MIT entrance at age 8; Graduated from harvard at age 16, Entered law school age 16.
How much of Good Will Hunting is true?
Broadly speaking, Good Will Hunting isn’t based on a true story. But Damon did incorporate aspects of his personal life into the script. For example, Skylar (Minnie Driver), Will Hunting’s love interest, was based on Damon’s then-girlfriend, medical student Skylar Satenstein.
What is the hardest thing in math?
The easiest would be Contemporary Mathematics. This is usually a survey class taken by students not majoring in any science. The hardest is usually thought to be Calculus I. This is the full on, trigonometry based calculus course intended for science and engineering majors.
### What are some unsolved problems in mathematics?
There are many unsolved problems in mathematics. Some prominent outstanding unsolved problems (as well as some which are not necessarily so well known) include. 1. The Goldbach conjecture. 2. The Riemann hypothesis.
What is the hardest type of math?
One of the hardest classes is Math 55. Math 55 is often regarded as being the hardest undergraduate math class in America. It is a 1 year long course consisting of Math 55a and Math 55b typically taken by freshmen. The course is highly abstract pure math.
What is the most difficult mathematics?
This Is The Hardest Math Problem In The World Goldbach Conjecture. Let’s start our list with an extremely famous and easy-to-understand problem. Inscribed Square Problem. Take a pencil and draw a closed curve. Continuum Hypothesis. Modern math has infinities all over the place. Collatz Conjecture. First, pick any positive number n. Solving Chess. The Riemann Hypothesis. | 892 | 3,909 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.0625 | 3 | CC-MAIN-2022-33 | latest | en | 0.947668 |
https://nl.mathworks.com/matlabcentral/cody/problems/18-bullseye-matrix/solutions/1607325 | 1,585,831,821,000,000,000 | text/html | crawl-data/CC-MAIN-2020-16/segments/1585370506959.34/warc/CC-MAIN-20200402111815-20200402141815-00051.warc.gz | 597,838,803 | 15,704 | Cody
# Problem 18. Bullseye Matrix
Solution 1607325
Submitted on 12 Aug 2018 by André Nóbrega
This solution is locked. To view this solution, you need to provide a solution of the same size or smaller.
### Test Suite
Test Status Code Input and Output
1 Pass
n = 5; a = [3 3 3 3 3; 3 2 2 2 3; 3 2 1 2 3; 3 2 2 2 3; 3 3 3 3 3]; assert(isequal(bullseye(n),a));
x = 3 a = 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 3 a = 3 3 3 3 3 3 2 2 2 3 3 2 2 2 3 3 2 2 2 3 3 3 3 3 3 a = 3 3 3 3 3 3 2 2 2 3 3 2 1 2 3 3 2 2 2 3 3 3 3 3 3 a = 3 3 3 3 3 3 2 2 2 3 3 2 1 2 3 3 2 2 2 3 3 3 3 3 3
2 Pass
n = 7; a = [4 4 4 4 4 4 4; 4 3 3 3 3 3 4; 4 3 2 2 2 3 4; 4 3 2 1 2 3 4; 4 3 2 2 2 3 4; 4 3 3 3 3 3 4; 4 4 4 4 4 4 4]; assert(isequal(bullseye(n),a))
x = 4 a = 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 4 a = 4 4 4 4 4 4 4 4 3 3 3 3 3 4 4 3 3 3 3 3 4 4 3 3 3 3 3 4 4 3 3 3 3 3 4 4 3 3 3 3 3 4 4 4 4 4 4 4 4 a = 4 4 4 4 4 4 4 4 3 3 3 3 3 4 4 3 2 2 2 3 4 4 3 2 2 2 3 4 4 3 2 2 2 3 4 4 3 3 3 3 3 4 4 4 4 4 4 4 4 a = 4 4 4 4 4 4 4 4 3 3 3 3 3 4 4 3 2 2 2 3 4 4 3 2 1 2 3 4 4 3 2 2 2 3 4 4 3 3 3 3 3 4 4 4 4 4 4 4 4 a = 4 4 4 4 4 4 4 4 3 3 3 3 3 4 4 3 2 2 2 3 4 4 3 2 1 2 3 4 4 3 2 2 2 3 4 4 3 3 3 3 3 4 4 4 4 4 4 4 4 | 991 | 1,263 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.46875 | 3 | CC-MAIN-2020-16 | latest | en | 0.297143 |
https://metanumbers.com/100497199 | 1,643,174,200,000,000,000 | text/html | crawl-data/CC-MAIN-2022-05/segments/1642320304915.53/warc/CC-MAIN-20220126041016-20220126071016-00153.warc.gz | 443,657,093 | 7,485 | # 100497199 (number)
100,497,199 (one hundred million four hundred ninety-seven thousand one hundred ninety-nine) is an odd nine-digits composite number following 100497198 and preceding 100497200. In scientific notation, it is written as 1.00497199 × 108. The sum of its digits is 40. It has a total of 2 prime factors and 4 positive divisors. There are 91,361,080 positive integers (up to 100497199) that are relatively prime to 100497199.
## Basic properties
• Is Prime? No
• Number parity Odd
• Number length 9
• Sum of Digits 40
• Digital Root 4
## Name
Short name 100 million 497 thousand 199 one hundred million four hundred ninety-seven thousand one hundred ninety-nine
## Notation
Scientific notation 1.00497199 × 108 100.497199 × 106
## Prime Factorization of 100497199
Prime Factorization 11 × 9136109
Composite number
Distinct Factors Total Factors Radical ω(n) 2 Total number of distinct prime factors Ω(n) 2 Total number of prime factors rad(n) 100497199 Product of the distinct prime numbers λ(n) 1 Returns the parity of Ω(n), such that λ(n) = (-1)Ω(n) μ(n) 1 Returns: 1, if n has an even number of prime factors (and is square free) −1, if n has an odd number of prime factors (and is square free) 0, if n has a squared prime factor Λ(n) 0 Returns log(p) if n is a power pk of any prime p (for any k >= 1), else returns 0
The prime factorization of 100,497,199 is 11 × 9136109. Since it has a total of 2 prime factors, 100,497,199 is a composite number.
## Divisors of 100497199
4 divisors
Even divisors 0 4 2 2
Total Divisors Sum of Divisors Aliquot Sum τ(n) 4 Total number of the positive divisors of n σ(n) 1.09633e+08 Sum of all the positive divisors of n s(n) 9.13612e+06 Sum of the proper positive divisors of n A(n) 2.74083e+07 Returns the sum of divisors (σ(n)) divided by the total number of divisors (τ(n)) G(n) 10024.8 Returns the nth root of the product of n divisors H(n) 3.66667 Returns the total number of divisors (τ(n)) divided by the sum of the reciprocal of each divisors
The number 100,497,199 can be divided by 4 positive divisors (out of which 0 are even, and 4 are odd). The sum of these divisors (counting 100,497,199) is 109,633,320, the average is 27,408,330.
## Other Arithmetic Functions (n = 100497199)
1 φ(n) n
Euler Totient Carmichael Lambda Prime Pi φ(n) 91361080 Total number of positive integers not greater than n that are coprime to n λ(n) 45680540 Smallest positive number such that aλ(n) ≡ 1 (mod n) for all a coprime to n π(n) ≈ 5781074 Total number of primes less than or equal to n r2(n) 0 The number of ways n can be represented as the sum of 2 squares
There are 91,361,080 positive integers (less than 100,497,199) that are coprime with 100,497,199. And there are approximately 5,781,074 prime numbers less than or equal to 100,497,199.
## Divisibility of 100497199
m n mod m 2 3 4 5 6 7 8 9 1 1 3 4 1 5 7 4
100,497,199 is not divisible by any number less than or equal to 9.
## Classification of 100497199
• Arithmetic
• Semiprime
• Deficient
• Polite
• Square Free
### Other numbers
• LucasCarmichael
## Base conversion (100497199)
Base System Value
2 Binary 101111111010111011100101111
3 Ternary 21000002210020111
4 Quaternary 11333113130233
5 Quinary 201211402244
6 Senary 13550000451
8 Octal 577273457
10 Decimal 100497199
12 Duodecimal 297a6127
20 Vigesimal 1b822jj
36 Base36 1nu04v
## Basic calculations (n = 100497199)
### Multiplication
n×y
n×2 200994398 301491597 401988796 502485995
### Division
n÷y
n÷2 5.02486e+07 3.34991e+07 2.51243e+07 2.00994e+07
### Exponentiation
ny
n2 10099687006845601 1014990254964676725971599 102003677636245854900636253051201 10251083890141649292874366749500904085999
### Nth Root
y√n
2√n 10024.8 464.927 100.124 39.8502
## 100497199 as geometric shapes
### Circle
Diameter 2.00994e+08 6.31443e+08 3.17291e+16
### Sphere
Volume 4.25158e+24 1.26916e+17 6.31443e+08
### Square
Length = n
Perimeter 4.01989e+08 1.00997e+16 1.42125e+08
### Cube
Length = n
Surface area 6.05981e+16 1.01499e+24 1.74066e+08
### Equilateral Triangle
Length = n
Perimeter 3.01492e+08 4.37329e+15 8.70331e+07
### Triangular Pyramid
Length = n
Surface area 1.74932e+16 1.19618e+23 8.20556e+07
## Cryptographic Hash Functions
md5 19e68e2c38cdb486404f0469e9eccf6a f73d75226a9f984bb50b31c5683a72f817277ce4 2107fba52a0e626e50af5aed14a8b78399f6ab094cb215795265c6b2f899ac39 161f008644852575894d22e6ab7c1b28a1941f1f39cfe4939af07a8275ae028b79b632320852e5b5942c8a82de6fba8257aa9c4d4974b524b2919d967d4af724 599e7a62f64439d726525c0d6484d6a484096c94 | 1,613 | 4,574 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.359375 | 3 | CC-MAIN-2022-05 | latest | en | 0.759014 |
https://www.airmilescalculator.com/distance/yev-to-ysy/ | 1,725,967,296,000,000,000 | text/html | crawl-data/CC-MAIN-2024-38/segments/1725700651241.17/warc/CC-MAIN-20240910093422-20240910123422-00352.warc.gz | 615,558,551 | 17,194 | # How far is Sachs Harbour from Inuvik?
The distance between Inuvik (Inuvik (Mike Zubko) Airport) and Sachs Harbour (Sachs Harbour (David Nasogaluak Jr. Saaryuaq) Airport) is 321 miles / 516 kilometers / 279 nautical miles.
The driving distance from Inuvik (YEV) to Sachs Harbour (YSY) is 103 miles / 165 kilometers, and travel time by car is about 4 hours 8 minutes.
321
Miles
516
Kilometers
279
Nautical miles
1 h 6 min
## Distance from Inuvik to Sachs Harbour
There are several ways to calculate the distance from Inuvik to Sachs Harbour. Here are two standard methods:
Vincenty's formula (applied above)
• 320.525 miles
• 515.834 kilometers
• 278.528 nautical miles
Vincenty's formula calculates the distance between latitude/longitude points on the earth's surface using an ellipsoidal model of the planet.
Haversine formula
• 319.376 miles
• 513.985 kilometers
• 277.530 nautical miles
The haversine formula calculates the distance between latitude/longitude points assuming a spherical earth (great-circle distance – the shortest distance between two points).
## How long does it take to fly from Inuvik to Sachs Harbour?
The estimated flight time from Inuvik (Mike Zubko) Airport to Sachs Harbour (David Nasogaluak Jr. Saaryuaq) Airport is 1 hour and 6 minutes.
## Flight carbon footprint between Inuvik (Mike Zubko) Airport (YEV) and Sachs Harbour (David Nasogaluak Jr. Saaryuaq) Airport (YSY)
On average, flying from Inuvik to Sachs Harbour generates about 72 kg of CO2 per passenger, and 72 kilograms equals 159 pounds (lbs). The figures are estimates and include only the CO2 generated by burning jet fuel.
## Map of flight path and driving directions from Inuvik to Sachs Harbour
See the map of the shortest flight path between Inuvik (Mike Zubko) Airport (YEV) and Sachs Harbour (David Nasogaluak Jr. Saaryuaq) Airport (YSY).
## Airport information
Origin Inuvik (Mike Zubko) Airport
City: Inuvik | 503 | 1,927 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.15625 | 3 | CC-MAIN-2024-38 | latest | en | 0.803082 |
https://www.hackmath.net/en/math-problem/7950 | 1,603,502,156,000,000,000 | text/html | crawl-data/CC-MAIN-2020-45/segments/1603107881551.11/warc/CC-MAIN-20201023234043-20201024024043-00092.warc.gz | 736,038,639 | 11,883 | # Banknotes
Eva deposit 7800 USD in 50 banknotes in the bank.
They had value 100 USD and 200 USD.
How many were they?
Correct result:
a = 22
b = 28
#### Solution:
a+b = 50
100a+200b = 7800
a+b = 50
100•a+200•b = 7800
a+b = 50
100a+200b = 7800
a = 22
b = 28
Our linear equations calculator calculates it.
We would be pleased if you find an error in the word problem, spelling mistakes, or inaccuracies and send it to us. Thank you!
Tips to related online calculators
Do you have a linear equation or system of equations and looking for its solution? Or do you have quadratic equation?
## Next similar math problems:
• Three workers
Three workers were rewarded CZK 9200 and the money divided by the work they have done. First worker to get twice than the second, the second three times more than the third. How much money each worker received?
• Shoes and slippers
Shoes were three times as many as slippers. If shoes were cheaper by 120, it would be twice as expensive as slippers. How much cost shoes and how much slippers?
• The gardener
The gardener bought trees for 960 CZK. If every tree were cheaper by 12 CZK, he would have gotten four more trees for the same money. How many trees did he buy?
• Profitable bank deposit 2012
Calculate the value of what money lose creditor with a deposit € 9500 for 4 years if the entire duration are interest 2.6% p.a. and tax on interest is 19% and annual inflation is 3.7% (Calculate what you will lose if you leave money lying idle at negative
• The professor's birthday
Professor of mathematics had 57 birthdays. The director congratulated him. The professor asked the director: "And how old are you?" The director replied: "I'm exactly twice as many years than you were when I was old as to you today." How old is the direct
• Cranberries
They brought 65 kg of dried cranberries to the packing house. They made 150-gram and 200-gram packages from them. How many kilograms of cranberries did they use for the smaller 200 packages? How many 200 gram packages were there?
• Required reserves
Calculate what is the minimum amount of money that bank must hold in cash from your deposit 2750 Eur. How much money is ideally created in the banking system from your deposit if the level of minimum reserve requirement is 2.15%? Consider fractional reser
• Store
One meter of the textile was discounted by 2 USD. Now 9 m of textile cost as before 8 m. Calculate the old and new price of 1 m of the textile.
• MO Z8-I-1 2018
Fero and David meet daily in the elevator. One morning they found that if they multiply their current age, they get 238. If they did the same after four years, this product would be 378. Determine the sum of the current ages of Fero and David.
• Coins
The boy collected coins with value 5 CZK and 2 CZK when he had 50 pieces saved 190 CZK. How many he has each type of coin?
• Lottery - eurocents
Tereza bets in the lottery and finally wins. She went to the booth to have the prize paid out. An elderly gentleman standing next to him wants to buy a newspaper, but he is missing five cents. Tereza is in a generous mood after the win, so she gives the m
• Sugar - cuboid
Pejko received from his master cuboid composed of identical sugar cubes with count between 1000 and 2000. The Pejko eat sugar cubes in layers. The first day eat one layer from the front, second day one layer from right, the third day one layer above. Yet
• Friends
Some friends had to collect the sum 72 EUR equally. If the three refused their part, others would have to give each 4 euros more. How many are friends?
• Saving
The boy has saved 50 coins € 5 and € 2. He saved € 190. How many were € 5 and how many € 2?
• Siblings
Three siblings had saved up a total of 1,274 CZK. Peter had saved up to 15% more than Jirka and Hanka 10% less than Peter. How much money they saved each one of them?
• Sow barley
Farmers wanted to sow barley within 13 days. Due to the excellent weather, they managed to exceed the daily plan of sowing by 2 ha and therefore finished sow grain in 12 days. How many hectares of land did they sow with barley?
• Motion
From two different locations distant 232 km started against car and bus. The car started at 5:20 with average speed 64 km/h. Bus started at 7:30 with average speed 80 km/h. When they meet? How many kilometers went the bus? | 1,105 | 4,312 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4 | 4 | CC-MAIN-2020-45 | longest | en | 0.983058 |
http://cbsport.org/algebraic-graph.html | 1,560,892,659,000,000,000 | text/html | crawl-data/CC-MAIN-2019-26/segments/1560627998817.58/warc/CC-MAIN-20190618203528-20190618225528-00059.warc.gz | 32,460,147 | 20,106 | # Algebraic Graph
algebraic graphs s cool the revision website.
algebraic graphs s cool the revision website.
algebraic graphs s cool the revision website.
algebraic graphs s cool the revision website.
graphing equations using algebra calculator mathpapa.
algebraic graphs s cool the revision website.
10 basic algebraic graphs dummies.
algebraic graphs smileypotatoes.
algebraic graph theory wikipedia.
algebraic graph theory wikipedia.
solve systems of equations by graphing pre algebra graphing and.
eight basic algebraic curves dummies.
ixl graph a horizontal or vertical line algebra 1 practice.
graphing linear equations wyzant resources.
math help algebra quadratics theory graphs.
find domain and range from graphs college algebra.
algebraic graphs in canvas sitepoint.
advanced graphing cool math algebra help lessons graphs to know.
college algebra problems with answers sample 4 graphs of functions.
graphing equations and inequalities graphing linear equations.
algebra graphing equations inequalities unit quiz.
beginning algebra graphing linear equations youtube.
graphing polynomials cool math algebra help lessons relative.
linear graphs worksheets ks3 gcse by newmrsc teaching resources.
linear algebra algebraic graph theory identities mathematics.
determining the equation of a line from a graph worksheet wyzant.
gcse foundation algebraic graphs and their equations unit 8.
algebra graphing.
representing functions as rules and graphs algebra 1 discovering.
graphing linear inequalities.
algebraic graph theory cambridge mathematical library norman.
table 1 from a flattening approach for attributed type graphs with.
install kalgebra algebraic graphing calculator into linux mint.
algebraic graph theory png and algebraic graph theory transparent.
algebraic graph theorey and fuzzy graph of algebraic structure m.
algebraic graph theoric measures of conflict institute west.
pattern vectors from algebraic graph theory white rose.
figure 5 from graph polynomials and graph transformations in.
system description for formation control based on algebraic graph.
pdf algebraic graph theory by biggs m saravanan academia edu.
how to graph linear equations 5 steps with pictures wikihow.
the graph a linear equation in slope intercept form a math.
algebraic graph transformations for merging ontologies.
analysis and correctness of algebraic graph and model.
algebraic graph theory png and algebraic graph theory transparent.
lecture 10 introduction to algebraic graph theory eigenvalues and.
pdf a study on eulerian hamiltonian and complete algebraic graphs.
06050011 quantum computing and algebraic graph theory.
graphing linear inequalities.
algebraic graph transformations for merging ontologies.
pims mitacs summer school in algebraic graph theory event type.
symmetry vs regularity.
algebra algebraic graph graph axis graphing mathematical graph icon.
slope intercept form.
distance transitive graph wikipedia.
algebraic graph theory morphisms monoids and matrices de gruyter.
algebraic graph theory 1st edition 9783110254082 vitalsource.
drawing graphs of algebraic equations by dannytheref teaching.
workshop on algebraic graph theory.
chris godsil gordon royle algebraic graph theory graph theory.
linear equations graphs algebra i math khan academy.
algebraic graph transformations for merging ontologies.
algebraic graph theory in the analysis of frequency assignment.
algebraic graph theory.
algebraic graph theory podgraf complement graph induced png.
graphing linear equations.
phd course algebraic graph algorithms university of copenhagen.
algebraic graph theory wikivividly.
linear equations.
topics in algebraic graph theory by lowell w beineke and robin j.
graphing algebraic functions domain and range maxima and minima.
figure 3 from algebraic graph statics semantic scholar.
algebraic graph theory revolvy.
pdf algebraic graph theory.
free algebra 1 worksheets i found perfect for supplemental work for.
algebra 1 parcc question graph y mx b voxitatis blog.
graphing linear equations.
algebraic graph transformations for merging ontologies.
slope from graph algebra practice khan academy.
trigonometric function graph math entertaining mathematics.
algebraic equations from data agenda 1 min 2 algebra graph paper.
pdf a study on isomorphism of algebraic graphs. | 884 | 4,382 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.53125 | 3 | CC-MAIN-2019-26 | latest | en | 0.826439 |
https://twiki.cern.ch/twiki/bin/view/Main/TestForTrigger?rev=3 | 1,575,632,755,000,000,000 | text/html | crawl-data/CC-MAIN-2019-51/segments/1575540487789.39/warc/CC-MAIN-20191206095914-20191206123914-00126.warc.gz | 586,461,834 | 10,168 | TWiki> Main Web>TWikiUsers>HuguesBrun>TestForTrigger (revision 3)EditAttachPDF
Muon HLT Efficiency Measurement with the "Reference-Trigger" Method
Goal
Very useful to measure the efficiency of complex trigger paths, such as double-muon and dimuon triggers, combination of triggers, cross-triggers, etc. The description and examples below refer to the specific cases of (combinations of) double-muon trigger paths HLT_Mu17_Mu8, HLT_Mu17_TkMu8, HLT_Mu22_TkMu8.
Ingredients
Choice of the binning
The measurement can be done in one single bin on in several bin depending of the use. The bins are just to be large enough to not lack of statistic for the efficiency computation.
Choice of the reference trigger
The measurement is based on the computation of the complex trigger paths efficiency for event passing a reference trigger. The reference trigger should be chosen to have a high efficiency on the events passing the complex trigger to not bias the result. For example, HLT_Mu17 is the best candidate for double muon triggers like HLT_Mu17_Mu8, HLT_Mu17_TkMu8, HLT_Mu22_TkMu8.
Reference trigger efficiency
The reference trigger efficiency on muon can be measured using the standard Tag and Probe method as documented here. The reference trigger efficiency on di-muon event can be computed from the reference trigger efficiency on muon as follow : if is the efficiency of the ref trigger on muon, then the efficiency on di-muon, , will be computed using this formula :
Complex efficiency after reference trigger
This is the efficiency of the complex trigger path on di-muon events triggered by the ref trigger. This efficiency can be obtained by fitting the Z peak as in usual Tag and Probe.
Complex trigger efficiency
The final number is obtain bin by bin by multiplying the efficiency of the reference trigger by the efficiency of the complex trigger after reference trigger.
Recipe
Recipe for HLT_Mu17_Mu8 and HLT_Mu17_TkMu8 paths
Reference trigger efficiency
The chosen reference trigger chosen is HLT_Mu17. As it is prescaled, it is needed to request the tag muon to be match with HLT_Mu17 in order measure an efficiency without prescale. The efficiencies of the reference trigger for event passing the POG loose working point (Global OR Tracker muon AND PF muon).
plots here
Edit | Attach | Watch | Print version | | Backlinks | Raw View | Raw edit | More topic actions...
Topic revision: r3 - 2013-08-23 - HuguesBrun
Webs
Welcome Guest
Cern Search TWiki Search Google Search Main All webs
Copyright &© 2008-2019 by the contributing authors. All material on this collaboration platform is the property of the contributing authors.
Ideas, requests, problems regarding TWiki? Send feedback | 639 | 2,718 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.59375 | 3 | CC-MAIN-2019-51 | latest | en | 0.86994 |
https://mathematica.stackexchange.com/questions/tagged/polygons?tab=Active | 1,627,403,173,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046153392.43/warc/CC-MAIN-20210727135323-20210727165323-00622.warc.gz | 396,422,776 | 44,860 | # Questions tagged [polygons]
The tag has no usage guidance.
96 questions
Filter by
Sorted by
Tagged with
6k views
### How can we make publication-quality PlotMarkers without version 10?
Suppose that for certain reasons we are not yet using Mathematica version 10, or we have a version with buggy PlotMarkers. It is well known that the default markers ...
33 views
### Finding the shared edged between two polygons
Imagine I have two convex polygons pol1 and pol2 with a shared edge. Take, as an example, ...
61 views
### Inner polygon approximation
Suppose you want to perform an inner approximation of a planar semialgebraic set by a finite set of polygons. The following is a quick way to yield an approximation but I am not convinced that it is ...
55 views
### On the geometric transformations of polygons
Each starry set has the following property: ...
261 views
### On the ordering of the vertices of a polygon
Writing: pts = {{1, 0}, {2, 0}, {2, 3}, {3, 3}, {3, 4}, {0, 4}, {0, 3}, {1, 3}, {1, 0}}; Graphics[{Red, Point[pts], Blue, Line[pts]}] we get: and all of this ...
366 views
### Efficiently compute Minkowski sum of a 2D Region and a Disc of radius r?
Given a 2D Mathematica Region, e.g. A = Region[RegionDifference[Disk[{0, 0}, 2], Disk[{2, 0}, 1]]], how can I grow the region by an arbitrary radius r? For example, ...
213 views
### Programming twisted pyramids based on polygon spirals
I'm interested in polygonal spirals and after reading Filling Space with Pursuit Polygons, I was curious if I could extend the program to create twisted pyramids as 3D-objects. My goal is something ...
82 views
### Fix (decompose) Degenerate Polygon (vertex coincides with edge)
The code Polygon[{{0,0},{3,0},{3,3},{1,3},{1,1},{2,2},{2,1},{0,1}}] produces, according to Mathematica, a disconnected polygon with 0 area. The vertex ...
22 views
### How to find points within a polygon region [duplicate]
I have a set of points, and a polygon region, I was wondering how can I used RegionMemmber to find the points that are within or outside the region? like their exact coordinates
44 views
### Failed to compute the area of a polygon or to use a polygon as a region of NMaximize function
I have a polygon generated by some data. Sometimes, I cannot compute the area of some of the generated polygons and I cannot correctly apply function NMaxmize on ...
58 views
### Exact symbolic area of an intersection of two regular pentagons with parameters
The question of finding the area of intwrsection of two regions with parameters has been answered here using and RegionIntersection with delimeter ...
188 views
### Exact symbolic area of an intersection of two polygons with parameters
On the one hand, can handle expressions which include parameters, including polygons, for example: Area[Polygon[{{1, 2}, {-1, 1}, {2, 5}, {1, a}}]] returns ...
170 views
### How to create a simple polygon
I am trying to create a simple (with no self-intersections) polygon having four given points as vertices, for example to calculate its perimeter later. However, it seems there is nothing for this ...
23 views
### Problem NIntegrating over 3D Polygon region with 4 points
I am trying to NIntegrate a non-planar polygonal region defined by 4 points, something like this: ...
194 views
### Polygon with an extended Boundary
I have a (not necessarily convex) Polygon, and want to create another Region that can be seen as an extended boundary of the ...
225 views
### How does CirclePoints function actually work when drawing a polygon?
By reading that CirclePoints gives the positions of n points equally spaced around the unit circle, I understood that : ...
8k views
### Filling a polygon with a pattern of insets
I am trying to fill a shape with diagonal lines. I am aware of Texture, but it rasterizes the fill pattern, which is not desirable. Here was my crack at it: ...
1k views
### Rounding an Irregular Polygon
Consider a random irregular convex polygon, for example, the 6-side polygon I want to define a function that, given a certain parameter r (roundness), rounds each ...
676 views
### A Smooth and Round Voronoi Mesh
I want the edges of a VoronoiMesh to be smooth and round. I have found the following code from this answer ...
43 views
### PolygonCoordinates extract the submits in the “wrong” order
I don't understand why, in the example below, PolygonCoordinates seems unable to correctly extract the summits of a rectangle in the "right" order so we can ...
348 views
### How to draw the outline of an icon made of several polygons?
I have an icon defined like that: speakerIcon = Graphics[{ Triangle[{{0, -1}, {1,1},{-1,1}}], Rectangle[{-1, -1},{1, 0}] }, ImageSize->20] I can ...
57 views
### Workaround for Opacity + Magnify bug in version 12?
Is there a workaround for the following bug in version 12.0.0. This is in MacOSX. ...
82 views
### Order of PolygonCoordinates[] is undefined in Mathematica 12?
Probably I am missing something fundamental. I try to get a polygon's ordered coordinates, but PolygonCoordinates[] yields something pretty unordered. Am I missing some essential function? ...
127 views
### Speed-up calculation of Area of Polygons
I have a complicated polygon in the 2d plane, that is an RegionIntersection of RegionUnion of triangles. Finally, I want to ...
200 views
### Parametric equation of rolling polygon (cyclogon)
Abraham Gadalla tried (but failed) to post the following question. Why does the red point not continue to follow the square (and the curve) for the full range $0\le x\le 2\pi$? ...
204 views
### Point lattice leading to triangle lattice
My main purpose is to eventually generate a triangle lattice from one original point, let's say the origin. So I want to start with the origin and generate 6 points around it, which are the vertices ...
52 views
### Deforming a unit square into a cylinder [duplicate]
I am new to Mathematica and I am trying to create a flat unit square in the x-z plane made out of polygons with vertices that divide it into equal-length intervals on each side. Then manipulate this ...
89 views
### Combining MeshRegions of generated 3D shapes and RandomPolygons
I am interested in generating 3D a joint BoundaryMeshRegion or a MeshRegion from different shapes and polygons. This works for <...
100 views
### Discrete Sum of Function over a Region
I might have completely missed something in my search. I want to discretely sum over a function, $f(x_i,y_j)$ multiplied by some other function $g$ over a general region $S$. \sum_{(x,y)\in S}f(x,...
1k views
### Polygons crash kernel?
Having some problems Generating random Polygons and then filtering by Area, specifically filter out degenerated polygons with ...
97 views
### How to remove unwanted lines in Graphics3D [duplicate]
When I run the following code: ...
311 views
### RegionMember[ ] in polygon
Bug introduced in 10.0 and fixed in 12.0. I am seeing a difference in RegionMember[ ] testing a point inside a 3-vertex polygon and testing a point inside a more ...
327 views
### Creating a MeshRegion object from a Graphics object
I am looking to convert a simple Graphics object that is defined by overlapping polygons into a MeshRegion object. For example, here is code that will create three overlapping triangles ...
67 views
### Problem defining a polygon
I have two polygons that are very similar as: ...
167 views
### Area of Generalized Koch Snowflake
I asked on the Math Stack Exchange here how I could find the area of a "generalized Koch snowflake". An $n$th generalized Koch snowflake, in my case, is formed almost the same as the Koch snowflake - ...
274 views
### Unexpected behavior of the procedure Area on the object 'Polygon'
Bug introduced in 11.3 or earlier and fixed on 12.0. Sometimes get a results, sometimes left unevaluated. For instance ...
310 views
### How to get size of each polygon of a Voronoi diagram using Shoelace formula?
The following code gets all vertices of all polygons (mesh cells) of VoronoiMesh[pts]: ...
761 views
### Is it possible to draw a hollow circle using polygon?
The question is simple. I have the following commands ...
2k views
### Calculate the total length of line segments within polygon
So we have a polygon with N vertices located on grid. All vertices are located at the intersection of cells (so their coordinates are integers). The objective is to calculate the total length of line ...
104 views
Is there a nice way I can find all possible triangulations (or quadrangulations) of a polygon (say 10-gon)? I want to have all possible diagrams with the diagonals marked (i.e. with unique name). ...
233 views
### Detect and fix invalid polygon
I have a polygon given by ...
187 views
### Others problems with GraphicsPolygonUtilsPolygonCombine
As a continuation of this question, writing: ...
248 views
### Problem with GraphicsPolygonUtilsPolygonCombine
Writing: Normal@ParametricPlot[{x, x^2 t}, {x, 0, 2}, {t, 0, 1}] /. p : {__Polygon} :> GraphicsPolygonUtilsPolygonCombine[p] I get: but if I write: ...
664 views
### How can I get the vertices of each polygon of a Voronoi diagram?
For the following image; How can I get the vertices (which are highlighted by orange dots) of each Voronoi polygon of the following Voronoi diagram? The picture is from the Internet. It is just for ...
245 views
### Getting random points on a mesh region
So I have a region defined by a MeshRegion,like ...
393 views
### How can I build a 3D slab of hexagons or a thick hexagon mesh?
So I tried this code for 2D, but I would like to have, say, a slab with a defined thickness. The code for 2D is ...
120 views
### Graphics3D ignores EdgeForm
Works great: Graphics[{FaceForm[White], EdgeForm[AbsoluteThickness[10]], Polygon[{{0, 0}, {1, 0}, {0, 1}}]}] Ignores color and thickness: ...
214 views
### Manipulate a polygon
I am an absolute beginner in Mathematica. For a small test project, I am supposed to change the following code such that the corners of the polygon {0, 0, 0}, {1, 1, 1}, {0, 1, 0}, {1, 0, 1} can be ... | 2,437 | 10,099 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.125 | 3 | CC-MAIN-2021-31 | latest | en | 0.885962 |
http://slideplayer.com/slide/5159729/ | 1,596,817,818,000,000,000 | text/html | crawl-data/CC-MAIN-2020-34/segments/1596439737204.32/warc/CC-MAIN-20200807143225-20200807173225-00346.warc.gz | 100,591,453 | 25,995 | # Class notes for ISE 201 San Jose State University
## Presentation on theme: "Class notes for ISE 201 San Jose State University"— Presentation transcript:
Probability & Statistics for Engineers & Scientists, by Walpole, Myers, Myers & Ye ~ Chapter 8 Notes
Class notes for ISE 201 San Jose State University Industrial & Systems Engineering Dept. Steve Kennedy 1
Populations & Samples
A population is the set (possibly infinite) of all possible observations. Each observation is a random variable X having some (often unknown) probability distribution, f (x). If the population distribution is known, it might be referred to, for example, as a normal population, etc. A sample is a subset of a population. Our goal is to make inferences about the population based on an analysis of the sample. A biased sample, usually obtained by taking convenient, rather than representative, observations, will consistently over- or under-estimate some characteristic of the population. Observations in a random sample are made independently and at random. Here, random variables X1, X2, …, Xn in the sample all have the same distribution as the population, X.
Sample Statistics
Any function of the random variables X1, X2, …, Xn making up a random sample is called a statistic. The most important statistics, as we have seen, are the sample mean, sample variance, and sample standard deviation:
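The formulas themselves were images on the original slide and did not survive extraction; as a stand-in, here is a minimal sketch of the three quantities in Python (the sample values are made up):

```python
import math

def sample_stats(x):
    """Return the sample mean, sample variance (n - 1 denominator), and sample std dev."""
    n = len(x)
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    return xbar, s2, math.sqrt(s2)

xbar, s2, s = sample_stats([5.9, 6.1, 6.0, 5.8, 6.2])  # illustrative observations
```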
Sampling Distributions
Starting with an unknown population distribution, we can study the sampling distribution, or distribution of a sample statistic (like Xbar or S) calculated from a sample of size n from that population. The sample consists of independent and identically distributed observations X1, X2, …, Xn from the population. Based on the sampling distributions of Xbar and S for samples of size n, we will make inferences about the population mean μ and variance σ². We could approximate the sampling distribution of Xbar by taking a large number of random samples of size n and plotting the distribution of the Xbar values.
Sampling Distribution Summary
Normal distribution: Sampling distribution of Xbar when σ is known, for any population distribution. Also the sampling distribution for the difference of the means of two different samples. t-distribution: Sampling distribution of Xbar when σ is unknown and S is used. Population must be normal. Also the sampling distribution for the difference of the means of two different samples when σ is unknown. Chi-square (χ²) distribution: Sampling distribution of S². Population must be normal. F-distribution: The distribution of the ratio of two χ² random variables. Sampling distribution of the ratio of the variances of two different samples. Population must be normal.
Central Limit Theorem
The central limit theorem is the most important theorem in statistics. It states that if Xbar is the mean of a random sample of size n from a population with an arbitrary distribution with mean μ and variance σ², then as n → ∞, the sampling distribution of Xbar approaches a normal distribution with mean μ and standard deviation σ/√n. The central limit theorem holds under the following conditions: For any population distribution if n ≥ 30. For n < 30, if the population distribution is generally shaped like a normal distribution. For any value of n if the population distribution is normal.
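As a quick illustration (not part of the original notes), one can simulate the sampling distribution of Xbar from a clearly non-normal population and check that its mean and spread match μ and σ/√n:

```python
import random
import statistics

population = [random.expovariate(1.0) for _ in range(100_000)]   # skewed, non-normal population
mu, sigma = statistics.mean(population), statistics.pstdev(population)

n = 30
xbars = [statistics.mean(random.choices(population, k=n)) for _ in range(5_000)]

print(statistics.mean(xbars), "vs", mu)               # close to the population mean
print(statistics.stdev(xbars), "vs", sigma / n**0.5)  # close to sigma / sqrt(n)
```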
We often want to test hypotheses about the population mean (hypothesis testing will be formalized later). Example: Suppose a manufacturing process is designed to produce parts with μ = 6 cm in diameter, and suppose σ is known to be .15 cm. If a random sample of 80 parts has xbar = … cm, what is the probability (P-value) that a value this far from the mean could occur by chance if μ is truly 6 cm?
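The observed sample mean is missing from this transcript, so the sketch below uses a placeholder value purely to show the calculation; xbar_observed is an assumption, not the figure from the original slide:

```python
from statistics import NormalDist

mu0, sigma, n = 6.0, 0.15, 80
xbar_observed = 6.04          # placeholder; the actual value was lost in extraction

z = (xbar_observed - mu0) / (sigma / n**0.5)
p_value = 2 * (1 - NormalDist().cdf(abs(z)))   # two-sided P-value
```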
Difference Between Two Means
In addition, we can make inferences about the difference between two population means based on the difference between two sample means. The central limit theorem also holds in this case. If independent samples of size n1 and n2 are drawn at random from two populations, discrete or continuous, with means μ1 and μ2 and variances σ1² and σ2², then as n gets large, the sampling distribution of Xbar1 - Xbar2 approaches a normal distribution with mean μ1 - μ2 and standard deviation √(σ1²/n1 + σ2²/n2).
Difference of Two Means Example
Suppose we record the drying time in hours of 20 samples each of two types of paint, type A and type B. Suppose we know that the population standard deviations are both equal to 1/2 hour. Assuming that the population means are equal, what is the probability that the difference in the sample means is greater than 1/2 hour? P(z > 3.16) ≈ 0.0008, which is very unlikely to happen by chance.
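The arithmetic behind the quoted z value, written out as a short sketch (the worked numbers on the slide were an image):

```python
from statistics import NormalDist

sigma, n = 0.5, 20
se = (sigma**2 / n + sigma**2 / n) ** 0.5   # std dev of XbarA - XbarB, about 0.158
z = (0.5 - 0.0) / se                        # (observed difference - 0) / se, about 3.16
p = 1 - NormalDist().cdf(z)                 # about 0.0008
```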
t-Distribution (when σ is Unknown)
The problem with the central limit theorem is that it assumes that σ is known. Generally, if μ is being estimated from the sample, σ must be estimated from the sample as well. The t-distribution can be used if σ is unknown, but it requires that the original population be normally distributed. Let X1, X2, ..., Xn be independent, normally distributed random variables with mean μ and standard deviation σ. Then the random variable T = (Xbar - μ)/(S/√n) has a t-distribution with ν = n - 1 degrees of freedom. The t-distribution is like the normal, but with greater spread, since both Xbar and S have fluctuations due to sampling.
Using the t-Distribution
Observations on the t-Distribution: The t-statistic is like the normal, but using S rather than σ. The table value depends on the sample size (degrees of freedom). The t-distribution is symmetric with μ = 0, but σ² > 1. As would be expected, σ² is largest for small n. It approaches the normal distribution (σ² = 1) as n gets large. For a given probability α, the table shows the value of t that has probability α to the right of it. The t-distribution can also be used for hypotheses concerning the difference of two means where σ1 and σ2 are unknown, as long as the two populations are normally distributed. Usually if n ≥ 30, S is a good enough estimator of σ, and the normal distribution is typically used instead.
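A short sketch of the t statistic in use, assuming SciPy is available (not part of the original notes); the sample values are made up:

```python
from math import sqrt
from scipy import stats  # assumed available

x = [5.9, 6.1, 6.0, 5.8, 6.2, 6.1, 5.9, 6.0]   # made-up sample, sigma unknown
n, mu0 = len(x), 6.0
xbar = sum(x) / n
s = sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))

t_stat = (xbar - mu0) / (s / sqrt(n))            # T = (Xbar - mu) / (S / sqrt(n))
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)  # two-sided, nu = n - 1 degrees of freedom
```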
Sampling Distribution of S²
If S² is the variance of a random sample of size n taken from a normal population with known variance σ², then the statistic χ² = (n-1)S²/σ² has a chi-squared distribution with ν = n-1 degrees of freedom. The χ² table is the reverse of the normal table: it shows the χ² value that has probability α to the right of it. Note that the χ² distribution is not symmetric. Degrees of freedom: As before, you can think of degrees of freedom as the number of independent pieces of information. We use ν = n-1, since one degree of freedom is subtracted due to the fact that the sample mean is used to estimate μ when calculating S². If μ is known, use n degrees of freedom.
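A matching sketch for the chi-squared statistic, again assuming SciPy; the numbers are illustrative only:

```python
from scipy import stats  # assumed available

n = 12
sigma2_0 = 0.04          # hypothesised population variance
s2 = 0.061               # observed sample variance (made up)

chi2 = (n - 1) * s2 / sigma2_0            # the statistic defined above
p_right = stats.chi2.sf(chi2, df=n - 1)   # probability to the right of the value
```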
F-Distribution Comparing Sample Variances
The F statistic is the ratio of two χ² random variables, each divided by its number of degrees of freedom. If S1² and S2² are the variances of samples of size n1 and n2 taken from normal populations with variances σ1² and σ2², then F = (S1²/σ1²) / (S2²/σ2²) has an F-distribution with ν1 = n1 - 1 and ν2 = n2 - 1 degrees of freedom. For the F-distribution table, the lower-tail value is obtained from the upper-tail value with the degrees of freedom swapped: f(1-α)(ν1, ν2) = 1/fα(ν2, ν1); note that since the ratio is inverted, ν1 and ν2 are reversed.
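And a sketch of the variance-ratio comparison implied here, with made-up sample variances, again assuming SciPy:

```python
from scipy import stats  # assumed available

s1_sq, n1 = 0.36, 10     # made-up sample variances and sizes
s2_sq, n2 = 0.20, 13

F = s1_sq / s2_sq                                # ratio of sample variances (sigma1 = sigma2 under H0)
p_right = stats.f.sf(F, dfn=n1 - 1, dfd=n2 - 1)  # right-tail probability
# Table relation: the lower-tail value f_(1-a)(v1, v2) equals 1 / f_a(v2, v1).
```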
F-Distribution Usage The F-distribution is used in two-sample situations to draw inferences about the population variances. In an area of statistics called analysis of variance, sources of variability are considered, for example: Variability within each of the two samples. Variability between the two samples (variability between the means). The overall question is whether the variability within the samples is large enough that the variability between the means is not significant. | 1,753 | 7,607 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.125 | 4 | CC-MAIN-2020-34 | latest | en | 0.904984 |
http://physgre.s3-website-us-east-1.amazonaws.com/1986%20html/1986%20problem%2053.html | 1,516,145,658,000,000,000 | text/html | crawl-data/CC-MAIN-2018-05/segments/1516084886758.34/warc/CC-MAIN-20180116224019-20180117004019-00554.warc.gz | 269,404,768 | 1,970 | ## Solution to 1986 Problem 53
The power radiated per unit solid angle for an oscillating electric dipole is proportional to $\sin^2 \theta$, where $\theta$ is the angle between the electric dipole moment and point where the radiation is being measured. This implies that the radiated power per unit solid is $0$ along the $x$ axis and maximum on the $y$-$z$ plane. This implies that it is answer (A) or (C) that is correct. To determine whether the electric field is the $xy$-plane or in the $\pm z$ direction, recall that the electric field in the radiation zone of an oscillating electric dipole is in the $\hat{\theta}$ direction which in this case is in the $x$-$y$ plane. Therefore, answer (C) is correct. | 178 | 713 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 11, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.078125 | 3 | CC-MAIN-2018-05 | longest | en | 0.916199 |
https://www.mql5.com/en/market/product/38656?source=Site+Market+Main+Rating004 | 1,618,813,721,000,000,000 | text/html | crawl-data/CC-MAIN-2021-17/segments/1618038878326.67/warc/CC-MAIN-20210419045820-20210419075820-00624.warc.gz | 961,881,101 | 44,883 | • Overview
• Reviews
• What's new
# EA Hedging for MT5
The EA works on the MT5 netting account type. (You can test the EA in the tester and optimize the parameters on history!)
EA strategy:
We know that highly correlated currencies almost always behave in a mirror fashion. But there are moments of deviation (divergence of currencies) from the normal value.
The EA opens orders in the direction of currency convergence.
If the discrepancy occurs for a long time, then the refilling system is used.
Further, when the total profit reaches CloseProfit (the value in the Deposit currency at which all positions are closed), the EA closes trades.
For trading, select currencies with a high correlation, for example:
EURUSD - GBPUSD
AUDUSD - NZDUSD
The parameters include the setting
Instrument_2
This is the name of the second instrument (the second currency pair). The first instrument is the one on which the EA is installed.
If you leave Instrument_2 empty and set the EA to one of the above pairs, the EA will determine the second instrument itself.
If you need other options, you can write any instrument name in Instrument_2.
You can use other combinations, such as gold - silver, oil - gas, or shares of state corporations.
The main thing is that they have a high correlation.
The EA does not use indicators. It analyzes the divergence of instruments (in our case currency pairs) in a given area.
The BarsWind parameter is the number of candles over which the divergence of the instruments is analyzed.
If the charts of the instruments diverge by a certain distance, the EA opens a pair of counter trades in the hope
that the instruments will come back together again.
The delta (currency divergence) at which the first positions are opened is calculated from the following parameters:
K_Min_Points = 1.5 - coefficient of the minimum Delta for open positions.
For example, over the last 100 candles instruments diverged by a maximum of 200%.
The EA will not open positions if the current divergence is less than 133 = (200/1.5)
K_Max_Points = 1.01 - pullback coefficient from the maximum delta for position opening.
For example, the maximum divergence for the entire period occurs on the current candle and is 150%.
We assume that the discrepancy may increase further and therefore do not open positions immediately.
This option exists so that positions are not opened immediately: it does not allow trading until the delta has pulled back from its maximum
down to the level 148 = (150/1.01).
StartDelta1 - the minimum delta for opening positions; it prevents opening when the instruments do not diverge for a long time. This is the minimum delta at which opening is allowed.
Dolivka - refilling with additional positions. If, after opening positions, the prices of the instruments continue to diverge,
the EA performs a refill: the loss of the open positions in points is calculated, and if the total difference is more than Dolivka, additional positions are opened.
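Read literally, the parameter examples above amount to the following first-entry rule. This is only an illustrative Python sketch of the description, not the EA's actual MQL5 code; the normalisation of the two price series and the default values are assumptions:

```python
def divergence_pct(prices_a, prices_b):
    """Percent divergence of two price series, each normalised to 100 at the first candle."""
    a = [p / prices_a[0] * 100 for p in prices_a]
    b = [p / prices_b[0] * 100 for p in prices_b]
    return [abs(x - y) for x, y in zip(a, b)]

def may_open_first_pair(deltas, k_min=1.5, k_max=1.01, start_delta=10.0):
    """First entry: current delta large enough vs. the window maximum, but already pulling back."""
    current, peak = deltas[-1], max(deltas)
    return (current >= peak / k_min      # e.g. not below 133 when the window maximum is 200
            and current <= peak / k_max  # e.g. wait for a pullback to 148 when the maximum is 150
            and current >= start_delta)  # StartDelta1: minimum divergence to trade at all
```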
If you wish, you can transfer the EA to manual trading.
To do this, there are three buttons in the lower right corner:
Manual - puts the EA into manual mode. In this mode it does not open or close positions on its own.
The Open button opens a pair of counter positions for the two instruments in the direction of the Delta convergence.
The Close button closes all positions.
Recommended products
AMG pro
Pavel Korkin
Specification - EUR / USD - Raw spread, 0 spread, commission 5.5 \$ per lot If you have any questions - feel free to contact me for help. Advantages - Fully adapted to real trading conditions - Opens trades only on the opening of the bar - Profit is fixed by Take Profit (at start setting) - Risk-management or fix lot - For fans there are virtual Stop levels Optoins WorkTF - working timeframe UseVirtualStops - use virtual stop loss UseRealStops - use real stop loss Close Mode
100 USD
EA Perfect Balance EURUSD h1 MT5
Sergey Demin
Full version. EURUSD Timeframe H1. Minimum initial deposit = \$ 1000 with lot = 0.01 It is safer to trade with a \$ 2,000 deposit with an initial lot = 0.01 Average number of transactions per month = 24 Attention: the EA is configured for trading with 0.01 lot. If the Trader wants to work with a lot different from 0.01 then: • If you use a starting lot = 0.02 - Maximum lot size when using Balance = 2.0 (min deposit = \$ 2000) • If you use a starting lot = 0.03 - Maximum lot size when u
210 USD
Colombus
Abdurahim Aras
It has been developed by A Academy for 2 years by expert staff. A great Expert Advisor was born. Please apply the recommended settings. ---------------------- Stop_Loss: 300 Current_Value: 149 Trailing Value: 150 Take_Profit: 300 Stop_Time:22:00 Start_Time:09:00 Magic_Number:2020 --------------------- You will see the best result on the XAU/USD D1 chart. --------------------- Contact: arasinvestment@gmail.com Abdurahim Aras - International Trade Expert -A Markets-Turkey Introdoctr Brok
1 000 USD
Profit Only MT5
Aleksandr Bebishev
Only profit! This is a smart grid trading robot. It works on my original algorithm! At medium risks, it shows more than 100% profit per year. ( Tested exclusively on real ticks and real accounts ) This robot is created to receive income more than any bank offers. It trades automatically around the clock on any currency pair! ( I do not recommend "exotic" currency pairs ) Adjustable risks. Adjustable position volume. Adjustable mesh pitch. Adjustable profit level. Not tied to a time f
1 000 USD
AW Recovery EA MT5
Alexander Nechaev
Advisor is a system designed to restore unprofitable positions. Simple setup, delayed launch during drawdown, locking, disabling other expert advisors, averaging with trend filtering and partial closing of a losing position built into one tool. The system of automatic calculations will prepare the settings in a few clicks. The use of partial closing of losses allows you to reduce losses with a lower deposit load, which ensures a more secure work with losses, in contrast to the grid strategies o
145 USD
ScorpionGrid
Evgenii Kuznetsov
Multi-currency grid Expert Advisor, in most cases it receives quite accurate entries. If an entry is not accurate enough, the positions are managed using an elaborate martingale strategy. Work on real account: http://tiny.cc/nkaqmz Entries are performed using the signals of RSI and Stochastic indicators, in the overbought/oversold areas, plus additional entry conditions based on a proprietary algorithm. Recommended timeframe - М15. Input parameters SetName - name of the current settings file Ma
80 USD
MarSe MT5
Roman Gergert
This Expert Advisor uses additional filters to determine the trend. It protects orders with the help of locking. The robot uses martingale. This EA use pending orders. Sooner or later martingale can lose the deposit, it is recommended to control the trading of the EA while analyzing the market. The EA trades on any symbol. The minimum recommended deposit is \$1000 (it is better to start with a cent account). The EA works on M1 timeframe. Telegram channel Settings: https://drive.google.com/file/
30 USD
VHV Trend U
LAUNCH PROMO -- Buy One EA, if you like it, put feedback and get the second for Free This is the opposite EA of my other EA "VHV Trend D" This EA is based on Up Trends with a customized intelligent algorithm in combination with RSI . Not too many parameters, it is very simple to use. Live Signal https://www.mql5.com/en/signals/867539 Recommended Time Frame is H1. Recommended Currency Pair : any currency pair can work (Tested on EURUSD) Amount: 100 \$ Fixed Lot, default 0.1 - you can chan
50 USD
Lock balancer MT5
Lock is a powerful tool for saving a trader’s money. Instead of the traditional stop loss, you can now use this robot. The robot will set the pending lock. With a sharp movement of the price against the trader, the lock becomes market, and therefore the loss does not increase. The main position is maintained and will bring profit as soon as the robot selects the right moment to unlock. The robot can be used to insure positions during manual trading, or as an addition to another robot. Princip
250 USD
QA Magic Recovery and Trade MT5
Volha Loyeva
MT4 Version QA Recovery .set + Manual Be sure to install the set file QA Recovery - this trading strategy is based on historically formed axioms of oversold and overbought zones. The advisor contains authors private algorithm for determining the most accurate entry points, as well as multiple indicators for additional confirmation. QA Recovery - completely novel trading system that is able to flawlessly adapt to the ever-changing market conditions, by virtue of its comprehensive plasticity and
99 USD
BusyGrid MT5
Grid EA designed for the hedging mode of the terminal. It works based on the median price of the bars. The EA implements protection against stop-outs. However, it is highly recommended to use a reasonable stop loss level and not to allow uncontrolled drawdowns on the account! The minimum deposit for trading is \$75 It is recommended to disable the EA and close all positions before the releases of high-impact news! Input Parameters TakeProfit — Take Profit value in points; StopLoss — Stop Loss v
FREE
The EURUSD looking at the Majors
Marta Gonzalez
The EURUSD looking at the Majors is a Multipair System for operate the EURUSD !!!!!IMPORTANT!!!!! THIS EA IS FOR USED IN EURUSD ONLY. !!!!!IMPORTANT!!!!! This system detects a multi-pair entry point based on the value of the majors and operates that entry in the EURUSD with its own and efficient algorithm. All tickets have an SL and TP and use optimized trailing. You have to use the name of the value in your broker, in the corresponding inputs. Very stable growth curve as resul
30 USD
Three Bar Break Free MT5
Stephen Reynolds
This is the free version of 3 Bar Break. The only difference is the lotsize has been disabled to 0.01. 3 Bar Break is based on one of Linda Bradford Raschke's trading methods that I have noticed is good at spotting potential future price volatility. It looks for when the previous bar's High is less than the 3rd bar's High as well as the previous bar's Low to be higher than the 3rd bar's Low. It then predicts the market might breakout to new levels within 2-3 of the next coming bars. Must als
FREE
Ilan trailing
Denis Kolomensky
Ilan trailing for MetaTrader 5 Determines the direction of trading according to candlestick patterns. Settings: Initial volume. Lot increasing exponent. Density of averaging (step of bends opening). Maximum amount of averaging (the maximum number of bends). Take Profit in points. Selecting the volume accuracy (number of digits after a decimal point in the volume). Magic number of the Expert Advisor. Slippage. Trailing Stop. Maximum possible loss as a percentage of the deposit.
11 USD
Money Printing Machine MT5 Bargain Hunter
Caroline Huang
Introduction The fundamental difference in Stocks compared to Forex market is that the stock price of financially sound companies will grow over time. However, from time to time, the stock price will dip or fall due to institutional profit taking, short-term under-performance and macro adverse economic news. This opens up great a opportunity to then enter the trade. This is where the Money Printing Machine MT5 Bargain Hunter EA comes in handy! The Money Printing Machine MT5 Bargain Hunter EA i
699 USD
Wild Side MT5
Marat Baiburin
159\$ valid until 19.04.2021 Price from Monday 199\$ Final price 2000\$ Live monitoring of the Wild Side https://www.mql5.com/ru/signals/962021 Demo monitoring of the Wild Side https://www.mql5.com/ru/signals/956622 MetaTrader 4 version https://www.mql5.com/ru/market/product/64857 Wild Side is a fully automatic expert Advisor for trading on the night Forex market. It does not use martingale, grid, or other "toxic" methods. Each trade is accompanied by a stop loss and take prof
199 USD
Robo Long Short
Rafael Guimaraes Carneiro
ATTENTION: For questions and support -> contact us on WhatsApp by clicking this link or add the phone number (12)99135-1768 directly. Or get in touch via the chat on this site (mql5.com). The Long Short Robot was created to trade correlated assets, where one asset is sold and another is bought, seeking to exploit the fact that, after moving away from the mean, the relationship between these assets tends to return to it. Videos: Installing Metatrader https://www.youtube.com/watch?v=mYO0NSAGQs0&list=PLKpQO3ODX
100 USD
CAP Universal Grid EA MT5
Grid trading is a highly efficient and mechanical trading strategy which has no reliance on direction, profits from volatility and uses the intrinsic wavy nature of the market. [ Installation Guide | Update Guide | Submit Your Problem | FAQ | All Products ] Easy to set up and supervise No indicators or hard analysis needed Grid trading is time-frame independent Requires little forecasting of the market Extracts money out of the market regularly Key Features Easy to set up and supervi
39 USD
The safest Martin
Yong Fan
The safest Martin EA's live performance can be seen in the signal https://www.mql5.com/zh/signals/814986 The safest Martin is a multi-currency EA that uses a martingale strategy, based on its own swing algorithm and combined with position-control techniques. The EA only trades after the price reaches a key level. The safest Martin trades 4 currency pairs at market: EURUSD, AUDNZD, NZDUSD, USDCAD, GBPCAD. The algorithm's signals have been verified by backtests on ten years of data and can achieve steady profits. The EA works on all timeframes without losing its profitability; however, the greatest efficiency is observed on H4, where the risk/profit ratio is best. A recommended account balance is above 10,000 USD, starting from 0.01 lots per 10,000 USD. Lots is the specific value when a fixed lot size is selected. H01Symbol--H05Symbol are the 7 currency pairs involved in trading. Please adjust them according to your broker's specific symbol names, adding the currency pair suffix.
388 USD
Stormer RSI 2
Ricardo Rodrigues Lucca
This strategy was learned from Stormer to be used on B3. Basically, 15 minutes before closing the market, it will check RSI and decided if it will open an position. This strategy do not define a stop loss. If the take profit reach the entry price it will close at market the position. The same happens if the maximal number of days is reached. It is created to brazilian people, so all configuration are in portuguese. Sorry Activations allowed have been set to 50.
99 USD
Golden Grid
Motsumi Goitse-modim( Makhene
One of the best EA's that uses grid trading technology to its full potential buy integration of AI algorithms that assist the bot in taking better trades than any ea. Disadvantages Ea requires substantial amount of capital to trade with Hedging account is required to acquire high profits Advantages Ea is simple and easy to use. Has a high ROI (Return On Investment) Minimal effort is required for maintaining parameters. Has easy to understand parameters. EA is profitable on the onset. EA
1 200 USD
Ultimatum Breakout MT5
Ruslan Pishun
Ultimatum Breakout - this trading system uses the strategy of valid breakouts, using multiple custom indicators for eliminating bad signals. The EA uses a very small SL so the account is always protected from equity drawdown with a very low risk-per-trade. The EA is fully adapted: calculates the spread — for pending orders, stop loss, trailing stop, breakeven. It was backtested and optimized using real ticks with 99,9% quality. It has successfully completed stress testing. No Martingale. No arb
68 USD
Standart ERT
Valerii Andriunichev
Fully automated trading system for MT5! The Expert Advisor is able to trade on one terminal with different types of instruments. Controls risks using Stoploss and TakeProfit settings. Uses TrailingStop to gain profit if the price has not reached TakeProfit and reversed. The robot does not conflict with itself in processing open positions of the position.There is a report on each trade in the logs with a detailed description of their actions. Uses the most optimal and effective indicators for tr
290 USD
Edge Edit MT5
Alessandro Grossi
START price! 30 USD (for the first 10 copies); afterwards it will be increased by 50% for every 10 copies sold, to protect the strategy from mass exploitation! This EA exploits the hedging technique by simultaneously opening Buy and Sell positions at a distance, increasing the lot size so that the Take Profits of both are always close to the current price. It is a very powerful tool that creates very high profits. It can be fully configured according to your own
280 USD
Darkray FX EA
Daut Junior
More informations at Telegram group: t.me/DARKRAYFXEAEN Darkray FX EA uses a return-to-average strategy coupled with buying and selling exhaust zone detection. ️ Expert Advisor for Forex ️ Any Symbol, CDFs, etc. ️ Developed for Metatrader 5 ️ Swing/Position trading ️ Accuracy higher than 95% ️ Low Drawndown Indicators available for setups settings: EMA200 • moving average of 200 periods (other periods can be used with excellent results as well); RSI • Checks the levels on sale for th
49 USD
Babel Assistant
Iurii Bazhanov
Babel assistant 1 The MT5 netting “ Babel_assistant_1 ” robot uses the ZigZag indicator to generate Fibonacci levels on M1, M5, M15, H1, H4, D1, W1 periods of the charts , calculates the strength of trends for buying and selling. It opens a position with "Lot for open a position" if the specified trend level 4.925 is exceeded. Then Babel places pending orders at the some Fibonacci levels and places specified Stop Loss , Take Profit. The screen displays current results of work on t
FREE
Rule Based Custom EA
Jens Noesel
I give this EA for Free. But i would be happy, if you share profitable configs with me. Thanks. With the RuleBasedCustomEA you can prove and trade your strategy without coding a single line. You can combine 10 rules out of 180 for the perfect entry. And 2 exits with each 3 rules. With over 180 Rules for entry and exits you can build an nearly unlimited amout of Tradingrobots. At this time 12 Indicators are supported. BB, ADX, ATR, Stoch, Envelopes and so on. More will follow. Attention : -
FREE
High frequency trend tracking
Xiong Luo
Use small stops + tracking stops to capture maximum profit in short term trends. Fixed initial trading volume of 0.01 hands. Trading volume will be dynamically added by 1% based on the current balance. This EA belongs to high frequency trading, which is risky. Please use it carefully for firm trading. Backtest only applies to select mode 1 Minutes OHLC. Apply to most currency pairs and gold. Recommend GOLD XAUUSD GBPUSD EURUSD ..... Parameter description: Timeframes: Timeframe selection. Defau
299 USD
EA Nine MT5
Ruslan Pishun
The EA uses 3 strategies: 1 - a strategy for breaking through important levels, 2-a strategy for reversing, and 3 - trading on a trend. To open a position, the expert Advisor evaluates the situation on the charts using 5 custom indicators. The EA also uses averaging elements in the form of several additional orders ( the EA does not increase the lot for averaging orders). The EA uses 8 adaptive functions for modifying and partially closing orders. Hidden Stop loss, Take Profit, Break Even and Tr
125 USD
EA Red Vision MT5
Ruslan Pishun
The Expert Advisor uses two strategies based on indicators: Moving Average, Envelopes, Commodity Channel Index, Average True Range. Trading takes place according to certain trend patterns (patterns of the price market), the adviser searches for a signal pattern and opens an order, then an order for averaging the first order can be opened. If the first order is closed at a loss, then the EA will open the second order with an increased lot. The adviser also uses economic news to achieve more accur
134 USD
Buyers of this product also purchase
BigExpert mt5
Renate Gerlinde Engelsberger
>>>>>>>>>>>>>>>>>>>> 65% discount (1000 USD >>> 345 USD) , special for the Next 10 buyers (1 left) <<<<<<<<<<<<<<<<<<< BigExpert is a fully automatic expert built for the EURUSD currency pair. BigExpert does not use Martingale and Grid, All Trades are covered by StopLoss and TakeProfit . BigExpert only works in EURUSD currency pairs. BigExpert has been tested for more than 11 years in Strategy Tester . BigExpert trades in most time frames, but the suggested time frame
345 USD
Paris MT5
Ruben Octavio Gonzalez Aviles
Paris is a 2-in-1 adaptive and dynamic algorithm with more than 99% profitable trades in the historical backtest. It usually opens multiple trades per week and closes them mostly within the same day. Introductory offer: Only 4 of 10 copies of this EA will be sold at the current price. Next price: \$399 NO martingale, smart recovery, grid trading or averaging in this algorithm. Be careful with such methods as they can quickly wipe out your portfolio. Recommended Broker: www.vant
349 USD
Cairo MT5
Ruben Octavio Gonzalez Aviles
Cairo is a fast and dynamic algorithm with more than 99% profitable trades in a historical backtest. It usually opens multiple trades per week and closes them mostly within the same day. Introductory offer: SOLD OUT ! Only 2 of 10 copies of this EA will be sold at the current price. Next price: \$499 NO martingale, smart recovery, grid trading or averaging in this algorithm. Be careful with such methods as they can quickly wipe out your portfolio. Recommended Broker: www.vantage
399 USD
Profalgo Limited
MORE THAN 3 YEARS OF LIVE RESULTS ALREADY -> https://www.mql5.com/en/signals/413850 CURRENTLY AT 60% DISCOUNT OF FINAL PRICE ONLY 1 COPY LEFT AT 390\$ Final price -> 990\$ !! BUY ADVANCED SCALPER AND RECEIVE 1 EA FOR FREE !!* (*conditions apply, contact me after purchase) !! READ SETUP GUIDE BEFORE RUNNING THE EA !! -> https://www.mql5.com/en/blogs/post/705899 Advanced Scalper is a professional trading robot that has been in development for many years. It uses very advanced exi
390 USD
Ivan Bebikov
The MT4 version of the expert Advisor is here - https://www.mql5.com/en/market/product/49187 (In MT5 and MT4 versions, there may be slight differences due to the implementation of the code on MT5) ------------------------------------------------------------------------------------------------------------------------------------------------------------- I recommend testing on a broker like my report (IC Market. s). (ECN account (Raw spread/Razor etc.)) The period for testing with 100% quality
499 USD
Exp TickSniper PRO FULL
Exp-TickSniper - high-speed tick scalper with auto-selection of parameters for each currency pair automatically. Do you dream of an adviser who will automatically calculate trading parameters? Automatically optimized and tuned? The full version of the system for MetaTrader 4: TickSniper scalper for MetaTrader 4 The EA has been developed based on experience gained in almost 10 years of EA programming. The EA strategy works with any SYMBOLS. The timeframe does not matter. The robot is based
300 USD
Aura Gold EA MT5
Stanislav Tomilov
75% discount (1999 USD > 499 USD) , special for the Next 10 buyers (2 left) Telegram channel: https://t.me/aura_gold_ea Aura Gold EA is a fully automated EA designed to trade GOLD only. It is based on machine learning cluster analysis and genetic algorithms. EA contains self-adaptive market algorithm, which uses price action patterns and standard trading indicators (CCI,ATR). Expert showed stable results on XAUUSD in 2011-2020 period. No dangerous methods of money managment used, no ma
499 USD
Euro Master MT5
Stanislav Tomilov
Price: \$499 4 of 10 copies left at this price Next price \$599 Live signal https://www.mql5.com/en/signals/866039 Telegram channel: https://t.me/aura_gold_ea Euro Master is an advanced, fully automated Expert developed to trade with EURUSD. Expert uses unique artificial intelligence technology for market analysis to find the best entry points. EA contains self-adaptive market algorithm with reinforcement learning elements. Reinforcement machine learning differs from supervised learning in a
499 USD
Swiss MT5
SWISS is a fully automated Price Action expert adviser designed to trade Gold(XAUUSD) , EURUSD and USDCHF >>>>>>>>>>>>>>>>>>>> 65% discount (999 USD >>> 349 USD) , special for the Next 10 buyers (6 left) <<<<<<<<<<<<<<<<<<< Normal price \$999 SUGGESTED BROKER FOR SWISS Telegram channel https://t.me/NeuralEngineEA What is Price Action? Price action is the movement of a security's price plotted over time. Price action forms the basis for all technical analysis of a stock, co
349 USD
London Pride EA MT5
Dmitrii Golubev
Live Signal https://www.mql5.com/en/signals/943594 Telegram Channel https://t.me/Successful_Forex_EA Recommended VPS: https://ruvds.com/pr5256 Recommended Broker: https://ru.weltrade.com/?r1=ipartner&r2=36541 London Pride is a high-end, totally automated Expert Advisor specifically devised to trade with GBPUSD currency pair. It is ideal for working on M15 timeframe. The EA uses sophisticated machine learning and self-adaptive algorithms to bring your
499 USD
EuroClassic
Sergej Sergienko
Price: \$299 8 of 10 copies left at this price Next price \$349 Live Signal https://www.mql5.com/en/signals/951858 Signal https://www.mql5.com/en/signals/945827 MT4 version https://www.mql5.com/en/market/product/63003?source=Site+Market+Product+Page I glad to introduce you classical EA, EuroClassic . Why Classic, because it uses standard indicators of mt5 platform(Moving Average, CCI,ADX,MACD), EA has Take Profit and Stop Loss, There are no dangerous methods of traiding(Martingale, Grid or somet
299 USD
GOLD EAgle mt5
Evgenii Aksenov
One Copy left for \$345 !!! Next price \$445 User's Manual: here Live signals: here Our programs : here Subscribe to our Telegram This GOLD EAgle is created for the XAUUSD (GOLD). This is a trend strategy that uses the TrendLine PRO indicator and trades on the M5 timeframe ( download the set file ) BLOG of GOLD EAgle: here The EA has a mobile trading panel for managing auto-trading functions and the ability to open trades manually.
345 USD
FrankoScalp MT5
Konstantin Kulikov
I have been developing, testing and correcting this automated scalping system for a long time: https://www.mql5.com/en/signals/author/test-standart . Last settings for EA in the comment : #79 Telegram channel with news: https://t.me/EAFXPRO Telegram group for questions and discussion: https://t.me/EAFXPRO_chat Currency pairs for which sets are developed: USDCHF, EURCHF, CADCHF, USDCAD, EURUSD, EURGBP, EURAUD, EURCAD, GBPUSD, GBPAUD, GBPCAD, NZDCAD, NZDUSD, AUDUSD, AUDCAD, AUDJPY, CHFJPY
397 USD
NightVision MT5
NightVision EA MT5 - is an automated Expert Advisor that uses night scalping trading during the closing of the American trading session. The EA uses a number of unique author's developments that have been successfully tested on real trading accounts. The EA can be used on most of the available trading instruments and is characterized by a small number of settings and easy installation. Live signal for NightVision EA: https://www.mql5.com/ru/signals/author/dvrk78 Recommended FX broker : IC M
349 USD
PZ Hedging EA MT5
Arturo Lopez Perez
This EA will turn your losing trades into winners using a unique imbalanced hedging strategy. Once the initial trade moves into negative territory a predefined number of pips, the recovery mechanism will kick in: it will place a limited amount of alternative trades above and below the current price, until all of them can be closed with a small net profit. [ User Guide | Installation Guide | Update Guide | Troubleshooting | FAQ | All Products ] Features Easy to use and supervise Trade easily
299 USD
True Range Pro MT5
Smart Forex Lab.
48 hour SALE 50% OFF Accurate Night/Intraday Scalping & Smart Grid System Expected perfomance 5-10% / mo (eurusd M5) 20-40% / mo (gbpusd M1) Features 100% automated trades Hard stoploss for every position Dynamic basket takeprofit High spread protection Fixed & Auto volume Easy to use & ready to go 2014-2020 99% tick quality optimized & backtesed True Range Pro MT4 >> https://www.mql5.com/en/market/product/54575 Set files https://www.mql5.com/en/blogs/post/726783 Monitoring
295 USD
AW Scalping Dynamics MT5
Alexander Nechaev
Advanced trading on trend reversals. The EA does not use grids or martingale. But, if necessary, they can be used in the input settings. 3 types of notifications and position locking when the maximum basket load is reached. The default settings are recommended for EURUSD on the M15 timeframe. Troubleshooting, appeal to the author - > https://www.mql5.com/en/blogs/post/741436 How the EA trades: First of all, when trading, the current trend activity is taken into account (the " Main T
395 USD
TrendLine GRID
Evgenii Aksenov
One Copy left for \$345 !!! Next price \$445 Monitoring deals: here Our programs : here Subscribe to our Telegram This is a grid EA based on the signals of the Trend Line PRO indicator: https://www.mql5.com/en/market/product/42399 for hedge accounts Trend Line GRID EA trade the first order based on the indicator signal and builds a grid if the price deviates. After a certain number of orders, the DrawDown Reduction function is enabled, which reduces the m
345 USD
Michela Russo
299 USD
Panoptic Dragon MT5
Aleksei Krasov
"Panoptic Dragon" is a high accuracy automatic trading solution. Expert is easy to set up and is suitable for both experienced traders and beginners . Only one trade at a time. Every trade has Stop Loss and Take Profit from very beginning, and they do not change. This robot enters the market during the London Stock Exchange (LSE). It is based on renko patterns, levels support and resistance, channel indicators with different periods and daily moving average. The direction is determined on the
599 USD
PZ Stop And Reverse EA MT5
Arturo Lopez Perez
This EA recovers losing trades using a sequential and controlled martingale: it places consecutive trades with increasing lot sizes until the initial trade is recovered, plus a small profit. It offers a complete hassle-free trading framework, from managing initial trades to recovery. Please note that the EA implements a martingale, which means that it takes balance losses that are recovered later. [ User Guide | Installation Guide | Update Guide | Troubleshooting | FAQ | All Products ] Featu
299 USD
Alexander Nechaev
An advanced tool for swing trading on corrective price movements. It works on trend rollbacks in the direction of its continuation, the size of the required correction is determined by the current volatility of the instrument or manually by the trader. After detecting a correction along the current trend, the EA waits for a signal to complete the correction and continue the trend, after which it opens a position. Instructions on how the advisor works - https://www.mql5.com/en/blogs/post/742819
395 USD
Trend Line PRO EA mt5
Evgenii Aksenov
One Copy left for \$295 !!! Next price \$395 The Expert Advisor trades on the signals of the Trend Line PRO indicator: download Orders are managed automatically. The EA has a Recovery function that increases the order size if the previous trade was closed with a loss. You can use from 1 to 3 orders at the same time. The Expert Advisor fully complies with the indicator signals and FIFO rules, does not use the grid function, which allows you to start trading with a minimum deposit of \$1
295 USD
AW Gold Trend Trading EA MT5
Alexander Nechaev
Fully automated trend advisor with an active strategy and an advanced averaging system. Orders are opened using a trend filter using oscillators for greater signal security. Has a simple and intuitive setting. The Expert Advisor is suitable for use on any instrument and timeframe. Troubleshooting, appeal to the author - > https://www.mql5.com/en/blogs/post/741436 MT4 version -> https://www.mql5.com/en/market/product/56647 Input variables: MAIN SETTING Size of the order -
295 USD
NorthEastWay MT5
PAVEL UDOVICHENKO
ONLY 6 COPIES OUT OF 10 LEFT AT \$ 2738 ! After that, the price will be raised to \$3419. My telegram channel: https://t.me/new_signals My telegram contact: https://t.me/paveludo My WeChat ID: PavelUdo NorthEastWay MT5 it is a fully automated “pullback” trading system, which is especially effective in trading on popular “ pullback ” currency pairs: AUDCAD, AUDNZD, NZDCAD. The system uses the main patterns of the Forex market in trading - the return of the price after a sharp movement in any
2 738 USD
PZ Averaging EA MT5
Arturo Lopez Perez
This EA turns your losing trades into winners using averaging. Once your initial trade moves into negative territory, the recovery mechanism will kick in and place consecutive market orders in the same direction at fixed price intervals, all of which will be closed with a combined profit or approximately breakeven. This mechanism is compatible with NFA/FIFO rules and accepted by US Brokers. [ User Guide | Installation Guide | Update Guide | Troubleshooting | FAQ | All Products ] Features Tr
299 USD
Red Hawk MT5
Profalgo Limited
ONLY 1 COPY LEFT AT 349\$ NEW PRICE: 399\$ Red Hawk is a "mean reversion" trading system, that trades during the quiet times of the market. It runs on 9 pairs currently: EURUSD, GBPUSD, USDCHF, EURCHF, EURGBP, AUCAD, AUDJPY, EURAUD and USDCAD. Recommended timeframe: M5 Recommended for HEDGING MT5 account types only! Since this kind of strategy works best with low spread and fast execution, I advise using an good ECN broker. Recommended broker: ICMarkets.com or Alpari.com Recommended VP
349 USD
Naragot Portfolio MT5
Alexander Mordashov
Portfolio of trend-following multicurrency trading systems based on principles of volatility breakout and breakouts of support/resistance levels. Consists of 7 strategies chosen from a pool of many other which I trade myself. PROMO! Naragot VPS Telegram Monitor for free for a review on Naragot Portfolio! Contact me! MT4 version : https://www.mql5.com/en/market/product/56136 Telegram group : https://t.me/naragotEA MQL Signal : https://www.mql5.com/en/signals/956440 Advantages : - I am
399 USD
Arrow Signal EA
An Expert Advisor based on the custom indicator Arrow Signal . Search for a signal at every tick. Positions are closed by an opposite signal. No trailing. There is no stop loss. There is no take profit. Has a minimum of settings Money management: Lot OR Risk - type Money management The value for "Money management" - Money management value Deviation - admissible slippage Buy Arrow code (font Wingdings) - character code from 'Wingdings' font for BUY signals Sell Arrow
300 USD
Maximus FX
Andre Chagas
- LAUNCH PRICE - LIMITED COPIES LICENSE 1 OF 5 US\$ 349.00 LICENSE 6 OF 10 US\$ 399.00 FINAL PRICE US\$ 499.00 MAXIMUS FX is a totally automated Forex trading robot, designed to work in EURUSD and USDCHF currency pairs, simultaneously. It uses an innovative strategy, based on own indicators which calculates the correlation between EUR and CHF. Each entry point is defined using advanced filters, developed specifically for this robot. When trading starts, new orders can
349 USD
More from author
Cm limit
Strategy: A grid of limit orders is placed at a specified distance (Step) from each other. Above the price, SellLimit orders are placed, below BuyLimit. The lot of the first order from the price can be specified by the parameter (Lots) or set as a percentage (RiskPercent) of the available funds. The orders that we place next have the lot size multiplied by the coefficient (K_Lot). The maximum lot size is limited by the parameter (Max_Lot). The essence of the strategy is that the price cannot
200 USD
Bancomat MT4
Bancomat The Expert Advisor does not require setting and optimization of parameters. At the same time, the advisor trades on 20 currency pairs. The principle of trading is hedging (transaction insurance). Thanks to this principle, the EA does not go into large drawdowns. Regulation of profitability is achieved by changing the lot. The minimum recommended deposit is from 500 usd with a lot of 0.01. At the same time, the expected profit will be 7-15% per month. The rest of the parameters can be
550 USD
Bancomat
The expert Advisor does not require configuration and optimization of parameters. At the same time, the expert Advisor trades on 20 currency pairs. The trading principle is hedging (insurance of transactions). Thanks to this principle, the EA does not enter large drawdowns. The regulation of profitability is achieved through changes in the lot. The minimum recommended Deposit is from 500 usd with a lot of 0.01. At the same time, the expected profit will be 7-15% per month. You can leave the oth
550 USD
Indicator panel
The panel shows 6 indicators and their signals for all timeframes. You can enable or disable various signals, entire timeframes, and individual indicators. if the alert button is pressed and all signals match, the indicator sends a message to the alert window. You can disable and enable both individual signals and the entire indicator for all timeframes, or disable individual timeframes for all indicators
FREE
Correlation for SH
Script for quickly selecting a tool with high correlation. The script is placed on the tool to which you need to select the second correlating one. Then you can change the number of bars to calculate and the timeframe. The script iterates through all available symbols in the market overview and returns the 20 most correlated ones. You can use the selected pairs to trade with THE new SH expert Advisor
FREE
Best Correlation
A script showing the correlation for three selected zones at once (TF-bar) In the parameters, select 3 options The data is displayed sorted by the first zone The script also creates a file in which it writes all the data without sorting. In this way, you can expand the number of zones to any number and perform the analysis in exsel Parameters: TF_Trade1 =PERIOD_M5; BarsCor1 = 300; TF_Trade2 =PERIOD_M5; BarsCor2 = 800; TF_Trade3 =PERIOD_M5; BarsCor3 = 2000; K = 0.8; WindSize
FREE
Hourglass
Logarithmic Network - cm-hourglass Expert Advisor The Expert Advisor places orders with a decreasing lot and step in the direction of the trend and with an increasing lot and step in the counter-trend direction. It sets Take Profit for every direction to avoid breakeven of the entire series. The farthest order in the direction of the trend is closed with a farthest counter-order so as to get the positive total, thus pulling the entire network to the price without letting it expand. Parameters Lo
300 USD
Offset
The indicator identifies the direction and strength of the trend. Bearish trend areas are marked with red color, bullish areas are marked with blue color. A thin blue line indicates that a bearish trend is about to end, and it is necessary to prepare for a bullish one. The strongest signals are at the points when the filled areas start expanding. The indicator has only two parameters: period - period; offset - offset. The greater the period, the more accurate the trend identification, but with a
50 USD
BreakdownLevel
The EA works for the breakthrough of the selected section of the chart. In the EA's settings, set time period in BoxTimeStart and BoxTimeEnd parameters (the first and the last hours of the check period) and this time period breakthrough length in MinBreakdown and MaxBreakdown parameters (minimum and maximum breakthrough length). Once attached to the chart, the EA analyzes the current price and starts placing orders for breakthrough of this area, if the price is higher (lower) than the price chan
250 USD
Matematik
The Expert Advisor is based on simple mathematics. The EA places two opposite-directed orders. Wherever the price goes, one order will always have a positive result, the second one will have negative. If we average it, then on the return movement of the price (only a few spreads) the averaged orders are closed and there is only profitable order left! The EA trades through its profit. Of course, the averaging positions also add profit due to MinProfit , especially if you use rebate programs to re
150 USD
ReopeningAfterTP
The idea of the Expert Advisor stems from the theory of rollbacks. We often find levels, from which the price rolls back. That trader finds such a point and places an order there. In order not to repeat the same procedures, use this Expert Advisor. The Expert Advisor opens a limit order where another order has just closed with a profit. All parameters, lot, Stop Loss, Take Profit and the direction are the same as of the closed order. Example A position is opened at 1,1234. After it closes, a l
50 USD
Copy
This is a simple and reliable trade copier. The EA is designed to copy trades from one or several accounts to one or several accounts. It is easy to use. You do not have to specify paths to files etc. Just specify number of the account to copy trades from and attach the same EA to this account. As you have understood, you need to attach one EA to both accounts. But the Account parameter of the EA, where trades will be copied to, must be set to the account number(s) from which these trades are co
50 USD
Fishing PRO
The Expert Advisor has several trading modes: modes for opening positions and pending orders. Positions are opened at a specified step. After the price has passed the specified distance, 1 step up - it sells, 1 step down - it buys. In this way a grid appears, which you close manually with the EA's buttons, or you leave the profit to the EA itself by pressing the auto-trading buttons. Opening a grid of pending orders: orders are opened depending on the settings. You can set any combinations of bu
250 USD
SmartHedge
The EA compares two instruments and opens opposite positions when these instruments diverge. Further, when the instruments converge, the EA closes positions on the total profit. The EA works on any instruments. Select couples can be on the table. Advisor enough to put on one pair of bundles. For example, for trading on GBPUSD — EURUSD, it is enough to put the EA on GBPUSD or EURUSD. On the chart, the EA draws both currencies and the bars indicate the distance between them. The parameters of th
500 USD
The EA opens trades using the indicator's graphical arrows. If the indicator needle does not have a binding buffers, it is possible to test this indicator with the help of dagnogo adviser. Specify the arrow codes in the parameters and the EA will trade on them. You can find out the arrow codes by opening the arrow properties. following values are set by default: 225; //Up arrow code 226; //Down arrow code
100 USD
Rul PRO
The main task of the expert Advisor is to reduce the drawdown and close all orders that are opened by other expert advisors or manually. But it also added the functions of regular trading, so now the expert Advisor can be used as the main tool for earning money. The principle behind its strategy: All the actions performed by the Expert Advisor are controlled by built-in trend indicators, but, as practice has shown, many users disable these indicators to accelerate a process of “settlement” (
250 USD
Rul Hedge MT4
Description of strategy: The EA trades on 2 pairs with a positive correlation. On one, he trades only for buying, on the second only for selling. If the position goes to a loss, the adviser begins to resolve it by opening deals much smaller in volume than the original one and biting off small pieces on price rollbacks. The opposite trade, which is in the black, will not be closed until the unprofitable one is resolved or until they reach the specified profit in total. Clearing (averaging) trade
300 USD
Manual Grid CM
The expert Advisor helps you set a network of pending orders and collect profit from any price movement. You can use it to trade many grid strategies. You can also use it to track open positions. "Buy Stop — - open a network of pending stop orders for sale "Sell Stop" - open a network of pending stop orders for purchase "Buy Limit" - open a network of pending limit orders for sale "Sell Limit" - open a network of pending limit orders for purchase "Close Buy" - button for closing the entire ne
250 USD
Velosity
The Expert Advisor analyzes the rate of price change and opens positions in the direction of price movement when the speed sign changes. Then it accompanies the positions with a trawl, which also depends on the speed. That is, when the price growth SL SL is pressed closer, when the speed increases, respectively, further, allowing the price to gain weight and prevents from closing when the market noise. tp also moves higher when the speed changes. If necessary, you can enable the transfer fu
350 USD
RUL simple virtual lock
Description of the Expert Advisor: You can trade with any strategies and any Expert Advisors, but there comes a time when trading comes to a standstill. All dogmas and rules are violated and you do not know what to do next. My hands drop and I want to take a break, but there are several thousand dollars at stake, which is so insulting to leave to the mercy of fate. You can of course just put a lock, go on vacation and then calmly sort everything out, and you can entrust all this to the advise
75 USD
Expert NEWS
The Expert Advisor trades on the jumps of the market, and does not use any indicators. The idea of the Expert Advisor is that stop orders move discretely in time on a set distance from the current price. If the price moves sharply to one side, the Expert Advisor simply does not manage to replace the order, which becomes a market order as a result. At that the opposite order is removed. Then the trailing stop of the order is turned on. The Expert Advisor also includes a feature of placing protect
75 USD
MartinNevalyshka
The "tilting doll" Expert Advisor that increases the volume after closing an unprofitable position. It can work with deals performed manually or with initial stop orders (if StartStopOrders = true). The Expert Advisor sleeps until there are no open positions. If StartStopOrders is set to True, it places two stop orders. Once one of them triggers, the other one is deleted. If the Expert Advisor detects an open position, it sets a take profit. If the price goes in profitable direction, it turns on
100 USD
New Smarthedge
The EA has the same trading principle as SMARTHEDGE . We trade simultaneously on drum instruments with high correlation. Transactions made on one instrument compensate for the drawdown of transactions on the other instrument. Thus, we can trade fairly large volumes with relatively low risk. The principle of operation is clearly visible in the video below. In this expert Advisor, I tried to simplify the settings as much as possible, but left all the main functions from the early development.
500 USD
Rul MT5
The expert Advisor is designed for dealing with complex situations, including Loka. In addition, the expert Advisor can successfully trade itself. To do this, it provides auto-trading functions. Parameters BUY – allow to resolve sales SELL – allow to resolve purchases Step = 60; – step between averaging positions ProfitClose – closing profit in currency Lot = 0.01; – the first lot of averaging K_Lot = 1.5; – averaging coefficient Max_Lot = 10.0; – maximum possible volume Sta
120 USD
Rul HEDGE
Description of strategy: The EA trades on 2 pairs with a positive correlation. On one, he trades only for buying, on the second only for selling. If the position goes to a loss, the adviser begins to resolve it by opening deals much smaller in volume than the original one and biting off small pieces on price rollbacks. The opposite trade, which is in the black, will not be closed until the unprofitable one is resolved or until they reach the specified profit in total. Clearing (averaging) trade
300 USD
Matematiks
Matematiks The expert Advisor is based on simple math. The expert Advisor puts 2 multidirectional orders further, wherever the price goes, it turns out that one order is always in the plus, the second in the minus. If we average it, then on the reverse price movement (only a few spreads), the averaged orders are closed and only the profitable one remains! It is thanks to his profit that the trade goes on. Of course, the averaging positions themselves also add profit to the piggy Bank at the e
150 USD
We choose instruments with high correlation for trading. It is necessary that all the tools in the group have a correlation higher than 0.7. To select such tools, I specially made a free BEST CORRELATION script, which can also be downloaded on the market (for free). The Expert Advisor goes through the list and finds the instruments that have deviated by the maximum distance up and down from the starting point and makes counter trades on them. The Expert Advisor sells (sell) at the highest of al
250 USD
RUL simple virtual lock MT5 | 11,433 | 48,675 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.578125 | 3 | CC-MAIN-2021-17 | latest | en | 0.874441 |
https://www.esaral.com/q/if-i-82072 | 1,720,943,845,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763514551.8/warc/CC-MAIN-20240714063458-20240714093458-00830.warc.gz | 681,295,743 | 11,513 | # if I =
Question:
If $I=\int_{1}^{2} \frac{d x}{\sqrt{2 x^{3}-9 x^{2}+12 x+4}}$, then:
1. (1) $\frac{1}{8}$ 2. (2) $\frac{1}{9}$
3. (3) $\frac{1}{16}$ 4. (4) $\frac{1}{6}$
Correct Option: , 2
Solution:
$f(x)=\frac{1}{\sqrt{2 x^{3}-9 x^{2}+12 x+4}}$
$f^{\prime}(x)=\frac{-1}{2}\left(\frac{\left(6 x^{2}-18 x+12\right)}{\left(2 x^{3}-9 x^{2}+12 x+4\right)^{3 / 2}}\right)$
$=\frac{-6(x-1)(x-2)}{2\left(2 x^{3}-9 x^{2}+12 x+4\right)^{3 / 2}}$
$f(1)=\frac{1}{3}$ and $f(2)=\frac{1}{\sqrt{8}}$
It is an increasing function on $[1,2]$.
$\frac{1}{3}<I<\frac{1}{\sqrt{8}}$, hence $\frac{1}{9}<I^{2}<\frac{1}{8}$
 | 285 | 546 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.125 | 4 | CC-MAIN-2024-30 | latest | en | 0.310053
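As a quick numerical cross-check of the bounds just derived (an editorial illustration, not part of the original solution), a simple midpoint-rule approximation of the integral in Python gives I close to 0.34, whose square indeed lies strictly between 1/9 and 1/8:

```python
# Midpoint-rule estimate of I = integral from 1 to 2 of dx / sqrt(2x^3 - 9x^2 + 12x + 4)
def f(x):
    return 1.0 / (2 * x**3 - 9 * x**2 + 12 * x + 4) ** 0.5

n = 100_000
h = 1.0 / n
I = h * sum(f(1 + (k + 0.5) * h) for k in range(n))
print(I, I**2)            # I is roughly 0.34, I**2 roughly 0.117
print(1/9 < I**2 < 1/8)   # True, consistent with option (2)
```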
http://www.indiatimes.com/science/worlds-hardest-sudoku-up-for-the-challenge-29801.html | 1,480,853,854,000,000,000 | text/html | crawl-data/CC-MAIN-2016-50/segments/1480698541321.31/warc/CC-MAIN-20161202170901-00123-ip-10-31-129-80.ec2.internal.warc.gz | 531,118,218 | 14,592 | X
# Can you solve the world's hardest Sudoku?
For all of you who have been solving mediocre-level sudokus and bragging about your genius levels to your friends and anyone within earshot, just two words: Solve This.
Devised by the Finnish mathematician Arto Inkala, and published on his website, the puzzle is claimed to be unsolvable to all but the sharpest minds.
In existence since the 19th century and later popularized by the Japanese puzzle company Nikoli, Sudoku's literal meaning is single number.
But it gained widespread popularity only in the mid-2000s, after newspapers around the world decided that crosswords weren't enough to challenge their readers' intellect.
But we should be thankful to them, as the aforementioned crosswords, though very erudite and intellectual looking in print, were beyond the interest and capability of many.
But Sudoku ushered in an era of democracy on the leisure/entertainment page, where anyone could take a shot at the black and white grid filled with numbers and take back bloated egos in return.
After trying out initially with the 'one-starred-kiddy-level' puzzles, we dared and moved on to reach for more stars.
And then we took it with us everywhere. Thus the popular versions in puzzle books, mobile phone games and even apps.
On the difficulty scale by which most sudoku grids are graded, with one star signifying the simplest and five stars the hardest, this puzzle would score an eleven, Inkala explained.
According to The Telegraph, Mr Inkala said the most difficult parts of the grid require you to think ten moves ahead, exploring a series of permutations at each stage in order to eliminate all routes other than the right one.
He added: "It is difficult to say if any one is the hardest or not, because I believe the hardest one is not yet discovered."
Well, so what are you waiting for? Solve away and do not forget to gloat afterwards, if you end up being successful that is.
And of course we want you to try real hard, so here is the link for the solution, in case you just give up.
Read the full story at World's hardest sudoku: can you crack it? | 451 | 2,125 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.703125 | 3 | CC-MAIN-2016-50 | latest | en | 0.966202 |
https://ask.cvxr.com/t/permutation-like-matrix-variable/12799 | 1,718,495,524,000,000,000 | text/html | crawl-data/CC-MAIN-2024-26/segments/1718198861618.0/warc/CC-MAIN-20240615221637-20240616011637-00101.warc.gz | 92,368,576 | 5,029 | # Permutation like matrix variable
I have a matrix A \in \mathbb{R}^{m \times n} where m>n. I want to create a matrix variable P that is composed of the canonical unit vectors corresponding to the rows I want to extract from A to form A' (see below). Is there a way to define this matrix variable P in CVX?
Example of extracting rows 2 and 3 of A (using columns 2 and 3 of the identity)
n = 3;
m = 5;
A = randn(m,n);
I = eye(m);
e2 = I(:,2);
e3 = I(:,3);
P = [e2, e3];
Aprime = P.' * A;
Of course, this is not a complete optimization problem, but this is a necessary building block for what I am working on that I am wondering if it can be done in CVX
It’s not clear to me what is an optimization variable and what is input data.
You say P is a variable, but it sounds like you want to consider it to be input data. If so, and if A is a CVX (optimization variable), then P.' * A is an affine CVX expression of the variable A, and can be used according to standard CVX rules. Because P.' * A is formed all at once, rather than through indexing, it need not be declared as an expression in CVX.
Note, writing P.'*A rather than P'*A just seems unnecessarily confusing,. because the . before the ' does nothing, rather than making it into an element-wise (Hadamard) multiplication.
if you mean something else, you will have to clarify. And of course it needs to be a convex optimization problem (or perhaps Mixed-Integer convex)
If you want the rows to be selected per the value of some integer variables, I think you’ll need some kind of Big M modeling to select those rows.
In my case, I eventually want to form G = \left(A'\right)^T A' and minimize G's condition number. However, I understand this is a quasiconvex function of G and can’t be handled directly in CVX. I have a way to address this using the bisection method.
The focus of my question is can I formulate P as a variable in CVX? My intuitive idea is that I want to extract the rows of A that minimize the condition number of G. So P needs to be a matrix that is like a permutation matrix, but not square.
As for .', this is simply a habit of mine to use MATLAB transpose and not conjugate transpose.
I still don’t know what is a CVX variable and what is input data. If A is input data and the contents of P are determined by a CVX integer variable, i.e., you want to index into A to select rows according to some integer optimization variable, I think you will need some kind of Big M modeling.
I found that this paper is related. I think this will get me going for now!
This is a sensor selection problem, which is a non-convex problem. However, we can approximate it with a convex relaxation. The part pertaining to my question is that they use
\sum_{i=1}^m z_i a_i a_i^T
where a_i^T is a row vector with length n and z_i\in\mathbb{R}^n is a boolean/binary variable to pick out rows (z_i=1) or exclude rows (z_i=0) of some matrix A = \left[a_1, \cdots, a_m \right]^T. | 746 | 2,902 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.28125 | 3 | CC-MAIN-2024-26 | latest | en | 0.917601 |
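For illustration only (an editorial sketch, not part of the thread): relaxing the binary z_i to the interval [0, 1] and using a log-det objective, one common convex surrogate for keeping the selected Gram matrix well conditioned, the selection problem can be written in CVXPY (the Python counterpart of CVX) roughly as follows. The sizes, the matrix A, and the budget k are made-up placeholders, and an actual row subset still has to be recovered from z by rounding or thresholding.

```python
import cvxpy as cp
import numpy as np

m, n, k = 20, 5, 8                 # placeholder sizes: m candidate rows, keep about k
A = np.random.randn(m, n)          # placeholder data matrix

z = cp.Variable(m)                 # relaxed selection variable, z_i in [0, 1]
G = sum(z[i] * np.outer(A[i], A[i]) for i in range(m))   # sum_i z_i a_i a_i^T

prob = cp.Problem(cp.Maximize(cp.log_det(G)),
                  [cp.sum(z) == k, z >= 0, z <= 1])
prob.solve()
print(np.round(z.value, 2))        # round / threshold to obtain a concrete row subset
```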
https://www.physicsforums.com/threads/proton-collision-problem.543229/ | 1,511,062,797,000,000,000 | text/html | crawl-data/CC-MAIN-2017-47/segments/1510934805265.10/warc/CC-MAIN-20171119023719-20171119043719-00329.warc.gz | 871,362,715 | 16,852 | Proton collision problem
1. Oct 23, 2011
loganblacke
1. The problem statement, all variables and given/known data
A proton with a speed of 1.23x10^4 m/s is moving from ∞ directly towards another proton. Assuming that the second proton is fixed in place, find the position where the moving proton stops momentarily before turning around.
2. Relevant equations
Potential Energy - U/q = V
3. The attempt at a solution
Clueless.
2. Oct 23, 2011
Delphi51
The proton is moving initially so it has kinetic energy.
You must think about what energy change takes place as it is repelled by the other proton. The calculation itself will be easy.
3. Oct 23, 2011
cmb
:surprised ...in a little proton vice??
4. Oct 23, 2011
loganblacke
Okay so conservation of energy, I get that.
Ek = .5(1.67x10^-27 kg)(1.23x10^4 m/s)^2
When the proton stops, Ek = 0, I'm not understanding how this relates to the position of the proton when it stops.
5. Oct 23, 2011
Delphi51
It lost kinetic energy. But energy is conserved; it must have changed into some other kind of energy. It is the energy due to electric charges being near each other and is called electric potential energy. It depends on the distance between charges.
Write "initial kinetic energy = electric potential energy when stopped"
Put in the detailed formulas for both types of energy. Wikipedia has the potential energy one here: http://en.wikipedia.org/wiki/Electric_potential_energy
Then you can solve for the distance between charges.
6. Oct 23, 2011
loganblacke
I really really appreciate your help with this but it appears as if there are 50 equations for electric potential energy. I'm assuming the initial kinetic energy would be the 1/2mv2, however it isn't obvious to me which equation to use for the electric potential energy..
7. Oct 23, 2011
loganblacke
I tried using 1/2mv^2 = q*(1/(4πε0))(Q/r)... and then solving for r but the answer I ended up with was wrong.
8. Oct 23, 2011
Delphi51
It is the first equation in the Wikipedia article (link in previous post).
Your choice of the form with k or the form with 1/(4πε0).
9. Oct 24, 2011
loganblacke
I tried using both equations and I'm still getting the wrong answer. Here is what I got..
(1/(4π*(8.854*10^-12)))*((Q1*Q2)/r) = .5(1.67*10^-27)(1.23*10^4)^2
Q1 and Q2 would be the charge of each proton, which I think is 1.6*10^-19.
If I solve for r, I get 4.92*10^18.
Another website said that the charge of a proton is e, so I tried that as well and got 1.89*10^-30.
10. Oct 24, 2011
Delphi51
The elementary charge IS 1.6 x 10^-19, so you should not have got a different answer that way!
You know, it is much easier to solve the equation before putting the numbers in - just easier to manipulate letters than lengthy numbers.
k*e²/R = ½m⋅v²
R = 2k*e²/(m⋅v²)
Putting in the numbers at this stage, I get an answer in nanometers.
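As a quick check of the algebra above (an editorial aside, not part of the thread), plugging standard values into R = 2k*e²/(m*v²) in Python gives roughly 1.8 nanometers:

```python
k = 8.99e9      # Coulomb constant, N*m^2/C^2
e = 1.60e-19    # elementary charge, C
m = 1.67e-27    # proton mass, kg
v = 1.23e4      # initial speed, m/s

R = 2 * k * e**2 / (m * v**2)
print(R)        # about 1.82e-9 m, i.e. roughly 1.8 nm
```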
11. Oct 24, 2011
loganblacke
Finally got it right, 1.821 x 10^-9
Apparently e and e on the graphing calculator are two very different things!
12. Oct 24, 2011
Delphi51
Looks good! Yes, the e on the calculator is 2.718, the base of the natural logarithm. | 881 | 3,108 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.6875 | 4 | CC-MAIN-2017-47 | longest | en | 0.935581 |
http://annealmasy.com/pdf/algebraic-structures-of-symmetric-domains | 1,558,550,766,000,000,000 | text/html | crawl-data/CC-MAIN-2019-22/segments/1558232256948.48/warc/CC-MAIN-20190522183240-20190522205240-00422.warc.gz | 12,321,655 | 9,094 | # Algebraic Structures of Symmetric Domains by Ichiro Satake
By Ichiro Satake
This book is a comprehensive treatment of the general (algebraic) theory of symmetric domains.
Originally published in 1981.
The Princeton Legacy Library uses the latest print-on-demand technology to again make available previously out-of-print books from the distinguished backlist of Princeton University Press. These editions preserve the original texts of these important books while presenting them in durable paperback and hardcover editions. The goal of the Princeton Legacy Library is to vastly increase access to the rich scholarly heritage found in the thousands of books published by Princeton University Press since its founding in 1905.
Similar linear books
Mengentheoretische Topologie
An accessible and complete introduction to set-theoretic topology, well suited as an accompanying text for a lecture course, but also for self-study by students from the third semester onwards. Numerous exercises allow the material to be learned systematically, with solution hints or
Combinatorial and Graph-Theoretical Problems in Linear Algebra
This IMA Volume in Mathematics and its Applications, COMBINATORIAL AND GRAPH-THEORETICAL PROBLEMS IN LINEAR ALGEBRA, is based on the proceedings of a workshop that was an integral part of the 1991-92 IMA program on "Applied Linear Algebra." We are grateful to Richard Brualdi, George Cybenko, Alan George, Gene Golub, Mitchell Luskin, and Paul Van Dooren for planning and implementing the year-long program.
Linear Algebra and Matrix Theory
This revision of a well-known text includes more sophisticated mathematical material. A new section on applications provides an introduction to the modern treatment of calculus of several variables, and the concept of duality receives expanded coverage. Notations have been changed to correspond to more current usage.
Extra info for Algebraic Structures of Symmetric Domains
Example text
3) In particular, 113 1 is a Lie subalgebra, identical to gf(V). Also, for Aegf(V)=ll3, and be V=ll3o, one has [A, b]=Ab. Now let ( V, { } ) be a non-degenerate JTS. For be V, we set p,(x) = {x, b, x) (=P(x)b) (7. P, Ib e VJ c 113,. @_, (7. 5) @o @, We write (a, T, b) for a+ T + p, (a, b E V, Te VD V). ing Proposition 7. 1 (Koecher). X = (a, Then we obtain the follow- 1) @( V, { } ) is a (graded) Lie subalgebra efll3. T, b) and X' = (a', T', b') e @(V, { } ), one has (7. 6) [X, X'] = (Ta' -T'a, 2a'ob+[T, T']-2aob', T'*b-T*b').
Let G be the complex analytic subgroup of GLn( C) corresponding to the § 4. Cartan involutions of reductive R-groups 13 complexification g=gc. Then G is (analytically) reductive and the abelian part (ja is isomorphic to a C-torus. Hence, by Proposition 3. 5, G has a (uniquely determined) C-group structure. , G0, q. e. d. Remark. Without the compactness assumption on Gg, Proposition 3. 6 is false. For instance, let G0 =R~cGL,(R) =Rx. Then, for aeR, the analytic endomorphism e'H-eat ofG0 is extendible to an R-endomorphism of Rx if and only if a is an integer.
3) Chapter I. J). For a homogeneous cone, the converse of this is also true. For later use, we state this in a slightly more general form. Lemma 8. 3. J) which is transitive on Q and self-adjoint. J) 0 • By Lemmas 8. 1, 8. J. The proof of G,=G(Q) 0 will be given later (p. 33). Thus Q is self-dual. Proposition 8. 4 ( Vinberg). Let Q be a homogeneous cone in U. Then there exists an R-group Gin GL(U) such that G0 cG(fJ)cG. J) is conjugate to K. Proof. J)}. J). J. J) 0 x0 = G(Q) 0 Then, since (gx0 ). | 1,024 | 3,617 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.75 | 3 | CC-MAIN-2019-22 | latest | en | 0.716252 |
https://math.stackexchange.com/questions/3812697/intuition-for-getting-multiplicative-inverse-of-a-dedekind-cut | 1,721,439,271,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763514972.64/warc/CC-MAIN-20240719231533-20240720021533-00354.warc.gz | 345,811,195 | 36,167 | # Intuition for getting multiplicative inverse of a Dedekind cut
By a Dedekind cut, we mean, an ordered pair $$(L,U)$$ of subsets of $$\mathbb{Q}$$ such that they are disjoint, their union is $$\mathbb{Q}$$, and
• Each member of $$L$$ is smaller than each member of $$U$$
• $$L$$ contains no largest rational number.
Let $$(L,U)$$ be a Dedekind cut for which $$L$$ contains some positive rationals.
Let $$L'$$ be the collection of non-positive rationals, along with those positive rationals $$x$$ whose product with all positive rationals of $$L$$ is $$<1$$. Let $$U'$$ be the complement of $$L'$$ in $$\mathbb{Q}$$.
Can we say that $$(L',U')$$ defined in this way is the multiplicative inverse of $$(L,U)$$?
(One may see this wiki-link for product of Dedekind cuts).
Yes. One needs to check that $(L,U)\times(L',U')=(L_1,U_1)$, where $L_1=(-\infty,1)$, $U_1=[1,\infty)$. Note also that only the multiplication of the lower sets needs to be considered, since the upper sets are their complements.
By the definition of multiplication of Dedekind cuts, since $$L$$ and $$L'$$ contain positive rationals, the lower set of their product is defined to contain
• all non-positive rationals
• all positive rationals $$ab$$ where $$a\in L$$, $$b\in L'$$.
Take any rational $$c<1$$. Pick a rational $$a\in L$$ such that $$a>cL$$; this is possible since $$c<1$$. So for any $$b\in L$$, $$(c/a)b<1$$, which implies $$c/a\in L'$$ and so $$c=(c/a)a\in L'\times L$$. This shows that $$(-\infty,1)\subseteq L$$.
Take any rational $$c\ge1$$. If $$c=ab$$ with $$a\in L$$, $$b\in L'$$, it would contradict the way $$L'$$ is defined. So $$L=(-\infty,1)$$.
Actually, you need to be slightly more careful than that.
Suppose the cut (L,U) is divided by a rational value $$f$$ (i.e U has $$f$$ as a minimum value). Then for every positive rational $$r$$ in L, $$f^{-1} r$$ < 1, so $$f^{-1}$$ is in L' by definition. It is easy to see that $$f^{-1}$$ is the maximum value in L', so (L',u') doesn't quite meet the given definition of a cut. | 632 | 2,036 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 46, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.796875 | 4 | CC-MAIN-2024-30 | latest | en | 0.87359 |
https://gpuzzles.com/mind-teasers/full-like-empty-riddle/ | 1,493,026,122,000,000,000 | text/html | crawl-data/CC-MAIN-2017-17/segments/1492917119225.38/warc/CC-MAIN-20170423031159-00350-ip-10-145-167-34.ec2.internal.warc.gz | 793,316,499 | 11,031 | • Views : 60k+
• Sol Viewed : 20k+
# Mind Teasers : Full Like Empty Riddle
Why is it that a room which is full of married people is like an empty one?
# Mind Teasers : Hard Logic Statement Riddle
There are a hundred statements.
1st person says: At least one of the statements is incorrect.
2nd person says: At least two of the statements are incorrect.
3rd person says: At least three of the statements are incorrect.
4th person says: At least four of the statements are incorrect.
..
..
..
100th person says: At least a hundred of the statements are incorrect.
Now analyze all the statements and find out how many of them are incorrect and how many are true?
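A short brute-force check in Python (added here as an illustration, not part of the riddle page) confirms that the only self-consistent split is 50 incorrect and 50 true statements:

```python
# Statement k claims: "at least k of the 100 statements are incorrect."
for false_count in range(101):
    truth = [false_count >= k for k in range(1, 101)]
    if truth.count(False) == false_count:
        print(false_count, "incorrect,", 100 - false_count, "true")
# prints: 50 incorrect, 50 true  (statements 1-50 are true, 51-100 are incorrect)
```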
# Mind Teasers : Relation Sorcery Riddle
The father of a driver's son is sitting with the son of the driver without the driver actually being in the car.
What sorcery is this?
# Mind Teasers : Password Brain Teaser
Angelina Jolie is trapped in a tomb. The only way to escape is to figure out a 13-character password. Following are the five clues that are available to her.
Precisely two of the below statements are false.
The password is confined within this sentence.
The password is not in this hint.
The password is inside only one of these statements.
At least one of the above statement is a lie.
# Mind Teasers : 100 People Circle Puzzle
100 cowboys are standing in a circle, numbered from 1 to 100. A deadly game is played in which the first person shoots the next person (i.e. the second person) and then passes the gun to the next person (i.e. the third person), who shoots the person next to him. This game continues until only one cowboy stays alive.
Which cowboy will survive at the end?
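A direct simulation in Python (an added illustration, not part of the puzzle page) shows that cowboy number 73 survives:

```python
from collections import deque

cowboys = deque(range(1, 101))
while len(cowboys) > 1:
    shooter = cowboys.popleft()   # the cowboy currently holding the gun
    cowboys.popleft()             # he shoots the next cowboy in the circle
    cowboys.append(shooter)       # the gun passes on and he rejoins the circle
print(cowboys[0])                 # 73
```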
# Mind Teasers : Playing Cards Logic Riddle
Dr. Watson has a new card puzzle for Sherlock Holmes. He picks four cards out of the pack of 52 cards and lays them face down on the table. He offers five hints to Sherlock.
1) The left card can't be greater than the one on the right.
2) The difference between the first card and third card equals to eight.
3) None of the Ace is present
4) No face card has been included i.e. no queen king and jacks.
5) The difference between the second and fourth card is 7.
Sherlock smiles and tells all the four cards to Dr. Watson.
Can you tell the four cards?
# Mind Teasers : Trick Gift Brain Teaser
Cindy opened twenty five presents.
Duke opened five presents.
John opened fifteen presents.
Judging by the statements, can you decipher how many presents will be opened by Rhea?
# Mind Teasers : Game Of Dice Brain Teaser
A unique solo game of dice is being played. Two dice are thrown each turn, and the score is taken by multiplying the numbers obtained.
Now talking about a particular game, here are the facts:
1) The score for the second roll is five more than the score for the first roll.
2) The score for the third roll is six less than the score for the second roll.
3) The score for the fourth roll is eleven more than the score for the third roll.
4) The score for the fifth roll is eight less than the score for the fourth roll.
Reading the above facts, can you tell the score for each of the five throws?
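A brute-force search in Python (added for illustration) shows that only one set of scores consists entirely of products of two dice and satisfies the four stated differences:

```python
products = {a * b for a in range(1, 7) for b in range(1, 7)}
for s1 in sorted(products):
    scores = [s1, s1 + 5, s1 - 1, s1 + 10, s1 + 2]   # the four stated differences
    if all(s in products for s in scores):
        print(scores)    # [10, 15, 9, 20, 12]
```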
# Mind Teasers : Beer Cap Rebus Riddle
What does the beer cap say?
# Mind Teasers : Box Ball Logic Problem
There are three bags. The first bag has two blue rocks. The second bag has two red rocks. The third bag has a blue and a red rock. All bags are labeled but all labels are wrong. You are allowed to open one bag, pick one rock at random, see its color and put it back into the bag, without seeing the color of the other rock.
How many such operations are necessary to correctly label the bags?
# Mind Teasers : Cake Grandma bridge Logic Problem
Rohit is on his way to visit his Grandma, who lives at the other end of the state. It's her birthday, and he wants to give her the cakes that he has made. Between his place and his grandma's house, he needs to cross 7 toll bridges.
Before he can cross a toll bridge, he needs to give the troll half of the cakes he is carrying, but as they are kind trolls, they each give him back a single cake.
How many cakes does Rohit have to carry with him so he can reach his grandma's home with exactly 2 cakes?
Which number replaces the question mark ... | 1,323 | 5,488 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.8125 | 3 | CC-MAIN-2017-17 | longest | en | 0.930355 |
https://www.hackmath.net/en/math-problem/33611 | 1,603,967,004,000,000,000 | text/html | crawl-data/CC-MAIN-2020-45/segments/1603107904039.84/warc/CC-MAIN-20201029095029-20201029125029-00247.warc.gz | 727,208,557 | 12,279 | # Missing term 2
What is the missing term for the Geometric Progression (GP) 3, 15, 75,__, 1875?
Correct result:
a4 = 375
#### Solution:
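A short worked check (an editorial addition; the solution box on the page is empty): the common ratio is r = 15/3 = 5, so the missing term is a4 = 75 * 5 = 375, and indeed 375 * 5 = 1875.

```python
terms = [3, 15, 75]
r = terms[1] // terms[0]     # common ratio r = 5
a4 = terms[-1] * r           # 375
print(a4, a4 * r)            # 375 1875  (the next term matches the given 1875)
```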
We would be pleased if you find an error in the word problem, spelling mistakes, or inaccuracies and send it to us. Thank you!
## Next similar math problems:
• Here is
Here is a data set (n=117) that has been sorted. 10.4 12.2 14.3 15.3 17.1 17.8 18 18.6 19.1 19.9 19.9 20.3 20.6 20.7 20.7 21.2 21.3 22 22.1 22.3 22.8 23 23 23.1 23.5 24.1 24.1 24.4 24.5 24.8 24.9 25.4 25.4 25.5 25.7 25.9 26 26.1 26.2 26.7 26.8 27.5 27.6 2
• Profitable bank deposit 2012
Calculate the value of what money lose creditor with a deposit € 9500 for 4 years if the entire duration are interest 2.6% p.a. and tax on interest is 19% and annual inflation is 3.7% (Calculate what you will lose if you leave money lying idle at negative
• Hot air balloon
Hot air balloon ascends 25 meters up for a minute after launch. Every minute ascends 75 percent of the height which climbed in the previous minute. a) how many meters ascends six minutes after takeoff? b) what is the overall height 10 minutes after launch
• Complaints
The table is given: days complaints 0-4 2 5-9 4 10-14 8 15-19 6 20-24 4 25-29 3 30-34 3 1.1 What percentage of complaints were resolved within 2weeks? 1.2 calculate the mean number of days to resolve these complaints. 1.3 calculate the modal number of day
• Rate or interest
At what rate percent will Rs.2000 amount to Rs.2315.25 in 3 years at compound interest?
• Progression
-12, 60, -300,1500 need next 2 numbers of pattern
• Hiking trip
Rosie went on a hiking trip. The first day she walked 18kilometers. Each day since she walked 90 percent of what she walked the day before. What is the total distance Rosie has traveled by the end of the 10th day? Round your final answer to the nearest ki
• What is 10
What is the 5th term, if the 8th term is 80 and common ratio r =1/2?
• Ray
Light ray loses 1/19 of brightness passing through glass plate. What is the brightness of the ray after passing through 7 identical plates?
• Future of libraries
You know that thanks to the Internet, electronic communications and developing online resources on the Internet annually declining number of traditional readership by 17%. These deal is irreversible evolutionary shift from old books from libraries to imme
• The city
At the end of 2010 the city had 248000 residents. The population increased by 2.5% each year. What is the population at the end of 2013?
• Savings
The depositor regularly wants to invest the same amount of money in the financial institution at the beginning of the year and wants to save 10,000 euros at the end of the tenth year. What amount should he deposit if the annual interest rate for the annua
• Geometric progression
Fill 4 numbers between 4 and -12500 to form geometric progression.
• Loan
Apply for a \$ 59000 loan, the loan repayment period is 8 years, the interest rate 7%. How much should I pay for every month (or every year if paid yearly). Example is for practise geometric progression and/or periodic payment for an annuity.
• Precious metals
In 2006-2009, the value of precious metals changed rapidly. The data in the following table represent the total rate of return (in percentage) for platinum, gold, an silver from 2006 through 2009: Year Platinum Gold Silver 2009 62.7 25.0 56.8 2008 -41.3 4
• Geometric progression
If the sum of four consecutive terms of a geometric progression is 80 and the arithmetic mean of the second and fourth terms is 30, then find the terms.
• Sum of series
Determine the 6-th member and the sum of a geometric series: 5-4/1+16/5-64/25+256/125-1024/625+.... | 1,071 | 3,650 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.0625 | 4 | CC-MAIN-2020-45 | longest | en | 0.904489 |
https://smart-answers.com/mathematics/question4136665 | 1,627,215,172,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046151672.96/warc/CC-MAIN-20210725111913-20210725141913-00715.warc.gz | 539,411,109 | 34,657 | , 12.11.2019 09:31, msprincessswag6553
# To mix his special energy drink, jerome uses 8 cups of water and 3 cups of drink mix. what is the ratio of water to mixed energy drink?
### Other questions on the subject: Mathematics
Mathematics, 20.06.2019 18:04, patrick171888
The table shows the cost of several bunches of bananas. what equation can be used to represent the cost c of a bunch that weights p pounds?
Mathematics, 21.06.2019 16:30, Vells2246
Bethany's mother was upset with her because Bethany's text messages from the previous month were 280% of the amount allowed at no extra cost under her phone plan, and her mother had to pay for each text message over the allowance. Bethany had 5.450 text messages last month. How many text messages is she allowed on her phone plan at no extra cost?
Mathematics, 21.06.2019 22:30, wednesdayA
Ineed big ! the cost of a school banquet is $25 plus$15 for each person attending. create a table, sketch the graph, and write an equation in slope-intercept and point-slope form that gives total cost as a function of the number of people attending. what is the cost for 77 people? | 295 | 1,115 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.03125 | 3 | CC-MAIN-2021-31 | latest | en | 0.92714 |
https://www.jiskha.com/display.cgi?id=1423022101 | 1,511,496,082,000,000,000 | text/html | crawl-data/CC-MAIN-2017-47/segments/1510934807084.8/warc/CC-MAIN-20171124031941-20171124051941-00061.warc.gz | 826,384,306 | 3,909 | # math
posted by .
Lucy has 40 bean plants, 32 tomato plants, and 16 pepper plants. She wants to put the plants in rows with only one type of plant in each row. All rows will have the same number of plants. How many plants can Lucy put in each row?
• math -
What is the greatest common factor of 40, 32, and 16?
• math -
Divide the 3 plants by 2 and you will get 44rows
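Following the hint above (an editorial note, not part of the original thread), the answer is the greatest common factor of 40, 32 and 16, which is 8 plants per row. A one-line check in Python:

```python
from math import gcd
print(gcd(40, gcd(32, 16)))   # 8  ->  5 + 4 + 2 = 11 rows of 8 plants each
```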
## Similar Questions
1. ### Math
Jill wants to put 45 sunflower plants, 81 corn plants, and 63 tomato plants in her garden. if she puts the same number of plants in each row and if each row has one type of plant, what is the greatest number of plants Jill can put …
2. ### Math
If Angel plants the same number of pepper plant in each of 5 rows, he will have none left over. If he plants one in the first row and them one more in each row that he did in the previous row, he will plant peppers in only 4 rows how …
writing in math mrs. bradner has 30 tomato plants. she wants to plant the same number of plants in each row of her gardner. explain how she could decide the number of rows to plants.
4. ### Math
Ms. Hernandez has 17 tomato plants that she wants to plant in rows. She will put 1 plant in some rows and 2 plants in the others. How many different ways can she plant the tomato plants?
liam is planting 30 bean plants and 45 pea plants in his garden. If he puts the same number of plants in each row and if each row has only one type of plant, what is the greatest number of plants he put in one row?
6. ### Math
Ray wanted to put his tomato plants in rows with the same numbers of plants in each row. He knew that if he planted 5,6, or 7 rows of tomatoes, he would have at least one plant left over. However, his plants could be placed evenly …
7. ### math
Ms.Hernandez has 17 tomato plants that she wants to plant in rows.She will put 2 plants in some rows and 1 plant in the others.How may different ways can she plant the tomato plants?
8. ### math problem solving
leslie has 17 tomato plants that she wants to plant in rows. she will put 2 plants in some rows and 1 plant in others. how many different ways can she plant the tomato plants?
9. ### factors
Lucy has 40 bean plants, 32 tomato plants, and 16 pepper plants. she wants to put the plant in rows with only one type of plant in each row. All rows will have the same number of plants. How many plants can Lucy put in each row
10. ### math
need help with a homework please explain how to get answer to ms hernandez has 17 tomato plants that she want to plant in rows. she will put 2 plants in some rows and 1 plant in the others. how many different ways can she plant the …
More Similar Questions | 653 | 2,649 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.5 | 4 | CC-MAIN-2017-47 | longest | en | 0.958091 |
https://dokumen.pub/deep-learning-bengio.html | 1,721,209,525,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763514759.39/warc/CC-MAIN-20240717090242-20240717120242-00838.warc.gz | 192,462,323 | 314,880 | ##### Citation preview
Deep Learning
Yoshua Bengio, Ian Goodfellow, Aaron Courville
Contents

Acknowledgments vii
Notation ix

1 Introduction 1
1.1 Who Should Read This Book? 8
1.2 Historical Trends in Deep Learning 11

I Applied Math and Machine Learning Basics 25

2 Linear Algebra 27
2.1 Scalars, Vectors, Matrices and Tensors 27
2.2 Multiplying Matrices and Vectors 30
2.3 Identity and Inverse Matrices 31
2.4 Linear Dependence, Span, and Rank 32
2.5 Norms 34
2.6 Special Kinds of Matrices and Vectors 35
2.7 Eigendecomposition 37
2.8 Singular Value Decomposition 39
2.9 The Moore-Penrose Pseudoinverse 40
2.10 The Trace Operator 41
2.11 Determinant 42
2.12 Example: Principal Components Analysis 42

3 Probability and Information Theory 46
3.1 Why Probability? 46
3.2 Random Variables 48
3.3 Probability Distributions 49
3.4 Marginal Probability 51
3.5 Conditional Probability 51
3.6 The Chain Rule of Conditional Probabilities 52
3.7 Independence and Conditional Independence 52
3.8 Expectation, Variance, and Covariance 53
3.9 Information Theory 54
3.10 Common Probability Distributions 57
3.11 Useful Properties of Common Functions 62
3.12 Bayes' Rule 64
3.13 Technical Details of Continuous Variables 64
3.14 Structured Probabilistic Models 65
3.15 Example: Naive Bayes 68

4 Numerical Computation 74
4.1 Overflow and Underflow 74
4.2 Poor Conditioning 75
4.3 Gradient-Based Optimization 76
4.4 Constrained Optimization 85
4.5 Example: Linear Least Squares 87

5 Machine Learning Basics 89
5.1 Learning Algorithms 89
5.2 Example: Linear Regression 97
5.3 Generalization, Capacity, Overfitting and Underfitting 99
5.4 The No Free Lunch Theorem 104
5.5 Regularization 106
5.6 Hyperparameters, Validation Sets and Cross-Validation 108
5.7 Estimators, Bias, and Variance 110
5.8 Maximum Likelihood Estimation 118
5.9 Bayesian Statistics and Prior Probability Distributions 121
5.10 Supervised Learning 128
5.11 Unsupervised Learning 131
5.12 Weakly Supervised Learning 134
5.13 The Curse of Dimensionality and Statistical Limitations of Local Generalization 135

II Modern Practical Deep Networks 147

6 Feedforward Deep Networks 149
6.1 From Fixed Features to Learned Features 149
6.2 Formalizing and Generalizing Neural Networks 152
6.3 Parametrizing a Learned Predictor 154
6.4 Flow Graphs and Back-Propagation 167
6.5 Universal Approximation Properties and Depth 180
6.6 Feature / Representation Learning 184
6.7 Piecewise Linear Hidden Units 186
6.8 Historical Notes 188

7 Regularization 190
7.1 Regularization from a Bayesian Perspective 191
7.2 Classical Regularization: Parameter Norm Penalty 193
7.3 Classical Regularization as Constrained Optimization 200
7.4 Regularization and Under-Constrained Problems 201
7.5 Dataset Augmentation 203
7.6 Classical Regularization as Noise Robustness 204
7.7 Early Stopping as a Form of Regularization 208
7.8 Parameter Tying and Parameter Sharing 215
7.9 Sparse Representations 215
7.10 Bagging and Other Ensemble Methods 215
7.11 Dropout 218
7.12 Multi-Task Learning 222
7.13 Adversarial Training 223

8 Optimization for Training Deep Models 226
8.1 Optimization for Model Training 226
8.2 Challenges in Optimization 229
8.3 Optimization Algorithms 236
8.4 Approximate Natural Gradient and Second-Order Methods 241
8.5 Conjugate Gradients 241
8.6 BFGS 241
8.7 Hints, Global Optimization and Curriculum Learning 243

9 Convolutional Networks 248
9.1 The Convolution Operation 248
9.2 Motivation 252
9.3 Pooling 256
9.4 Convolution and Pooling as an Infinitely Strong Prior 261
9.5 Variants of the Basic Convolution Function 262
9.6 Structured Outputs 269
9.7 Convolutional Modules 269
9.8 Data Types 269
9.9 Efficient Convolution Algorithms 271
9.10 Random or Unsupervised Features 271
9.11 The Neuroscientific Basis for Convolutional Networks 273
9.12 Convolutional Networks and the History of Deep Learning 280

10 Sequence Modeling: Recurrent and Recursive Nets 281
10.1 Unfolding Flow Graphs and Sharing Parameters 282
10.2 Recurrent Neural Networks 284
10.3 Bidirectional RNNs 295
10.4 Deep Recurrent Networks 296
10.5 Recursive Neural Networks 299
10.6 Auto-Regressive Networks 299
10.7 Facing the Challenge of Long-Term Dependencies 305
10.8 Handling Temporal Dependencies with N-Grams, HMMs, CRFs and Other Graphical Models 317
10.9 Combining Neural Networks and Search 328

11 Practical methodology 333
11.1 Basic Machine Learning Methodology 333
11.2 Manual Hyperparameter Tuning 334
11.3 Hyper-parameter Optimization Algorithms 334
11.4 Debugging Strategies 336

12 Applications 339
12.1 Large Scale Deep Learning 339
12.2 Computer Vision 345
12.3 Speech Recognition 352
12.4 Natural Language Processing and Neural Language Models 353
12.5 Structured Outputs 369
12.6 Other Applications 369

III Deep Learning Research 370

13 Structured Probabilistic Models for Deep Learning 372
13.1 The Challenge of Unstructured Modeling 373
13.2 Using Graphs to Describe Model Structure 377
13.3 Advantages of Structured Modeling 391
13.4 Learning About Dependencies 392
13.5 Inference and Approximate Inference Over Latent Variables 394
13.6 The Deep Learning Approach to Structured Probabilistic Models 395

14 Monte Carlo Methods 400
14.1 Markov Chain Monte Carlo Methods 400
14.2 The Difficulty of Mixing Between Well-Separated Modes 402

15 Linear Factor Models and Auto-Encoders 404
15.1 Regularized Auto-Encoders 405
15.2 Denoising Auto-encoders 408
15.3 Representational Power, Layer Size and Depth 410
15.4 Reconstruction Distribution 411
15.5 Linear Factor Models 412
15.6 Probabilistic PCA and Factor Analysis 413
15.7 Reconstruction Error as Log-Likelihood 417
15.8 Sparse Representations 418
15.9 Denoising Auto-Encoders 423
15.10 Contractive Auto-Encoders 428

16 Representation Learning 431
16.1 Greedy Layerwise Unsupervised Pre-Training 432
16.2 Transfer Learning and Domain Adaptation 439
16.3 Semi-Supervised Learning 446
16.4 Semi-Supervised Learning and Disentangling Underlying Causal Factors 447
16.5 Assumption of Underlying Factors and Distributed Representation 449
16.6 Exponential Gain in Representational Efficiency from Distributed Representations 453
16.7 Exponential Gain in Representational Efficiency from Depth 455
16.8 Priors Regarding The Underlying Factors 457

17 The Manifold Perspective on Representation Learning 461
17.1 Manifold Interpretation of PCA and Linear Auto-Encoders 469
17.2 Manifold Interpretation of Sparse Coding 472
17.3 The Entropy Bias from Maximum Likelihood 472
17.4 Manifold Learning via Regularized Auto-Encoders 473
17.5 Tangent Distance, Tangent-Prop, and Manifold Tangent Classifier 474

18 Confronting the Partition Function 478
18.1 The Log-Likelihood Gradient of Energy-Based Models 479
18.2 Stochastic Maximum Likelihood and Contrastive Divergence 481
18.3 Pseudolikelihood 488
18.4 Score Matching and Ratio Matching 490
18.5 Denoising Score Matching 492
18.6 Noise-Contrastive Estimation 492
18.7 Estimating the Partition Function 494

19 Approximate inference 502
19.1 Inference as Optimization 504
19.2 Expectation Maximization 505
19.3 MAP Inference: Sparse Coding as a Probabilistic Model 506
19.4 Variational Inference and Learning 507
19.5 Stochastic Inference 511
19.6 Learned Approximate Inference 511

20 Deep Generative Models 513
20.1 Boltzmann Machines 513
20.2 Restricted Boltzmann Machines 516
20.3 Training Restricted Boltzmann Machines 519
20.4 Deep Belief Networks 523
20.5 Deep Boltzmann Machines 526
20.6 Boltzmann Machines for Real-Valued Data 537
20.7 Convolutional Boltzmann Machines 540
20.8 Other Boltzmann Machines 541
20.9 Directed Generative Nets 541
20.10 A Generative View of Autoencoders 543
20.11 Generative Stochastic Networks 549
20.12 Methodological Notes 551

Bibliography 555
Index 593
Acknowledgments This book would not have been possible without the contributions of many people. We would like to thank those who commented on our proposal for the book and helped plan its contents and organization: Hugo Larochelle, Guillaume Alain, Kyunghyun Cho, C ¸ a˘glar G¨ul¸cehre, Razvan Pascanu, David Krueger and Thomas Roh´ee. We would like to thank the people who offered feedback on the content of the book itself. Some offered feedback on many chapters: Julian Serban, Laurent Dinh, Guillaume Alain, Kelvin Xu, Ilya Sutskever, Vincent Vanhoucke, David Warde-Farley, Jurgen Van Gael, Dustin Webb, Johannes Roith, Ion Androutsopoulos, Pawel Chilinski, Halis Sak, Fr´ed´eric Francis, Jonathan Hunt, and Grigory Sapunov. We would also like to thank those who provided us with useful feedback on individual chapters: • Chapter 1, Introduction: Johannes Roith, Eric Morris, Samira Ebrahimi, Ozan C ¸ a˘glayan, Mart´ın Abadi, and Sebastien Bratieres. • Chapter 2, Linear Algebra: Pierre Luc Carrier, Li Yao, Thomas Roh´ee, Colby Toland, Amjad Almahairi, Sergey Oreshkov, Istv´an Petr´as, and Dennis Prangle. • Chapter 3, Probability and Information Theory: Rasmus Antti, Stephan Gouws, Vincent Dumoulin, Artem Oboturov, Li Yao, John Philip Anderson, and Kai Arulkumaran. • Chapter 4, Numerical Computation: Meire Fortunato, and Tran Lam An. • Chapter 5, Machine Learning Basics: Dzmitry Bahdanau, and Meire Fortunato. • Chapter 8, Optimization for Training Deep Models: Marcel Ackermann. • Chapter 9, Convolutional Networks: Mehdi Mirza, C ¸ a˘glar G¨ul¸cehre, and Mart´ın Arjovsky. vii
• Chapter 10, Sequence Modeling: Recurrent and Recursive Nets: Dmitriy Serdyuk, and Dongyu Shi. • Chapter 18, Confronting the Partition Function: Sam Bowman, and Ozan C ¸ a˘glayan. • Bibliography, Leslie N. Smith. We also want to thank those who allowed us to reproduce images, figures or data from their publications: David Warde-Farley, Matthew D. Zeiler, Rob Fergus, Chris Olah, Jason Yosinski, Nicolas Chapados, and James Bergstra. We indicate their contributions in the figure captions throughout the text. Finally, we would like to thank Google for allowing Ian Goodfellow to work on the book as his 20% project while working at Google. In particular, we would like to thank Ian’s former manager, Greg Corrado, and his subsequent manager, Samy Bengio, for their support of this effort.
Notation

This section provides a concise reference describing the notation used throughout this book. If you are unfamiliar with any of these mathematical concepts, this notation reference may seem intimidating. However, do not despair, we describe most of these ideas in chapters 1-3.

Numbers and Arrays
a : A scalar (integer or real) value with the name "a"
a : A vector with the name "a"
A : A matrix with the name "A"
A : A tensor with the name "A"
I_n : Identity matrix with n rows and n columns
I : Identity matrix with dimensionality implied by context
e_i : Standard basis vector [0, . . . , 0, 1, 0, . . . , 0] with a 1 at position i
diag(a) : A square, diagonal matrix with entries given by a
a : A scalar random variable with the name "a"
a : A vector-valued random variable with the name "a"
A : A matrix-valued random variable with the name "A"

Sets and Graphs
A : A set with the name "A"
R : The set of real numbers
{0, 1} : The set containing 0 and 1
{0, 1, . . . , n} : The set of all integers between 0 and n
[a, b] : The real interval including a and b
(a, b] : The real interval excluding a but including b
A\B : Set subtraction, i.e., the elements of A that are not in B
G : A graph with the name "G"
Pa_G(x_i) : The parents of x_i in G

Indexing
a_i : Element i of vector a, with indexing starting at 1
a_{-i} : All elements of vector a except for element i
A_{i,j} : Element i, j of matrix A
A_{i,:} : Row i of matrix A
A_{:,i} : Column i of matrix A
A_{i,j,k} : Element (i, j, k) of a 3-D tensor A
A_{:,:,i} : 2-D slice of a 3-D tensor
a_i : Element i of the random vector a
x^{(t)} : The t-th example (input) from a dataset
y^{(t)} : The target associated with x^{(t)} for supervised learning
X : The matrix of input examples, with one row per example x^{(t)}

Linear Algebra Operations
A^T : Transpose of matrix A
A^+ : Moore-Penrose pseudoinverse of A
A ⊙ B : Element-wise (Hadamard) product of A and B

Calculus
dy/dx : Derivative of y with respect to x
∂y/∂x : Partial derivative of y with respect to x
∇_x y : Gradient of y with respect to x
∇_X y : Matrix derivatives of y with respect to x
∂f/∂x : Jacobian matrix J ∈ R^{m×n} of a function f : R^n → R^m
H(f)(x) : The Hessian matrix of f at input point x
∫ f(x) dx : Definite integral over the entire domain of x
∫_S f(x) dx : Definite integral with respect to x over the set S

Probability and Information Theory
a⊥b : The random variables a and b are independent
a⊥b | c : They are conditionally independent given c
E_{x∼P}[f(x)] or E f(x) : Expectation of f(x) with respect to P(x)
Var(f(x)) : Variance of f(x) under P(x)
Cov(f(x), g(x)) : Covariance of f(x) and g(x) under P(x, y)
H(x) : Shannon entropy of the random variable x
D_KL(P‖Q) : Kullback-Leibler divergence of P and Q

Functions
f ◦ g : Composition of the functions f and g
f(x; θ) : A function of x parameterized by θ
log x : Natural logarithm of x
σ(x) : Logistic sigmoid, 1/(1 + exp(−x))
ζ(x) : Softplus, log(1 + exp(x))
||x||_p : L^p norm of x
x^+ : Positive part of x, i.e., max(0, x)
1_condition : is 1 if the condition is true, 0 otherwise

Sometimes we write f(x), f(X), or f(X), when f is a function of a scalar rather than a vector, matrix, or tensor. In this case, we mean to apply f to the array element-wise. For example, if C = σ(X), then C_{i,j,k} = σ(X_{i,j,k}) for all valid values of i, j, and k.
Chapter 1
Introduction Inventors have long dreamed of creating machines that think. Ancient Greek myths tell of intelligent objects, such as animated statues of human beings and tables that arrive full of food and drink when called. When programmable computers were first conceived, people wondered whether they might become intelligent, over a hundred years before one was built (Lovelace, 1842). Today, artificial intelligence (AI) is a thriving field with many practical applications and active research topics. We look to intelligent software to automate routine labor, understand speech or images, make diagnoses in medicine, and to support basic scientific research. In the early days of artificial intelligence, the field rapidly tackled and solved problems that are intellectually difficult for human beings but relatively straightforward for computers—problems that can be described by a list of formal, mathematical rules. The true challenge to artificial intelligence proved to be solving the tasks that are easy for people to perform but hard for people to describe formally—problems that we solve intuitively, that feel automatic, like recognizing spoken words or faces in images. This book is about a solution to these more intuitive problems. This solution is to allow computers to learn from experience and understand the world in terms of a hierarchy of concepts, with each concept defined in terms of its relation to simpler concepts. By gathering knowledge from experience, this approach avoids the need for human operators to formally specify all of the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones. If we draw a graph showing how these concepts are built on top of each other, the graph is deep, with many layers. For this reason, we call this approach to AI deep learning. Many of the early successes of AI took place in relatively sterile and formal environments and did not require computers to have much knowledge about the 1
world. For example, IBM's Deep Blue chess-playing system defeated world champion Garry Kasparov in 1997 (Hsu, 2002). Chess is of course a very simple world, containing only sixty-four locations and thirty-two pieces that can move in only rigidly circumscribed ways. Devising a successful chess strategy is a tremendous accomplishment, but the challenge is not due to the difficulty of describing the relevant concepts to the computer. Chess can be completely described by a very brief list of completely formal rules, easily provided ahead of time by the programmer.

Ironically, abstract and formal tasks that are among the most difficult mental undertakings for a human being are among the easiest for a computer. Computers have long been able to defeat even the best human chess player, but are only recently matching some of the abilities of average human beings to recognize objects or speech. A person's everyday life requires an immense amount of knowledge about the world, and much of this knowledge is subjective and intuitive, and therefore difficult to articulate in a formal way. Computers need to capture this same knowledge in order to behave in an intelligent way. One of the key challenges in artificial intelligence is how to get this informal knowledge into a computer.

Several artificial intelligence projects have sought to hard-code knowledge about the world in formal languages. A computer can reason about statements in these formal languages automatically using logical inference rules. This is known as the knowledge base approach to artificial intelligence. None of these projects has led to a major success. One of the most famous such projects is Cyc (Lenat and Guha, 1989). Cyc is an inference engine and a database of statements in a language called CycL. These statements are entered by a staff of human supervisors. It is an unwieldy process. People struggle to devise formal rules with enough complexity to accurately describe the world. For example, Cyc failed to understand a story about a person named Fred shaving in the morning (Linde, 1992). Its inference engine detected an inconsistency in the story: it knew that people do not have electrical parts, but because Fred was holding an electric razor, it believed the entity "FredWhileShaving" contained electrical parts. It therefore asked whether Fred was still a person while he was shaving.

The difficulties faced by systems relying on hard-coded knowledge suggest that AI systems need the ability to acquire their own knowledge, by extracting patterns from raw data. This capability is known as machine learning. The introduction of machine learning allowed computers to tackle problems involving knowledge of the real world and make decisions that appear subjective. A simple machine learning algorithm called logistic regression can determine whether to recommend cesarean delivery (Mor-Yosef et al., 1990). A simple machine learning algorithm called naive Bayes can separate legitimate e-mail from spam e-mail.
The performance of these simple machine learning algorithms depends heavily on the representation of the data they are given. For example, when logistic regression is used to recommend cesarean delivery, the AI system does not examine the patient directly. Instead, the doctor tells the system several pieces of relevant information, such as the presence or absence of a uterine scar. Each piece of information included in the representation of the patient is known as a feature. Logistic regression learns how each of these features of the patient correlates with various outcomes. However, it cannot influence the way that the features are defined in any way. If logistic regression was given a 3-D MRI image of the patient, rather than the doctor’s formalized report, it would not be able to make useful predictions. Individual voxels1 in an MRI scan have negligible correlation with any complications that might occur during delivery. This dependence on representations is a general phenomenon that appears throughout computer science and even daily life. In computer science, operations such as searching a collection of data can proceed exponentially faster if the collection is structured and indexed intelligently. People can easily perform arithmetic on Arabic numerals, but find arithmetic on Roman numerals much more time consuming. It is not surprising that the choice of representation has an enormous effect on the performance of machine learning algorithms. For a simple visual example, see Fig. 1.1. Many artificial intelligence tasks can be solved by designing the right set of features to extract for that task, then providing these features to a simple machine learning algorithm. For example, a useful feature for speaker identification from sound is the pitch. The pitch can be formally specified—it is the lowest frequency major peak of the spectrogram. It is useful for speaker identification because it is determined by the size of the vocal tract, and therefore gives a strong clue as to whether the speaker is a man, woman, or child. However, for many tasks, it is difficult to know what features should be extracted. For example, suppose that we would like to write a program to detect cars in photographs. We know that cars have wheels, so we might like to use the presence of a wheel as a feature. Unfortunately, it is difficult to describe exactly what a wheel looks like in terms of pixel values. A wheel has a simple geometric shape but its image may be complicated by shadows falling on the wheel, the sun glaring off the metal parts of the wheel, the fender of the car or an object in the foreground obscuring part of the wheel, and so on. One solution to this problem is to use machine learning to discover not only the mapping from representation to output but also the representation itself. This approach is known as representation learning. Learned representations of1
A voxel is the value at a single point in a 3-D scan, much as a pixel is the value at a single point in an image.
Figure 1.1: Example of different representations: suppose we want to separate two categories of data by drawing a line between them in a scatterplot. In the plot on the left, we represent some data using Cartesian coordinates, and the task is impossible. In the plot on the right, we represent the data with polar coordinates and the task becomes simple to solve with a vertical line. (Figure credit: David Warde-Farley)
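As an added, minimal sketch of the same idea (illustrative only; the concentric-ring dataset below is synthetic and hypothetical), converting points from Cartesian to polar coordinates turns a problem that no straight line can solve into one that a single threshold on the radius solves:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes: points near radius 1 and points near radius 2, at random angles.
angles = rng.uniform(0, 2 * np.pi, size=200)
radii = np.concatenate([np.full(100, 1.0), np.full(100, 2.0)])
radii = radii + rng.normal(scale=0.1, size=200)
labels = np.concatenate([np.zeros(100), np.ones(100)])

# Cartesian representation: no single straight line separates the two classes.
x = radii * np.cos(angles)
y = radii * np.sin(angles)

# Polar representation: the radius alone separates them with one threshold.
r = np.sqrt(x**2 + y**2)
predictions = (r > 1.5).astype(float)
print(np.mean(predictions == labels))   # close to 1.0
```

With the polar representation, even the simplest possible classifier succeeds, which is exactly the point the figure makes about the choice of representation.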
ten result in much better performance than can be obtained with hand-designed representations. They also allow AI systems to rapidly adapt to new tasks, with minimal human intervention. A representation learning algorithm can discover a good set of features for a simple task in minutes, or a complex task in hours to months. Manually designing features for a complex task requires a great deal of human time and effort; it can take decades for an entire community of researchers. The quintessential example of a representation learning algorithm is the autoencoder. An autoencoder is the combination of an encoder function that converts the input data into a different representation, and a decoder function that converts the new representation back into the original format. Autoencoders are trained to preserve as much information as possible when an input is run through the encoder and then the decoder, but are also trained to make the new representation have various nice properties. Different kinds of autoencoders aim to achieve different kinds of properties. When designing features or algorithms for learning features, our goal is usually to separate the factors of variation that explain the observed data. In this context, we use the word “factors” simply to refer to separate sources of influence; the factors are usually not combined by multiplication. Such factors are often not quantities that are directly observed but they may exist either as unobserved objects or forces in the physical world that affect observable quantities, or they are constructs in the human mind that provide useful simplifying explanations 4
or inferred causes of the observed data. They can be thought of as concepts or abstractions that help us make sense of the rich variability in the data. When analyzing a speech recording, the factors of variation include the speaker’s age and sex, their accent, and the words that they are speaking. When analyzing an image of a car, the factors of variation include the position of the car, its color, and the angle and brightness of the sun. A major source of difficulty in many real-world artificial intelligence applications is that many of the factors of variation influence every single piece of data we are able to observe. The individual pixels in an image of a red car might be very close to black at night. The shape of the car’s silhouette depends on the viewing angle. Most applications require us to disentangle the factors of variation and discard the ones that we do not care about. Of course, it can be very difficult to extract such high-level, abstract features from raw data. Many of these factors of variation, such as a speaker’s accent, can only be identified using sophisticated, nearly human-level understanding of the data. When it is nearly as difficult to obtain a representation as to solve the original problem, representation learning does not, at first glance, seem to help us. Deep learning solves this central problem in representation learning by introducing representations that are expressed in terms of other, simpler representations. Deep learning allows the computer to build complex concepts out of simpler concepts. Fig. 1.2 shows how a deep learning system can represent the concept of an image of a person by combining simpler concepts, such as corners and contours, which are in turn defined in terms of edges. The quintessential example of a deep learning model is the multilayer perceptron (MLP). A multilayer perceptron is just a mathematical function mapping some set of input values to output values. The function is formed by composing many simpler functions. We can think of each application of a different mathematical function as providing a new representation of the input. The idea of learning the right representation for the data provides one perspective on deep learning. Another perspective on deep learning is that it allows the computer to learn a multi-step computer program. Each layer of the representation can be thought of as the state of the computer’s memory after executing another set of instructions in parallel. Networks with greater depth can execute more instructions in sequence. Being able to execute instructions sequentially offers great power because later instructions can refer back to the results of earlier instructions. According to this view of deep learning, not all of the information in a layer’s representation of the input necessarily encodes factors of variation that explain the input. The representation is also used to store state information that helps to execute a program that can make sense of the input. This state 5
[Figure 1.2 diagram: visible layer (input pixels) → 1st hidden layer (edges) → 2nd hidden layer (corners and contours) → 3rd hidden layer (object parts) → output (object identity: CAR, PERSON, ANIMAL).]
Figure 1.2: Illustration of a deep learning model. It is difficult for a computer to understand the meaning of raw sensory input data, such as this image represented as a collection of pixel values. The function mapping from a set of pixels to an object identity is very complicated. Learning or evaluating this mapping seems insurmountable if tackled directly. Deep learning resolves this difficulty by breaking the desired complicated mapping into a series of nested simple mappings, each described by a different layer of the model. The input is presented at the visible layer, so named because it contains the variables that we are able to observe. Then a series of hidden layers extracts increasingly abstract features from the image. These layers are called “hidden” because their values are not given in the data; instead the model must determine which concepts are useful for explaining the relationships in the observed data. The images here are visualizations of the kind of feature represented by each hidden unit. Given the pixels, the first layer can easily identify edges, by comparing the brightness of neighboring pixels. Given the first hidden layer’s description of the edges, the second hidden layer can easily search for corners and extended contours, which are recognizable as collections of edges. Given the second hidden layer’s description of the image in terms of corners and contours, the third hidden layer can detect entire parts of specific objects, by finding specific collections of contours and corners. Finally, this description of the image in terms of the object parts it contains can be used to recognize the objects present in the image. Images reproduced with permission from Zeiler and Fergus (2014). 6
[Figure 1.3 diagrams: two computational graphs for the same logistic regression model with inputs x and weights w, one built from the element set {+, ×, σ} (depth three) and one that treats logistic regression itself as a single element (depth one).]
Figure 1.3: Illustration of computational flow graphs mapping an input to an output where each node performs an operation. Depth is the length of the longest path from input to output but depends on the definition of what constitutes a possible computational step. The computation depicted in these graphs is the output of a logistic regression model, σ(w T x), where σ is the logistic sigmoid function. If we use addition, multiplication, and logistic sigmoids as the elements of our computer language, then this model has depth three. If we view logistic regression as an element itself, then this model has depth one.
information could be analogous to a counter or pointer in a traditional computer program. It has nothing to do with the content of the input specifically, but it helps the model to organize its processing.

There are two main ways of measuring the depth of a model. The first view is based on the number of sequential instructions that must be executed to evaluate the architecture. We can think of this as the length of the longest path through a flow chart that describes how to compute each of the model's outputs given its inputs. Just as two equivalent computer programs will have different lengths depending on which language the program is written in, the same function may be drawn as a flow chart with different depths depending on which functions we allow to be used as individual steps in the flow chart. Fig. 1.3 illustrates how this choice of language can give two different measurements for the same architecture. Another approach, used by deep probabilistic models, examines not the depth of the computational graph but the depth of the graph describing how concepts are related to each other. In this case, the depth of the flow-chart of the computations needed to compute the representation of each concept may be much deeper than the graph of the concepts themselves. This is because the system's understanding of the simpler concepts can be refined given information about the more complex concepts. For example, an AI system observing an image of a face with one eye in shadow may initially only see one eye. After detecting that a face is present, it can
then infer that a second eye is probably present as well. In this case, the graph of concepts only includes two layers—a layer for eyes and a layer for faces—but the graph of computations includes 2n layers if we refine our estimate of each concept given the other n times. Because it is not always clear which of these two views—the depth of the computational graph, or the depth of the probabilistic modeling graph—is most relevant, and because different people choose different sets of smallest elements from which to construct their graphs, there is no single correct value for the depth of an architecture, just as there is no single correct value for the length of a computer program. Nor is there a consensus about how much depth a model requires to qualify as "deep." However, deep learning can safely be regarded as the study of models that involve a greater amount of composition of either learned functions or learned concepts than traditional machine learning does.

To summarize, deep learning, the subject of this book, is an approach to AI. Specifically, it is a type of machine learning, a technique that allows computer systems to improve with experience and data. According to the authors of this book, machine learning is the only viable approach to building AI systems that can operate in complicated, real-world environments. Deep learning is a particular kind of machine learning that achieves great power and flexibility by learning to represent the world as a nested hierarchy of concepts and representations, with each concept defined in relation to simpler concepts, and more abstract representations computed in terms of less abstract ones. Fig. 1.4 illustrates the relationship between these different AI disciplines. Fig. 1.5 gives a high-level schematic of how each works.
1.1 Who Should Read This Book?

This book can be useful for a variety of readers, but we wrote it with two main target audiences in mind. One of these target audiences is university students (undergraduate or graduate) learning about machine learning, including those who are beginning a career in deep learning and artificial intelligence research. The other target audience is software engineers who do not have a machine learning or statistics background, but want to rapidly acquire one and begin using deep learning in their product or platform. Software engineers working in a wide variety of industries are likely to find deep learning to be useful, as it has already proven successful in many areas including computer vision, speech and audio processing, natural language processing, robotics, bioinformatics and chemistry, video games, search engines, online advertising, and finance.

This book has been organized into three parts in order to best accommodate a variety of readers. Part 1 introduces basic mathematical tools and machine learning concepts.
[Figure 1.4 Venn diagram: deep learning (example: MLPs) is contained in representation learning (example: shallow autoencoders), which is contained in machine learning (example: logistic regression), which is contained in AI (example: knowledge bases).]
Figure 1.4: A Venn diagram showing how deep learning is a kind of representation learning, which is in turn a kind of machine learning, which is used for many but not all approaches to AI. Each section of the Venn diagram includes an example of an AI technology.
Figure 1.5: Flow-charts showing how the different parts of an AI system relate to each other within different AI disciplines. Shaded boxes indicate components that are able to learn from data.
Part 2 describes the most established deep learning algorithms that are essentially solved technologies. Part 3 describes more speculative ideas that are widely believed to be important for future research in deep learning. Readers should feel free to skip parts that are not relevant given their interests or background. Readers familiar with linear algebra, probability, and fundamental machine learning concepts can skip part 1, for example, while readers who just want to implement a working system need not read beyond part 2. We do assume that all readers come from a computer science background. We assume familiarity with programming, a basic understanding of computational performance issues, complexity theory, introductory level calculus, and some of the terminology of graph theory.
1.2 Historical Trends in Deep Learning
It is easiest to understand deep learning with some historical context. Rather than providing a detailed history of deep learning, we identify a few key trends:

• Deep learning has had a long and rich history, but has gone by many names reflecting different philosophical viewpoints, and has waxed and waned in popularity.
• Deep learning has become more useful as the amount of available training data has increased.
• Deep learning models have grown in size over time as computer hardware and software infrastructure for deep learning has improved.
• Deep learning has solved increasingly complicated applications with increasing accuracy over time.
1.2.1 The Many Names and Changing Fortunes of Neural Networks
We expect that many readers of this book have heard of deep learning as an exciting new technology, and are surprised to see a mention of “history” in a book about an emerging field. In fact, deep learning has a long and rich history. Deep learning only appears to be new, because it was relatively unpopular for several years preceding its current popularity, and because it has gone through many different names. While the term “deep learning” is relatively new, the field dates back to the 1950s. The field has been rebranded many times, reflecting the influence of different researchers and different perspectives. 11
A comprehensive history of deep learning is beyond the scope of this pedagogical textbook. However, some basic context is useful for understanding deep learning. Broadly speaking, there have been three waves of development of deep learning: deep learning known as cybernetics in the 1940s-1960s, deep learning known as connectionism in the 1980s-1990s, and the current resurgence under the name deep learning beginning in 2006. See Figure 1.6 for a basic timeline.
Figure 1.6: The three historical waves of artificial neural nets research, starting with cybernetics in the 1940-1960’s, with the perceptron (Rosenblatt, 1958) to train a single neuron, then the connectionist approach of the 1980-1995 period, with backpropagation (Rumelhart et al., 1986a) to train a neural network with one or two hidden layers, and the current wave, deep learning, started around 2006 (Hinton et al., 2006; Bengio et al., 2007a; Ranzato et al., 2007a), which allows us to train very deep networks.
Some of the earliest learning algorithms we recognize today were intended to be computational models of biological learning, i.e. models of how learning happens or could happen in the brain. As a result, one of the names that deep learning has gone by is artificial neural networks (ANNs). The corresponding perspective on deep learning models is that they are engineered systems inspired by the biological brain (whether the human brain or the brain of another animal). The neural perspective on deep learning is motivated by two main ideas. One idea is that the brain provides a proof by example that intelligent behavior is possible, and a conceptually straightforward path to building intelligence is to reverse engineer the computational principles behind the brain and duplicate its functionality. Another perspective is that it would be deeply interesting to understand the brain and the principles that underlie human intelligence, so machine learning models that shed light on these basic scientific questions are useful apart from their ability to solve engineering applications. 12
The modern term "deep learning" goes beyond the neuroscientific perspective on the current breed of machine learning models. It appeals to a more general principle of learning multiple levels of composition, which can be applied in machine learning frameworks that are not necessarily neurally inspired.

The earliest predecessors of modern deep learning were simple linear models motivated from a neuroscientific perspective. These models were designed to take a set of n input values x1, . . . , xn and associate them with an output y. These models would learn a set of weights w1, . . . , wn and compute their output f(x, w) = x1 w1 + · · · + xn wn. This first wave of neural networks research was known as cybernetics (see Fig. 1.6). The McCulloch-Pitts Neuron (McCulloch and Pitts, 1943) was an early model of brain function. This linear model could recognize two different categories of inputs by testing whether f(x, w) is positive or negative. Of course, for the model to correspond to the desired definition of the categories, the weights needed to be set correctly. These weights could be set by the human operator. In the 1950s, the perceptron (Rosenblatt, 1958, 1962) became the first model that could learn the weights defining the categories given examples of inputs from each category. The Adaptive Linear Element (ADALINE), which dates from about the same time, simply returned the value of f(x) itself to predict a real number (Widrow and Hoff, 1960), and could also learn to predict these numbers from data.

These simple learning algorithms greatly affected the modern landscape of machine learning. The training algorithm used to adapt the weights of the ADALINE was a special case of an algorithm called stochastic gradient descent. Slightly modified versions of the stochastic gradient descent algorithm remain the dominant training algorithms for deep learning models today. Models based on the f(x, w) used by the perceptron and ADALINE are called linear models. These models remain some of the most widely used machine learning models, though in many cases they are trained in different ways than the original models were trained.

Linear models have many limitations. Most famously, they cannot learn the XOR function, where f([0, 1], w) = 1 and f([1, 0], w) = 1 but f([1, 1], w) = 0 and f([0, 0], w) = 0. Critics who observed these flaws in linear models caused a backlash against biologically inspired learning in general (Minsky and Papert, 1969). This is the first dip in the popularity of neural networks in our broad timeline (Fig. 1.6).

Today, neuroscience is regarded as an important source of inspiration for deep learning researchers, but it is no longer the predominant guide for the field. The main reason for the diminished role of neuroscience in deep learning research today is that we simply do not have enough information about the brain to use it as a guide. To obtain a deep understanding of the actual algorithms
used by the brain, we would need to be able to monitor the activity of (at the very least) thousands of interconnected neurons simultaneously. Because we are not able to do this, we are far from understanding even some of the most simple and well-studied parts of the brain (Olshausen and Field, 2005). Neuroscience has given us a reason to hope that a single deep learning algorithm can solve many different tasks. Neuroscientists have found that ferrets can learn to “see” with the auditory processing region of their brain if their brains are rewired to send visual signals to that area (Von Melchner et al., 2000). This suggests that much of the mammalian brain might use a single algorithm to solve most of the different tasks that the brain solves. Before this hypothesis, machine learning research was more fragmented, with different communities of researchers studying natural language processing, vision, motion planning, and speech recognition. Today, these application communities are still separate, but it is common for deep learning research groups to study many or even all of these application areas simultaneously. We are able to draw some rough guidelines from neuroscience. The basic idea of having many computational units that become intelligent only via their interactions with each other is inspired by the brain. The Neocognitron (Fukushima, 1980) introduced a powerful model architecture for processing images that was inspired by the structure of the mammalian visual system and later became the basis for the modern convolutional network (LeCun et al., 1998a), as we will see in Chapter 9.11. Most neural networks today are based on a model neuron called the rectified linear unit. These units were developed from a variety of viewpoints, with (Nair and Hinton, 2010b) and Glorot et al. (2011a) citing neuroscience as an influence, and Jarrett et al. (2009a) citing more engineering-oriented influences. While neuroscience is an important source of inspiration, it need not be taken as a rigid guide. We know that actual neurons compute very different functions than modern rectified linear units, but greater neural realism has not yet found a machine learning value or interpretation. Also, while neuroscience has successfully inspired several neural network architectures, we do not yet know enough about biological learning for neuroscience to offer much guidance for the learning algorithms we use to train these architectures. Media accounts often emphasize the similarity of deep learning to the brain. While it is true that deep learning researchers are more likely to cite the brain as an influence than researchers working in other machine learning fields such as kernel machines or Bayesian statistics, one should not view deep learning as an attempt to simulate the brain. Modern deep learning draws inspiration from many fields, especially applied math fundamentals like linear algebra, probability, information theory, and numerical optimization. While some deep learning researchers cite neuroscience as an important influence, others are not concerned 14
with neuroscience at all. It is worth noting that the effort to understand how the brain works on an algorithmic level is alive and well. This endeavor is primarily known as “computational neuroscience” and is a separate field of study from deep learning. It is common for researchers to move back and forth between both fields. The field of deep learning is primarily concerned with how to build computer systems that are able to successfully solve tasks requiring intelligence, while the field of computational neuroscience is primarily concerned with building more accurate models of how the brain actually works. In the 1980s, the second wave of neural network research emerged in great part via a movement called connectionism or parallel distributed processing (Rumelhart et al., 1986d). Connectionism arose in the context of cognitive science. Cognitive science is an interdisciplinary approach to understanding the mind, combining multiple different levels of analysis. During the early 1980s, most cognitive scientists studied models of symbolic reasoning. Despite their popularity, symbolic models were difficult to explain in terms of how the brain could actually implement them using neurons. The connectionists began to study models of cognition that could actually be grounded in neural implementations, reviving many ideas dating back to the work of psychologist Donald Hebb in the 1940s (Hebb, 1949). The central idea in connectionism is that a large number of simple computational units can achieve intelligent behavior when networked together. This insight applies equally to neurons in biological nervous systems and to hidden units in computational models. Several key concepts arose during the connectionism movement of the 1980s that remain central to today’s deep learning. One of these concepts is that of distributed representation. This is the idea that each input to a system should be represented by many features, and each feature should be involved in the representation of many possible inputs. For example, suppose we have a vision system that can recognize cars, trucks, and birds and these objects can each be red, green, or blue. One way of representing these inputs would be to have a separate neuron or hidden unit that activates for each of the nine possible combinations: red truck, red car, red bird, green truck, and so on. This requires nine different neurons, and each neuron must independently learn the concept of color and object identity. One way to improve on this situation is to use a distributed representation, with three neurons describing the color and three neurons describing the object identity. This requires only six neurons total instead of nine, and the neuron describing redness is able to learn about redness from images of cars, trucks, and birds, not only from images of one specific category of objects. The concept of distributed representation is central to this book, and will be described in greater detail in Chapter 16. 15
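As an added, concrete sketch of the counting argument above (illustrative only), the one-hot encoding of the nine color/object combinations needs nine units, while a distributed encoding needs three units for color plus three units for object identity:

```python
import numpy as np

colors = ["red", "green", "blue"]
objects = ["car", "truck", "bird"]

def one_hot(index, length):
    v = np.zeros(length)
    v[index] = 1.0
    return v

# Non-distributed: one unit per (color, object) pair, 9 units in total.
pairs = [(c, o) for c in colors for o in objects]
joint = one_hot(pairs.index(("red", "truck")), len(pairs))       # 9-dimensional

# Distributed: 3 units for color plus 3 units for object identity, 6 units in total.
factored = np.concatenate([one_hot(colors.index("red"), 3),
                           one_hot(objects.index("truck"), 3)])  # 6-dimensional

print(joint.size, factored.size)   # 9 6
```

In the factored encoding, the unit for "red" is active for red cars, red trucks, and red birds alike, so whatever it learns about redness is shared across all three object categories.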
Another major accomplishment of the connectionist movement was the successful use of back-propagation to train deep neural networks with internal representations and the popularization of the back-propagation algorithm (Rumelhart et al., 1986a; LeCun, 1987). This algorithm has waxed and waned in popularity but as of this writing is currently the dominant approach to training deep models.

The second wave of neural networks research lasted until the mid-1990s. At that point, the popularity of neural networks declined again. This was in part due to a negative reaction to the failure of neural networks (and AI research in general) to fulfill excessive promises made by a variety of people seeking investment in neural network-based ventures, but also due to improvements in other fields of machine learning: kernel machines (Boser et al., 1992; Cortes and Vapnik, 1995; Schölkopf et al., 1999) and graphical models (Jordan, 1998). Kernel machines enjoy many nice theoretical guarantees. In particular, training a kernel machine is a convex optimization problem (this will be explained in more detail in Chapter 4) which means that the training process can be guaranteed to find the optimal model efficiently. This made kernel machines very amenable to software implementations that "just work" without much need for the human operator to understand the underlying ideas. Soon, most machine learning applications consisted of manually designing good features to provide to a kernel machine for each different application area.

During this time, neural networks continued to obtain impressive performance on some tasks (LeCun et al., 1998b; Bengio et al., 2001a). The Canadian Institute for Advanced Research (CIFAR) helped to keep neural networks research alive via its Neural Computation and Adaptive Perception research initiative. This program united machine learning research groups led by Geoffrey Hinton at University of Toronto, Yoshua Bengio at University of Montreal, and Yann LeCun at New York University. It had a multi-disciplinary nature that also included neuroscientists and experts in human and computer vision. At this point in time, deep networks were generally believed to be very difficult to train. We now know that algorithms that have existed since the 1980s work quite well, but this was not apparent circa 2006. The issue is perhaps simply that these algorithms were too computationally costly to allow much experimentation with the hardware available at the time.

The third wave of neural networks research began with a breakthrough in 2006. Geoffrey Hinton showed that a kind of neural network called a deep belief network could be efficiently trained using a strategy called greedy layer-wise pretraining (Hinton et al., 2006), which will be described in more detail in Chapter 16.1. The other CIFAR-affiliated research groups quickly showed that the same strategy could be used to train many other kinds of deep networks (Bengio et al., 2007a; Ranzato et al., 2007a) and systematically helped to improve generalization on test examples. This wave of neural networks research popularized the use of the term deep learning to emphasize that researchers were now able to train deeper neural networks than had been possible before, and to emphasize the theoretical importance of depth (Bengio and LeCun, 2007a; Delalleau and Bengio, 2011; Pascanu et al., 2014a; Montufar et al., 2014). Deep neural networks displaced kernel machines with manually designed features for several important application areas during this time—in part because the time and memory cost of training a kernel machine is quadratic in the size of the dataset, and datasets grew to be large enough for this cost to outweigh the benefits of convex optimization. This third wave of popularity of neural networks continues to the time of this writing, though the focus of deep learning research has changed dramatically within the time of this wave. The third wave began with a focus on new unsupervised learning techniques and the ability of deep models to generalize well from small datasets, but today there is more interest in much older supervised learning algorithms and the ability of deep models to leverage large labeled datasets.
1.2.2 Increasing Dataset Sizes
One may wonder why deep learning has only recently become recognized as a crucial technology if it has existed since the 1950s. Deep learning has been successfully used in commercial applications since the 1990s, but was often regarded as being more of an art than a technology and something that only an expert could use, until recently. It is true that some skill is required to get good performance from a deep learning algorithm. Fortunately, the amount of skill required reduces as the amount of training data increases. The learning algorithms reaching human performance on complex tasks today are nearly identical to the learning algorithms that struggled to solve toy problems in the 1980s, though the models we train with these algorithms have undergone changes that simplify the training of very deep architectures. The most important new development is that today we can provide these algorithms with the resources they need to succeed. Fig. 1.7 shows how the size of benchmark datasets has increased remarkably over time. This trend is driven by the increasing digitization of society. As more and more of our activities take place on computers, more and more of what we do is recorded. As our computers are increasingly networked together, it becomes easier to centralize these records and curate them into a dataset appropriate for machine learning applications. The age of “Big Data” has made machine learning much easier because the key burden of statistical estimation—generalizing well to new data after observing only a small amount of data—has been considerably lightened. As of 2015, a rough rule of thumb is that a supervised deep learning algorithm will generally achieve acceptable performance with around 5,000 labeled examples per category, and will match or exceed human performance when 17
trained with a dataset containing at least 10 million labeled examples. Working successfully with datasets smaller than this is an important research area, focusing in particular on how we can take advantage of large quantities of unlabeled examples, with unsupervised or semi-supervised learning.
1.2.3 Increasing Model Sizes
Another key reason that neural networks are wildly successful today after enjoying comparatively little success since the 1980s is that we have the computational resources to run much larger models today. One of the main insights of connectionism is that animals become intelligent when many of their neurons work together. An individual neuron or small collection of neurons is not particularly useful. Biological neurons are not especially densely connected. As seen in Fig. 1.8, our machine learning models have had a number of connections per neuron that was within an order of magnitude of even mammalian brains for decades. In terms of the total number of neurons, neural networks have been astonishingly small until quite recently, as shown in Fig. 1.9. Since the introduction of hidden units, artificial neural networks have doubled in size roughly every 2.4 years. This growth is driven by faster computers with larger memory and by the availability of larger datasets. Larger networks are able to achieve higher accuracy on more complex tasks. This trend looks set to continue for decades. Unless new technologies allow faster scaling, artificial neural networks will not have the same number of neurons as the human brain until at least the 2050s. Biological neurons may represent more complicated functions than current artificial neurons, so biological neural networks may be even larger than this plot portrays. In retrospect, it is not particularly surprising that neural networks with fewer neurons than a leech were unable to solve sophisticated artificial intelligence problems. Even today’s networks, which we consider quite large from a computational systems point of view, are smaller than the nervous system of even relatively primitive vertebrate animals like frogs. The increase in model size over time, due to the availability of faster CPUs, the advent of general purpose GPUs, faster network connectivity, and better software infrastructure for distributed computing, is one of the most important trends in the history of deep learning. This trend is generally expected to continue well into the future.
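The projection to the 2050s follows from simple arithmetic. The added calculation below is only a rough sketch, assuming the largest 2015-era networks have on the order of 10^6 to 10^7 neurons and the human brain on the order of 10^11:

```python
import math

doubling_period = 2.4          # years per doubling of network size
current_neurons = 1e7          # assumed order of magnitude for 2015
human_brain_neurons = 1e11     # assumed order of magnitude

doublings = math.log2(human_brain_neurons / current_neurons)
years = doublings * doubling_period
print(2015 + years)            # roughly 2047; starting from 1e6 neurons gives roughly 2055
```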
[Figure 1.7 plot: "Increasing dataset size over time"; dataset size (number of examples, logarithmic scale) versus year (logarithmic scale), with labeled datasets including Iris, T vs G vs F, Rotated T vs C, MNIST, Criminals, CIFAR-10, Public SVHN, ImageNet, ILSVRC 2014, ImageNet10k, Sports-1M, and WMT English→French.]
Figure 1.7: Dataset sizes have increased greatly over time. In the early 1900s, statisticians studied datasets using hundreds or thousands of manually compiled measurements (Garson, 1900; Gosset, 1908; Anderson, 1935; Fisher, 1936). In the 1950s through 1980s, the pioneers of biologically-inspired machine learning often worked with small, synthetic datasets, such as low-resolution bitmaps of letters, that were designed to incur low computational cost and demonstrate that neural networks were able to learn specific kinds of functions (Widrow and Hoff, 1960; Rumelhart et al., 1986b). In the 1980s and 1990s, machine learning became more statistical in nature and began to leverage larger datasets containing tens of thousands of examples such as the MNIST dataset of scans of handwritten numbers (LeCun et al., 1998b). In the first decade of the 2000s, more sophisticated datasets of this same size, such as the CIFAR-10 dataset (Krizhevsky and Hinton, 2009) continued to be produced. Toward the end of that decade and throughout the first half of the 2010s, significantly larger datasets, containing hundreds of thousands to tens of millions of examples, completely changed what was possible with deep learning. These datasets included the public Street View House Numbers dataset(Netzer et al., 2011), various versions of the ImageNet dataset (Deng et al., 2009, 2010a; Russakovsky et al., 2014a), and the Sports-1M dataset (Karpathy et al., 2014). At the top of the graph, we see that datasets of translated sentences, such as IBM’s dataset constructed from the Canadian Hansard (Brown et al., 1990) and the WMT 2014 dataset (Schwenk, 2014) are typically far ahead of other dataset sizes. 19
[Figure 1.8 plot: "Number of connections per neuron over time"; connections per neuron (logarithmic scale) versus year, with reference levels for human, cat, mouse, and fruit fly brains and numbered points for the networks listed in the caption below.]
Figure 1.8: Initially, the number of connections between neurons in artificial neural networks was limited by hardware capabilities. Today, the number of connections between neurons is mostly a design consideration. Some artificial neural networks have nearly as many connections per neuron as a cat, and it is quite common for other neural networks to have as many connections per neuron as smaller mammals like mice. Even the human brain does not have an exorbitant amount of connections per neuron. The sparse connectivity of biological neural networks means that our artificial networks are able to match the performance of biological neural networks despite limited hardware. Modern neural networks are much smaller than the brains of any vertebrate animal, but we typically train each network to perform just one task, while an animal’s brain has different areas devoted to different tasks. Biological neural network sizes from Wikipedia (2015). 1. Adaptive Linear Element (Widrow and Hoff, 1960) 2. Neocognitron (Fukushima, 1980) 3. GPU-accelerated convolutional network (Chellapilla et al., 2006) 4. Deep Boltzmann machines (Salakhutdinov and Hinton, 2009a) 5. Unsupervised convolutional network (Jarrett et al., 2009b) 6. GPU-accelerated multilayer perceptron (Ciresan et al., 2010) 7. Distributed autoencoder (Le et al., 2012) 8. Multi-GPU convolutional network (Krizhevsky et al., 2012a) 9. COTS HPC unsupervised convolutional network (Coates et al., 2013) 10. GoogLeNet (Szegedy et al., 2014a)
1.2.4 Increasing Accuracy, Application Complexity, and Real-World Impact
Since the 1980s, deep learning has consistently improved in its ability to provide accurate recognition or prediction. Moreover, deep learning has consistently been applied with success to broader and broader sets of applications. The earliest deep models were used to recognize individual objects in tightly cropped, extremely small images (Rumelhart et al., 1986a). Since then there has been a gradual increase in the size of images neural networks could process. Modern object recognition networks process rich high-resolution photographs and do not have a requirement that the photo be cropped near the object to be recognized(Krizhevsky et al., 2012b). Similarly, the earliest networks could only recognize two kinds of objects (or in some cases, the absence or presence of a single kind of object), while these modern networks typically recognize at least 1,000 different categories of objects. The largest contest in object recognition is the ImageNet Large-Scale Visual Recognition Competition held each year. A dramatic moment in the meteoric rise of deep learning came when a convolutional network won this challenge for the first time and by a wide margin, bringing down the state-of-the-art error rate from 26.1% to 15.3% (Krizhevsky et al., 2012b). Since then, these competitions are consistently won by deep convolutional nets, and as of this writing, advances in deep learning had brought the latest error rate in this contest down to 6.5% as shown in Fig. 1.10, using even deeper networks (Szegedy et al., 2014a). Outside the framework of the contest, this error rate has now dropped to below 5% (Ioffe and Szegedy, 2015; Wu et al., 2015). Deep learning has also had a dramatic impact on speech recognition. After improving throughout the 1990s, the error rates for speech recognition stagnated starting in about 2000. The introduction of deep learning (Dahl et al., 2010; Deng et al., 2010b; Seide et al., 2011; Hinton et al., 2012a) to speech recognition resulted in a sudden drop of error rates by up to half! We will explore this history in more detail in Chapter 12.3.1. Deep networks have also had spectacular successes for pedestrian detection and image segmentation (Sermanet et al., 2013; Farabet et al., 2013a; Couprie et al., 2013) and yielded superhuman performance in traffic sign classification (Ciresan et al., 2012). At the same time that the scale and accuracy of deep networks has increased, so has the complexity of the tasks that they can solve. Goodfellow et al. (2014d) showed that neural networks could learn to output an entire sequence of characters transcribed from an image, rather than just identifying a single object. Previously, it was widely believed that this kind of learning required labeling of the individual elements of the sequence (G¨ul¸cehre and Bengio, 2013). Since this time, a neural network designed to model sequences, the Long Short-Term Memory or LSTM 21
(Hochreiter and Schmidhuber, 1997), has enjoyed an explosion in popularity. LSTMs and related models are now used to model relationships between sequences and other sequences rather than just fixed inputs. This sequence-to-sequence learning seems to be on the cusp of revolutionizing another application: machine translation (Sutskever et al., 2014a; Bahdanau et al., 2014). This trend of increasing complexity has been pushed to its logical conclusion with the introduction of the Neural Turing Machine (Graves et al., 2014), a neural network that can learn entire programs. This neural network has been shown to be able to learn how to sort lists of numbers given examples of scrambled and sorted sequences. This self-programming technology is in its infancy, but in the future could in principle be applied to nearly any task. Many of these applications of deep learning are highly profitable, given enough data to apply deep learning to. Deep learning is now used by many top technology companies including Google, Microsoft, Facebook, IBM, Baidu, Apple, Adobe, Netflix, NVIDIA and NEC. Deep learning has also made contributions back to other sciences. Modern convolutional networks for object recognition provide a model of visual processing that neuroscientists can study (DiCarlo, 2013). Deep learning also provides useful tools for processing massive amounts of data and making useful predictions in scientific fields. It has been successfully used to predict how molecules will interact in order to help pharmaceutical companies design new drugs (Dahl et al., 2014), to search for subatomic particles (Baldi et al., 2014), and to automatically parse microscope images used to construct a 3-D map of the human brain (KnowlesBarley et al., 2014). We expect deep learning to appear in more and more scientific fields in the future. In summary, deep learning is an approach to machine learning that has drawn heavily on our knowledge of the human brain, statistics and applied math as it developed over the past several decades. In recent years, it has seen tremendous growth in its popularity and usefulness, due in large part to more powerful computers, larger datasets and techniques to train deeper networks. The years ahead are full of challenges and opportunities to improve deep learning even further and bring it to new frontiers.
[Figure 1.9 plot: "Increasing neural network size over time"; number of neurons (logarithmic scale) versus year, with reference levels for organisms ranging from sponge and roundworm through leech, ant, bee, frog, and octopus up to human, and numbered points for the networks listed in the caption below.]
Figure 1.9: Since the introduction of hidden units, artificial neural networks have doubled in size roughly every 2.4 years. Biological neural network sizes from Wikipedia (2015). 1. Perceptron (Rosenblatt, 1958, 1962) 2. Adaptive Linear Element (Widrow and Hoff, 1960) 3. Neocognitron (Fukushima, 1980) 4. Early backpropagation network (Rumelhart et al., 1986b) 5. Recurrent neural network for speech recognition (Robinson and Fallside, 1991) 6. Multilayer perceptron for speech recognition (Bengio et al., 1991) 7. Mean field sigmoid belief network (Saul et al., 1996) 8. LeNet-5 (LeCun et al., 1998a) 9. Echo state network (Jaeger and Haas, 2004) 10. Deep belief network (Hinton et al., 2006) 11. GPU-accelerated convolutional network (Chellapilla et al., 2006) 12. Deep Boltzmann machines (Salakhutdinov and Hinton, 2009a) 13. GPU-accelerated deep belief network (Raina et al., 2009) 14. Unsupervised convolutional network (Jarrett et al., 2009b) 15. GPU-accelerated multilayer perceptron (Ciresan et al., 2010) 16. OMP-1 network (Coates and Ng, 2011) 17. Distributed autoencoder (Le et al., 2012) 18. Multi-GPU convolutional network (Krizhevsky et al., 2012a) 19. COTS HPC unsupervised convolutional network (Coates et al., 2013) 20. GoogLeNet (Szegedy et al., 2014a)
Figure 1.10: Since deep networks reached the scale necessary to compete in the ImageNet Large Scale Visual Recognition, they have consistently won the competition every year, and yielded lower and lower error rates each time. Data from Russakovsky et al. (2014b).
Part I
Applied Math and Machine Learning Basics
This part of the book introduces the basic mathematical concepts needed to understand deep learning. We begin with general ideas from applied math, that allow us to define functions of many variables, find the highest and lowest points on these functions, and quantify degrees of belief. Next, we describe the fundamental goals of machine learning. We describe how to accomplish these goals by specifying a model that represents certain beliefs, designing a cost function that measures how well those beliefs correspond with reality, and using a training algorithm to minimize that cost function. This elementary framework is the basis for a broad variety of machine learning algorithms, including approaches to machine learning that are not deep. In the subsequent parts of the book, we develop deep learning algorithms within this framework.
Chapter 2
Linear Algebra Linear algebra is a branch of mathematics that is widely used throughout science and engineering. However, because linear algebra is a form of continuous rather than discrete mathematics, many computer scientists have little experience with it. A good understanding of linear algebra is essential for understanding and working with many machine learning algorithms, especially deep learning algorithms. We therefore begin the technical content of the book with a focused presentation of the key linear algebra ideas that are most important in deep learning. If you are already familiar with linear algebra, feel free to skip this chapter. If you have previous experience with these concepts but need a detailed reference sheet to review key formulas, we recommend The Matrix Cookbook (Petersen and Pedersen, 2006). If you have no exposure at all to linear algebra, this chapter will teach you enough to read this book, but we highly recommend that you also consult another resource focused exclusively on teaching linear algebra, such as (Shilov, 1977). This chapter will completely omit many important linear algebra topics that are not essential for understanding deep learning.
2.1 Scalars, Vectors, Matrices and Tensors
The study of linear algebra involves several types of mathematical objects:

• Scalars: A scalar is just a single number, in contrast to most of the other objects studied in linear algebra, which are usually arrays of multiple numbers. We write scalars in italics. We usually give scalars lower-case variable names. When we introduce them, we specify what kind of number they are. For example, we might say "Let s ∈ R be the slope of the line," while defining a real-valued scalar, or "Let n ∈ N be the number of units," while defining a natural number scalar.
• Vectors: A vector is an array of numbers. The numbers have an order to them, and we can identify each individual number by its index in that ordering. Typically we give vectors lower case names written in bold typeface, such as x. The elements of the vector are identified by writing its name in italic typeface, with a subscript. The first element of x is x1, the second element is x2, and so on. We also need to say what kind of numbers are stored in the vector. If each element is in R, and the vector has n elements, then the vector lies in the set formed by taking the Cartesian product of R n times, denoted as R^n. When we need to explicitly identify the elements of a vector, we write them as a column enclosed in square brackets:

        [ x1 ]
    x = [ x2 ]
        [ ⋮  ]
        [ xn ]
We can think of vectors as identifying points in space, with each element giving the coordinate along a different axis. Sometimes we need to index a set of elements of a vector. In this case, we define a set containing the indices, and write the set as a subscript. For example, to access x1, x3, and x6, we define the set S = {1, 3, 6} and write xS. We use the − sign to index the complement of a set. For example x−1 is the vector containing all elements of x except for x1, and x−S is the vector containing all of the elements of x except for x1, x3, and x6.

• Matrices: A matrix is a 2-D array of numbers, so each element is identified by two indices instead of just one. We usually give matrices upper-case variable names with bold typeface, such as A. If a real-valued matrix A has a height of m and a width of n, then we say that A ∈ R^{m×n}. We usually identify the elements of a matrix using its name in italic but not bold font, and the indices are listed with separating commas. For example, A1,1 is the upper left entry of A and Am,n is the bottom right entry. We can identify all of the numbers with vertical coordinate i by writing a ":" for the horizontal coordinate. For example, Ai,: denotes the horizontal cross section of A with vertical coordinate i. This is known as the i-th row of A. Likewise, A:,i is the i-th column of A. When we need to explicitly identify the elements of a matrix, we write them as an array enclosed in square brackets:

    [ A1,1  A1,2 ]
    [ A2,1  A2,2 ]
$$A = \begin{bmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ a_{3,1} & a_{3,2} \end{bmatrix} \quad\Rightarrow\quad A^{\top} = \begin{bmatrix} a_{1,1} & a_{2,1} & a_{3,1} \\ a_{1,2} & a_{2,2} & a_{3,2} \end{bmatrix}$$

Figure 2.1: The transpose of the matrix can be thought of as a mirror image across the main diagonal.
Sometimes we may need to index matrix-valued expressions that are not just a single letter. In this case, we use subscripts after the expression, but do not convert anything to lower case. For example, f(A)i,j gives element (i, j) of the matrix computed by applying the function f to A.

• Tensors: In some cases we will need an array with more than two axes. In the general case, an array of numbers arranged on a regular grid with a variable number of axes is known as a tensor. We denote a tensor named “A” with this typeface: A. We identify the element of A at coordinates (i, j, k) by writing Ai,j,k.

By convention, we consider that adding (or subtracting) a scalar and a vector yields a vector with the additive operation performed on each element. The same thing happens with a matrix or a tensor. This is called a broadcasting operation in Python’s numpy library.

One important operation on matrices is the transpose. The transpose of a matrix is the mirror image of the matrix across a diagonal line, called the main diagonal, running down and to the right, starting from its upper left corner. See Fig. 2.1 for a graphical depiction of this operation. We denote the transpose of a matrix A as A⊤, and it is defined such that (A⊤)i,j = Aj,i. Vectors can be thought of as matrices that contain only one column. The transpose of a vector is therefore a matrix with only one row. Sometimes we define a vector by writing out its elements in the text inline as a row matrix, then using the transpose operator to turn it into a standard column vector, e.g. x = [x1, x2, x3]⊤.

We can add matrices to each other, as long as they have the same shape, just by adding their corresponding elements: C = A + B where Ci,j = Ai,j + Bi,j. We can also add a scalar to a matrix or multiply a matrix by a scalar, just by performing that operation on each element of a matrix: D = a · B + c where Di,j = a · Bi,j + c.
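As an illustrative aside (not part of the original text), the numpy sketch below shows how these objects and operations look in code; all of the specific array values are arbitrary examples.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])            # a vector in R^3
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])               # a matrix in R^(3x2)
T = np.zeros((2, 3, 4))                  # a tensor with three axes, shape (2, 3, 4)

print(A.T)                               # transpose: (A^T)_{i,j} = A_{j,i}
print(A[0, :], A[:, 1])                  # first row and second column of A
print(A + 1.0)                           # adding a scalar acts element-wise
print(A * 2.0 + 1.0)                     # D = a*B + c, applied element-wise

b = np.array([10.0, 20.0])
print(A + b)                             # broadcasting: b is added to every row of A
```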
2.2 Multiplying Matrices and Vectors

One of the most important operations involving matrices is multiplication of two matrices. The matrix product of matrices A and B is a third matrix C. In order for this product to be defined, A must have the same number of columns as B has rows. If A is of shape m × n and B is of shape n × p, then C is of shape m × p. We can write the matrix product just by placing two or more matrices together, e.g. C = AB. The product operation is defined by

Ci,j = Σk Ai,k Bk,j.
Note that the standard product of two matrices is not just a matrix containing the product of the individual elements. Such an operation exists and is called the element-wise product or Hadamard product, and is denoted in this book¹ as A ⊙ B. The dot product between two vectors x and y of the same dimensionality is the matrix product x⊤y. We can think of the matrix product C = AB as computing Ci,j as the dot product between row i of A and column j of B.

Matrix product operations have many useful properties that make mathematical analysis of matrices more convenient. For example, matrix multiplication is distributive: A(B + C) = AB + AC. It is also associative: A(BC) = (AB)C. Matrix multiplication is not commutative, unlike scalar multiplication. The transpose of a matrix product also has a simple form: (AB)⊤ = B⊤A⊤. Since the focus of this textbook is not linear algebra, we do not attempt to develop a comprehensive list of useful properties of the matrix product here, but the reader should be aware that many more exist.

We now know enough linear algebra notation to write down a system of linear equations:

Ax = b    (2.1)

where A ∈ Rm×n is a known matrix, b ∈ Rm is a known vector, and x ∈ Rn is a vector of unknown variables we would like to solve for. Each element xi of x is one of these unknowns to solve for.

¹ The element-wise product is used relatively rarely, so the notation for it is not as standardized as the other operations described in this chapter.
$$I_3 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$

Figure 2.2: Example identity matrix: This is I3.
Each row of A and each element of b provide another constraint. We can rewrite equation 2.1 as:

A1,:x = b1
A2,:x = b2
...
Am,:x = bm

or, even more explicitly, as:

A1,1x1 + A1,2x2 + · · · + A1,nxn = b1
A2,1x1 + A2,2x2 + · · · + A2,nxn = b2
...
Am,1x1 + Am,2x2 + · · · + Am,nxn = bm.

Matrix-vector product notation provides a more compact representation for equations of this form.
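As a brief illustration (not from the original text, and using arbitrary example values), the following numpy sketch contrasts the matrix product with the element-wise product and evaluates a small system of the form Ax = b.

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

C = A @ B                      # matrix product: C_{i,j} = sum_k A_{i,k} B_{k,j}
H = A * B                      # element-wise (Hadamard) product
x = np.array([1.0, -1.0])
y = np.array([2.0, 5.0])
print(x @ y)                   # dot product x^T y

# The transpose of a product: (AB)^T = B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))

# A system of linear equations Ax = b: each row of A is one constraint.
b = np.array([5.0, 6.0])
print(A @ x, b)                # for an arbitrary x, A x generally differs from b
```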
2.3 Identity and Inverse Matrices

Linear algebra offers a powerful tool called matrix inversion that allows us to solve equation 2.1 for many values of A. To describe matrix inversion, we first need to define the concept of an identity matrix. An identity matrix is a matrix that does not change any vector when we multiply that vector by that matrix. We denote the n-dimensional identity matrix as In. Formally, ∀x ∈ Rn, In x = x. The structure of the identity matrix is simple: all of the entries along the main diagonal are 1, while all of the other entries are zero. See Fig. 2.2 for an example.
The matrix inverse of A is denoted as A−1, and it is defined as the matrix such that

A−1A = In.

We can now solve equation 2.1 by the following steps:

Ax = b
A−1Ax = A−1b
Inx = A−1b
x = A−1b.

Of course, this depends on it being possible to find A−1. We discuss the conditions for the existence of A−1 in the following section. When A−1 exists, several different algorithms exist for finding it in closed form. In theory, the same inverse matrix can then be used to solve the equation many times for different values of b. However, A−1 is primarily useful as a theoretical tool, and should not actually be used in practice for most software applications. Because A−1 can only be represented with limited precision on a digital computer, algorithms that make use of the value of b can usually obtain more accurate estimates of x.
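The following numpy sketch (an illustration with an arbitrary invertible matrix, not part of the original text) shows both routes: forming the inverse explicitly, and calling a solver that uses b directly, which is what software normally does.

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])

A_inv = np.linalg.inv(A)       # explicit inverse: mainly a theoretical tool
x1 = A_inv @ b
x2 = np.linalg.solve(A, b)     # solves Ax = b directly, usually more accurate

print(x1, x2)                  # both satisfy A x = b up to rounding error
print(np.allclose(A @ x2, b))
```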
2.4 Linear Dependence, Span, and Rank

In order for A−1 to exist, equation 2.1 must have exactly one solution for every value of b. However, it is also possible for the system of equations to have no solutions or infinitely many solutions for some values of b. It is not possible to have more than one but less than infinitely many solutions for a particular b; if both x and y are solutions then z = αx + (1 − α)y is also a solution for any real α.

To analyze how many solutions the equation has, we can think of the columns of A as specifying different directions we can travel from the origin (the point specified by the vector of all zeros), and determine how many ways there are of reaching b. In this view, each element of x specifies how far we should travel in each of these directions, i.e. xi specifies how far to move in the direction of column i:

Ax = Σi xi A:,i.
In general, this kind of operation is called a linear combination. Formally, a linear combination of some set of vectors {v(1), . . . , v(n)} is given by multiplying each vector v(i) by a corresponding scalar coefficient and adding the results:

Σi ci v(i).
The span of a set of vectors is the set of all points obtainable by linear combination of the original vectors. Determining whether Ax = b has a solution thus amounts to testing whether b is in the span of the columns of A. This particular span is known as the column space or the range of A.

In order for the system Ax = b to have a solution for all values of b ∈ Rm, we therefore require that the column space of A be all of Rm. If any point in Rm is excluded from the column space, that point is a potential value of b that has no solution. This implies immediately that A must have at least m columns, i.e., n ≥ m. Otherwise, the dimensionality of the column space must be less than m. For example, consider a 3 × 2 matrix. The target b is 3-D, but x is only 2-D, so modifying the value of x at best allows us to trace out a 2-D plane within R3. The equation has a solution if and only if b lies on that plane.

Having n ≥ m is only a necessary condition for every point to have a solution. It is not a sufficient condition, because it is possible for some of the columns to be redundant. Consider a 2 × 2 matrix where both of the columns are equal to each other. This has the same column space as a 2 × 1 matrix containing only one copy of the replicated column. In other words, the column space is still just a line, and fails to encompass all of R2, even though there are two columns. Formally, this kind of redundancy is known as linear dependence. A set of vectors is linearly independent if no vector in the set is a linear combination of the other vectors. If we add a vector to a set that is a linear combination of the other vectors in the set, the new vector does not add any points to the set’s span. This means that for the column space of the matrix to encompass all of Rm, the matrix must have at least m linearly independent columns. This condition is both necessary and sufficient for equation 2.1 to have a solution for every value of b.

In order for the matrix to have an inverse, we additionally need to ensure that equation 2.1 has at most one solution for each value of b. To do so, we need to ensure that the matrix has at most m columns. Otherwise there is more than one way of parametrizing each solution. Together, this means that the matrix must be square, that is, we require that m = n, and that all of the columns must be linearly independent. A square matrix with linearly dependent columns is known as singular.
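Linear dependence is easy to check numerically. In the hypothetical numpy example below (arbitrary values, not from the original text), the second matrix has a repeated column, so its rank is only 1 and it cannot be inverted.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0]])     # two linearly independent columns
B = np.array([[1.0, 1.0],
              [2.0, 2.0]])     # second column duplicates the first

print(np.linalg.matrix_rank(A))   # 2: the columns span all of R^2
print(np.linalg.matrix_rank(B))   # 1: the column space is only a line

# A square matrix with linearly dependent columns is singular,
# so attempting to invert it fails.
try:
    np.linalg.inv(B)
except np.linalg.LinAlgError as e:
    print("singular matrix:", e)
```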
If A is not square or is square but singular, it can still be possible to solve the equation. However, we can not use the method of matrix inversion to find the solution. So far we have discussed matrix inverses as being multiplied on the left. It is also possible to define an inverse that is multiplied on the right: AA−1 = I. For square matrices, the left inverse and right inverse are the same.
2.5 Norms

Sometimes we need to measure the size of a vector. In machine learning, we usually measure the size of vectors using an Lp norm:

$$\|x\|_p = \left( \sum_i |x_i|^p \right)^{1/p}$$

for p ∈ R, p ≥ 1. Norms, including the Lp norm, are functions mapping vectors to non-negative values, satisfying these properties that make them behave like distances between points:

• f(x) = 0 ⇒ x = 0
• f(x + y) ≤ f(x) + f(y) (the triangle inequality)
• ∀α ∈ R, f(αx) = |α|f(x)

The L2 norm, with p = 2, is known as the Euclidean norm. It is simply the Euclidean distance from the origin to the point identified by x. This is probably the most common norm used in machine learning. It is also common to measure the size of a vector using the squared L2 norm, which can be calculated simply as x⊤x.

The squared L2 norm is more convenient to work with mathematically and computationally than the L2 norm itself. For example, the derivatives of the squared L2 norm with respect to each element of x each depend only on the corresponding element of x, while all of the derivatives of the L2 norm depend on the entire vector. In many contexts, the squared L2 norm may be undesirable because it increases very slowly near the origin. In several machine learning applications, it is important to discriminate between elements that are exactly zero and elements that are small but nonzero. In these cases, we turn to a
function that grows at the same rate in all locations, but retains mathematical simplicity: the L1 norm. The L1 norm may be simplified to

||x||1 = Σi |xi|.
The L1 norm is commonly used in machine learning when the difference between zero and nonzero elements is very important. Every time an element of x moves away from 0 by ε, the L1 norm increases by ε. We sometimes measure the size of the vector by counting its number of nonzero elements (and when we use the L1 norm, we often use it as a proxy for this function). Some authors refer to this function as the “L0 norm,” but this is incorrect terminology, because scaling the vector by α does not change the number of nonzero entries. One other norm that commonly arises in machine learning is the L∞ norm, also known as the max norm. This norm simplifies to

||x||∞ = maxi |xi|,
i.e., the absolute value of the element with the largest magnitude in the vector.

Sometimes we may also wish to measure the size of a matrix. In the context of deep learning, the most common way to do this is with the otherwise obscure Frobenius norm

$$\|A\|_F = \sqrt{\sum_{i,j} A_{i,j}^2},$$
which is analogous to the L2 norm of a vector.

The dot product of two vectors can be rewritten in terms of norms. Specifically,

x⊤y = ||x||2 ||y||2 cos θ

where θ is the angle between x and y.
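Each of these norms is a one-liner in numpy. The sketch below (an illustration with arbitrary example values, not from the original text) computes the L1, L2, squared L2, and max norms of a vector and the Frobenius norm of a matrix.

```python
import numpy as np

x = np.array([1.0, -2.0, 3.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

l1   = np.sum(np.abs(x))            # ||x||_1
l2   = np.sqrt(np.sum(x ** 2))      # ||x||_2, same as np.linalg.norm(x)
sq2  = x @ x                        # squared L2 norm, x^T x
linf = np.max(np.abs(x))            # ||x||_inf
fro  = np.sqrt(np.sum(A ** 2))      # Frobenius norm, same as np.linalg.norm(A)

print(l1, l2, sq2, linf, fro)
print(np.linalg.norm(x, 1), np.linalg.norm(x), np.linalg.norm(x, np.inf))
```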
2.6 Special Kinds of Matrices and Vectors

Some special kinds of matrices and vectors are particularly useful.

Diagonal matrices only have non-zero entries along the main diagonal. Formally, a matrix D is diagonal if and only if di,j = 0 for all i ≠ j. We’ve already seen one example of a diagonal matrix: the identity matrix, where all of the diagonal entries are 1. In this book², we write diag(v) to denote a square diagonal matrix whose diagonal entries are given by the entries of the vector v.

² There is not a standardized notation for constructing a diagonal matrix from a vector.
Diagonal matrices are of interest in part because multiplying by a diagonal matrix is very computationally efficient. To compute diag(v)x, we only need to scale each element xi by vi. In other words, diag(v)x = v ⊙ x. Inverting a square diagonal matrix is also efficient. The inverse exists only if every diagonal entry is nonzero, and in that case, diag(v)−1 = diag([1/v1, . . . , 1/vn]⊤). In many cases, we may derive some very general machine learning algorithm in terms of arbitrary matrices, but obtain a less expensive (and less descriptive) algorithm by restricting some matrices to be diagonal. Note that not all diagonal matrices need be square. It is possible to construct a rectangular diagonal matrix. Non-square diagonal matrices do not have inverses but it is still possible to multiply by them cheaply. For a non-square diagonal matrix D, the product Dx will involve scaling each element of x, and either concatenating some zeros to the result if D is taller than it is wide, or discarding some of the last elements of the vector if D is wider than it is tall.

A symmetric matrix is any matrix that is equal to its own transpose: A = A⊤. Symmetric matrices often arise when the entries are generated by some function of two arguments that does not depend on the order of the arguments. For example, if A is a matrix of distance measurements, with ai,j giving the distance from point i to point j, then ai,j = aj,i because distance functions are symmetric.

A unit vector is a vector with unit norm: ||x||2 = 1. A vector x and a vector y are orthogonal to each other if x⊤y = 0. If both vectors have nonzero norm, this means that they are at 90 degree angles to each other. In Rn, at most n vectors may be mutually orthogonal with nonzero norm. If the vectors are not only orthogonal but also have unit norm, we call them orthonormal.

An orthogonal matrix is a square matrix whose rows are mutually orthonormal and whose columns are mutually orthonormal: A⊤A = AA⊤ = I. This implies that A−1 = A⊤, so orthogonal matrices are of interest because their inverse is very cheap to compute. Pay careful attention to the definition of orthogonal matrices. Counterintuitively, their rows are not merely orthogonal but fully orthonormal. There
is no special term for a matrix whose rows or columns are orthogonal but not orthonormal.
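A small numpy sketch of these special matrices (arbitrary example values, not from the original text): a diagonal matrix built with diag, a symmetric matrix, and a 2-D rotation as an instance of an orthogonal matrix.

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])
D = np.diag(v)                        # square diagonal matrix diag(v)
x = np.array([4.0, 5.0, 6.0])
print(D @ x, v * x)                   # diag(v) x equals the element-wise product

S = np.array([[1.0, 7.0],
              [7.0, 2.0]])
print(np.allclose(S, S.T))            # symmetric: A = A^T

theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # 2-D rotation matrix
print(np.allclose(Q.T @ Q, np.eye(2)))            # orthogonal: Q^T Q = I
print(np.allclose(np.linalg.inv(Q), Q.T))         # its inverse is just its transpose
```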
2.7 Eigendecomposition

Many mathematical objects can be understood better by breaking them into constituent parts, or finding some properties of them that are universal, not caused by the way we choose to represent them. For example, integers can be decomposed into prime factors. The way we represent the number 12 will change depending on whether we write it in base ten or in binary, but it will always be true that 12 = 2 × 2 × 3. From this representation we can conclude useful properties, such as that 12 is not divisible by 5, or that any integer multiple of 12 will be divisible by 3. Much as we can discover something about the true nature of an integer by decomposing it into prime factors, we can also decompose matrices in ways that show us information about their functional properties that is not obvious from the representation of the matrix as an array of elements.

One of the most widely used kinds of matrix decomposition is called eigendecomposition, in which we decompose a matrix into a set of eigenvectors and eigenvalues. An eigenvector of a square matrix A is a non-zero vector v such that multiplication by A alters only the scale of v:

Av = λv.

The scalar λ is known as the eigenvalue corresponding to this eigenvector. (One can also find a left eigenvector such that v⊤A = λv⊤, but we are usually concerned with right eigenvectors). Note that if v is an eigenvector of A, then so is any rescaled vector sv for s ∈ R, s ≠ 0. Moreover, sv still has the same eigenvalue. For this reason, we usually only look for unit eigenvectors.

We can represent the matrix A using an eigendecomposition, with eigenvectors {v(1), . . . , v(n)} and corresponding eigenvalues {λ1, . . . , λn} by concatenating the eigenvectors into a matrix V = [v(1), . . . , v(n)] (i.e. one column per eigenvector) and concatenating the eigenvalues into a vector λ. Then the matrix

A = V diag(λ)V −1

has the desired eigenvalues and eigenvectors. If we make V an orthogonal matrix, then we can think of A as scaling space by λi in direction v(i). See Fig. 2.3 for an example.
Figure 2.3: An example of the effect of eigenvectors and eigenvalues. Here, we have a matrix A with two orthonormal eigenvectors, v (1) with eigenvalue λ1 and v(2) with eigenvalue λ2 . Left) We plot the set of all unit vectors u ∈ R 2 as a unit circle. Right) We plot the set of all points Au. By observing the way that A distorts the unit circle, we can see that it scales space in direction v(i) by λ i.
We have seen that constructing matrices with specific eigenvalues and eigenvectors allows us to stretch space in desired directions. However, we often want to decompose matrices into their eigenvalues and eigenvectors. Doing so can help us to analyze certain properties of the matrix, much as decomposing an integer into its prime factors can help us understand the behavior of that integer.

Not every matrix can be decomposed into eigenvalues and eigenvectors. In some cases, the decomposition exists, but may involve complex rather than real numbers. Fortunately, in this book, we usually need to decompose only a specific class of matrices that have a simple decomposition. Specifically, every real symmetric matrix can be decomposed into an expression using only real-valued eigenvectors and eigenvalues:

A = QΛQ⊤,

where Q is an orthogonal matrix composed of eigenvectors of A, and Λ is a diagonal matrix, with Λi,i being the eigenvalue corresponding to Q:,i.

While any real symmetric matrix A is guaranteed to have an eigendecomposition, the eigendecomposition is not unique. If any two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently choose a Q using those eigenvectors instead. By convention, we usually sort the entries of Λ in descending order. Under this convention, the eigendecomposition is unique only if all of the eigenvalues are unique.

The eigendecomposition of a matrix tells us many useful facts about the matrix. The matrix is singular if and only if any of the eigenvalues are 0. The eigendecomposition can also be used to optimize quadratic expressions of the form f(x) = x⊤Ax subject to ||x||2 = 1. Whenever x is equal to an eigenvector of A, f takes on the value of the corresponding eigenvalue. The maximum value of f within the constraint region is the maximum eigenvalue and its minimum value within the constraint region is the minimum eigenvalue.

A matrix whose eigenvalues are all positive is called positive definite. A matrix whose eigenvalues are all positive or zero-valued is called positive semidefinite. Likewise, if all eigenvalues are negative, the matrix is negative definite, and if all eigenvalues are negative or zero-valued, it is negative semidefinite. Positive semidefinite matrices are interesting because they guarantee that ∀x, x⊤Ax ≥ 0. Positive definite matrices additionally guarantee that x⊤Ax = 0 ⇒ x = 0.
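For real symmetric matrices, this decomposition is available in numpy as `eigh`, which returns real eigenvalues and orthonormal eigenvectors. A short sketch with an arbitrary symmetric matrix (not from the original text):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])            # a real symmetric matrix

eigvals, Q = np.linalg.eigh(A)        # eigenvalues (ascending) and orthonormal eigenvectors
Lam = np.diag(eigvals)

print(np.allclose(A, Q @ Lam @ Q.T))  # A = Q Lambda Q^T
print(np.allclose(Q.T @ Q, np.eye(2)))

# Each column of Q is an eigenvector: A v = lambda v
v = Q[:, 1]
print(np.allclose(A @ v, eigvals[1] * v))
```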
2.8 Singular Value Decomposition

In Sec. 2.7, we saw how to decompose a matrix into eigenvectors and eigenvalues. The singular value decomposition (SVD) provides another way to factorize a matrix, into singular vectors and singular values. The SVD allows us to discover
some of the same kind of information as the eigendecomposition. However, the SVD is more generally applicable. Every real matrix has a singular value decomposition, but the same is not true of the eigenvalue decomposition. For example, if a matrix is not square, the eigendecomposition is not defined, and we must use a singular value decomposition instead.

Recall that the eigendecomposition involves analyzing a matrix A to discover a matrix V of eigenvectors and a vector of eigenvalues λ such that we can rewrite A as A = V diag(λ)V −1. The singular value decomposition is similar, except this time we will write A as a product of three matrices:

A = UDV⊤.

Suppose that A is an m × n matrix. Then U is defined to be an m × m matrix, D to be an m × n matrix, and V to be an n × n matrix. Each of these matrices is defined to have a special structure. The matrices U and V are both defined to be orthogonal matrices. The matrix D is defined to be a diagonal matrix. Note that D is not necessarily square.

The elements along the diagonal of D are known as the singular values of the matrix A. The columns of U are known as the left-singular vectors. The columns of V are known as the right-singular vectors.

We can actually interpret the singular value decomposition of A in terms of the eigendecomposition of functions of A. The left-singular vectors of A are the eigenvectors of AA⊤. The right-singular vectors of A are the eigenvectors of A⊤A. The non-zero singular values of A are the square roots of the eigenvalues of A⊤A. The same is true for AA⊤.

Perhaps the most useful feature of the SVD is that we can use it to partially generalize matrix inversion to non-square matrices, as we will see in the next section.
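The relationship between the SVD and the eigendecomposition of A⊤A can be checked numerically. The sketch below (arbitrary non-square matrix, not from the original text) does so; note that numpy's `svd` returns V⊤ rather than V.

```python
import numpy as np

A = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])       # a 2 x 3 matrix, so it has no eigendecomposition

U, s, Vt = np.linalg.svd(A)           # A = U D V^T, with the singular values in s
D = np.zeros_like(A)
D[:len(s), :len(s)] = np.diag(s)

print(np.allclose(A, U @ D @ Vt))

# Non-zero singular values are square roots of eigenvalues of A^T A.
eigvals = np.linalg.eigvalsh(A.T @ A)     # ascending order
print(np.sort(s ** 2), eigvals[-len(s):])
```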
2.9 The Moore-Penrose Pseudoinverse

Matrix inversion is not defined for matrices that are not square. Suppose we want to make a left-inverse B of a matrix A, so that we can solve a linear equation

Ax = y

by left-multiplying each side to obtain

x = By.
Depending on the structure of the problem, it may not be possible to design a unique mapping from A to B. If A is taller than it is wide, then it is possible for this equation to have no solution. If A is wider than it is tall, then there could be multiple possible solutions. The Moore-Penrose pseudoinverse allows us to make some headway in these cases. The pseudoinverse of A is defined as a matrix

$$A^{+} = \lim_{\alpha \searrow 0} \left( A^{\top}A + \alpha I \right)^{-1} A^{\top}.$$
Practical algorithms for computing the pseudoinverse are not based on this definition, but rather on the formula

A+ = V D+U⊤,

where U, D, and V are the singular value decomposition of A, and the pseudoinverse D+ of a diagonal matrix D is obtained by taking the reciprocal of its non-zero elements then taking the transpose of the resulting matrix.

When A has more columns than rows, then solving a linear equation using the pseudoinverse provides one of the many possible solutions. Specifically, it provides the solution x = A+y with minimal Euclidean norm ||x||2 among all possible solutions. When A has more rows than columns, it is possible for there to be no solution. In this case, using the pseudoinverse gives us the x for which Ax is as close as possible to y in terms of Euclidean norm ||Ax − y||2.
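numpy exposes the pseudoinverse directly as `pinv`. The sketch below (an illustration with an arbitrary wide matrix, not from the original text) checks that the pseudoinverse solution really solves Ax = y and that it matches the minimum-norm solution returned by `lstsq`.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])       # wider than tall: infinitely many solutions
y = np.array([1.0, 1.0])

x = np.linalg.pinv(A) @ y             # minimum-norm solution among all solutions
print(np.allclose(A @ x, y))

x_lstsq, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x, x_lstsq))        # lstsq returns the same minimum-norm solution
```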
2.10 The Trace Operator

The trace operator gives the sum of all of the diagonal entries of a matrix:

Tr(A) = Σi Ai,i.
The trace operator is useful for a variety of reasons. Some operations that are difficult to specify without resorting to summation notation can be specified using matrix products and the trace operator. For example, the trace operator provides an alternative way of writing the Frobenius norm of a matrix:

$$\|A\|_F = \sqrt{\mathrm{Tr}(A^{\top}A)}.$$

The trace operator also has many useful properties that make it easy to manipulate expressions involving the trace operator. For example, the trace operator is invariant to the transpose operator:

Tr(A) = Tr(A⊤).
The trace of a square matrix composed of many factors is also invariant to moving the last factor into the first position:

Tr(ABC) = Tr(CAB) = Tr(BCA)

or more generally,

$$\mathrm{Tr}\left( \prod_{i=1}^{n} F^{(i)} \right) = \mathrm{Tr}\left( F^{(n)} \prod_{i=1}^{n-1} F^{(i)} \right).$$

Another useful fact to keep in mind is that a scalar is its own trace, i.e. a = Tr(a). This can be useful when wishing to manipulate inner products. Let a and b be two column vectors in Rn:

a⊤b = Tr(a⊤b) = Tr(ba⊤).
2.11 Determinant
The determinant of a square matrix, denoted det(A), is a function mapping matrices to real scalars. The determinant is equal to the product of all the matrix’s eigenvalues. The absolute value of the determinant can be thought of as a measure of how much multiplication by the matrix expands or contracts space. If the determinant is 0, then space is contracted completely along at least one dimension, causing it to lose all of its volume. If the determinant is 1, then the transformation is volume-preserving.
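These trace identities and the eigenvalue characterization of the determinant are easy to verify numerically, as in the following sketch (random matrices used purely as arbitrary examples; not from the original text).

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 4, 4))     # three arbitrary 4x4 matrices

print(np.isclose(np.trace(A), np.trace(A.T)))                    # Tr(A) = Tr(A^T)
print(np.isclose(np.trace(A @ B @ C), np.trace(C @ A @ B)))      # cyclic property
print(np.isclose(np.linalg.norm(A, 'fro'),
                 np.sqrt(np.trace(A.T @ A))))                    # Frobenius norm via trace

eigvals = np.linalg.eigvals(A)
print(np.isclose(np.linalg.det(A), np.prod(eigvals).real))       # det = product of eigenvalues
```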
2.12 Example: Principal Components Analysis

One simple machine learning algorithm, principal components analysis (PCA), can be derived using only knowledge of basic linear algebra.

Suppose we have a collection of m points {x(1), . . . , x(m)} in Rn. Suppose we would like to apply lossy compression to these points, i.e. we would like to find a way of storing the points that requires less memory but may lose some precision. We would like to lose as little precision as possible.

One way we can encode these points is to represent a lower-dimensional version of them. For each point x(i) ∈ Rn we will find a corresponding code vector c(i) ∈ Rl. If l is smaller than n, it will take less memory to store the code points than the original data. We can use matrix multiplication to map the code back into Rn. Let the reconstruction r(c) = Dc, where D ∈ Rn×l is the matrix defining the decoding. To simplify the computation of the optimal encoding, we constrain the columns of D to be orthogonal to each other. (Note that D is still not technically “an orthogonal matrix” unless l = n.)
With the problem as described so far, many solutions are possible, because we can increase the scale of D:,i if we decrease ci proportionally for all points. To give the problem a unique solution, we constrain all of the columns of D to have unit norm.

In order to turn this basic idea into an algorithm we can implement, the first thing we need to do is figure out how to generate the optimal code point c∗ for each input point x. One way to do this is to minimize the distance between the input point x and its reconstruction, r(c). We can measure this distance using a norm. In the principal components algorithm, we use the L2 norm:

$$c^* = \arg\min_c \|x - r(c)\|_2.$$
We can switch to the squared L2 norm instead of the L2 norm itself, because both are minimized by the same value of c. This is because the L2 norm is nonnegative and the squaring operation is monotonically increasing for non-negative arguments:

$$c^* = \arg\min_c \|x - r(c)\|_2^2.$$
The function being minimized simplifies to

$$(x - r(c))^{\top}(x - r(c))$$

(by the definition of the L2 norm)

$$= x^{\top}x - x^{\top}r(c) - r(c)^{\top}x + r(c)^{\top}r(c)$$

(by the distributive property)

$$= x^{\top}x - 2x^{\top}r(c) + r(c)^{\top}r(c)$$

(because a scalar is equal to the transpose of itself).

We can now change the function being minimized again, to omit the first term, since this term does not depend on c:

$$c^* = \arg\min_c -2x^{\top}r(c) + r(c)^{\top}r(c).$$
To make further progress, we must substitute in the definition of r(c):

$$c^* = \arg\min_c -2x^{\top}Dc + c^{\top}D^{\top}Dc$$

$$= \arg\min_c -2x^{\top}Dc + c^{\top}I_l c$$

(by the orthogonality and unit norm constraints on D)

$$= \arg\min_c -2x^{\top}Dc + c^{\top}c.$$
We can solve this optimization problem using vector calculus (see section 4.3 if you do not know how to do this):

$$\nabla_c \left( -2x^{\top}Dc + c^{\top}c \right) = 0$$

$$-2D^{\top}x + 2c = 0$$

$$c = D^{\top}x. \qquad (2.2)$$
This is good news: we can optimally encode x just using a matrix-vector product operation.

Next, we need to choose the encoding matrix D. To do so, we revisit the idea of minimizing the L2 distance between inputs and reconstructions. However, since we will use the same matrix D to decode all of the points, we can no longer consider the points in isolation. Instead, we must minimize the Frobenius norm of the matrix of errors computed over all dimensions and all points:

$$D^* = \arg\min_D \sqrt{\sum_{i,j} \left( x^{(i)}_j - r^{(i)}_j \right)^2} \quad \text{subject to } D^{\top}D = I_l \qquad (2.3)$$
To derive the algorithm for finding D∗, we will start by considering the case where l = 1. In this case, D is just a single vector, d. Substituting equation 2.2 into equation 2.3 and simplifying D into d, the problem reduces to

$$d^* = \arg\min_d \sum_i \left\| x^{(i)} - x^{(i)\top} d \, d \right\|_2^2 \quad \text{subject to } \|d\|_2 = 1.$$
At this point, it can be helpful to rewrite the problem in terms of matrices. This will allow us to use more compact notation. Let X ∈ Rm×n be the matrix defined by stacking all of the vectors describing the points, such that Xi,: = x(i)⊤. We can now rewrite the problem as

$$d^* = \arg\min_d \|X - Xdd^{\top}\|_F^2 \quad \text{subject to } d^{\top}d = 1.$$
Disregarding the constraint for the moment, we can simplify the Frobenius norm portion as follows:

$$\arg\min_d \|X - Xdd^{\top}\|_F^2$$

$$= \arg\min_d \mathrm{Tr}\left( (X - Xdd^{\top})^{\top} (X - Xdd^{\top}) \right)$$

(by the alternate definition of the Frobenius norm)

$$= \arg\min_d \mathrm{Tr}\left( X^{\top}X - X^{\top}Xdd^{\top} - dd^{\top}X^{\top}X + dd^{\top}X^{\top}Xdd^{\top} \right)$$

$$= \arg\min_d \mathrm{Tr}(X^{\top}X) - \mathrm{Tr}(X^{\top}Xdd^{\top}) - \mathrm{Tr}(dd^{\top}X^{\top}X) + \mathrm{Tr}(dd^{\top}X^{\top}Xdd^{\top})$$

$$= \arg\min_d -\mathrm{Tr}(X^{\top}Xdd^{\top}) - \mathrm{Tr}(dd^{\top}X^{\top}X) + \mathrm{Tr}(dd^{\top}X^{\top}Xdd^{\top})$$

(because terms not involving d do not affect the arg min)

$$= \arg\min_d -2\,\mathrm{Tr}(X^{\top}Xdd^{\top}) + \mathrm{Tr}(dd^{\top}X^{\top}Xdd^{\top})$$

(because the trace is invariant to transpose)

$$= \arg\min_d -2\,\mathrm{Tr}(X^{\top}Xdd^{\top}) + \mathrm{Tr}(X^{\top}Xdd^{\top}dd^{\top})$$

(because we can cycle the order of the matrices inside a trace)

At this point, we re-introduce the constraint:

$$\arg\min_d -2\,\mathrm{Tr}(X^{\top}Xdd^{\top}) + \mathrm{Tr}(X^{\top}Xdd^{\top}dd^{\top}) \quad \text{subject to } d^{\top}d = 1$$

$$= \arg\min_d -2\,\mathrm{Tr}(X^{\top}Xdd^{\top}) + \mathrm{Tr}(X^{\top}Xdd^{\top}) \quad \text{subject to } d^{\top}d = 1$$

(due to the constraint)

$$= \arg\min_d -\mathrm{Tr}(X^{\top}Xdd^{\top}) \quad \text{subject to } d^{\top}d = 1$$

$$= \arg\max_d \mathrm{Tr}(X^{\top}Xdd^{\top}) \quad \text{subject to } d^{\top}d = 1$$

$$= \arg\max_d \mathrm{Tr}(d^{\top}X^{\top}Xd) \quad \text{subject to } d^{\top}d = 1.$$
This optimization problem may be solved using eigendecomposition. Specifically, the optimal d is given by the eigenvector of X⊤X corresponding to the largest eigenvalue. In the general case, where l > 1, D is given by the l eigenvectors corresponding to the largest eigenvalues. This may be shown using proof by induction. We recommend writing this proof as an exercise.
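Putting the pieces together, a minimal PCA sketch in numpy (not from the original text): D is taken to be the l eigenvectors of X⊤X with the largest eigenvalues, the code is c = D⊤x, and the reconstruction is DD⊤x. This sketch assumes the data are used as-is, exactly as in the derivation; in practice the points are usually centered first.

```python
import numpy as np

def pca(X, l):
    """Return the n x l decoding matrix D with orthonormal columns."""
    eigvals, eigvecs = np.linalg.eigh(X.T @ X)   # eigenvalues in ascending order
    return eigvecs[:, -l:]                       # eigenvectors of the l largest eigenvalues

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))                # m = 100 points in R^5, one per row

D = pca(X, l=2)
codes = X @ D                                    # each row is a code c = D^T x
X_rec = codes @ D.T                              # reconstructions r(c) = D c

print(D.shape, codes.shape)
print(np.allclose(D.T @ D, np.eye(2)))           # columns of D are orthonormal
print(np.mean((X - X_rec) ** 2))                 # average reconstruction error
```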
Chapter 3
Probability and Information Theory

In this chapter, we describe probability theory. Probability theory is a mathematical framework for representing uncertain statements. It provides a means of quantifying uncertainty and axioms for deriving new uncertain statements. In artificial intelligence applications, we use probability theory in two major ways. First, the laws of probability tell us how AI systems should reason, so we design our algorithms to compute or approximate various expressions derived using probability theory. Second, we can use probability and statistics to theoretically analyze the behavior of proposed AI systems.

Probability theory is a fundamental tool of many disciplines of science and engineering. We provide this chapter to ensure that readers whose background is primarily in software engineering with limited exposure to probability theory can understand the material in this book. If you are already familiar with probability theory, feel free to skip this chapter. If you have absolutely no prior experience with probability, this chapter should be sufficient to successfully carry out deep learning research projects, but we do suggest that you consult an additional resource, such as (Jaynes, 2003).
3.1 Why Probability?

Many branches of computer science deal mostly with entities that are entirely deterministic and certain. A programmer can usually safely assume that a CPU will execute each machine instruction flawlessly. Errors in hardware do occur, but are rare enough that most software applications do not need to be designed to account for them. Given that many computer scientists and software engineers work in a relatively clean and certain environment, it can be surprising that
machine learning makes heavy use of probability theory. This is because machine learning must always deal with uncertain quantities, and sometimes may also need to deal with stochastic quantities. Uncertainty and stochasticity can arise from many sources. Researchers have made compelling arguments for quantifying uncertainty using probability since at least the 1980s. Many of the arguments presented here are summarized from or inspired by (Pearl, 1988). Much earlier work in probability and engineering introduced and developed the underlying fundamental notions, such as the notion of exchangeability (de Finetti, 1937), Cox’s theorem as the foundations of Bayesian inference (Cox, 1946), and the theory of stochastic processes (Doob, 1953). Nearly all activities require some ability to reason in the presence of uncertainty. In fact, beyond mathematical statements that are true by definition, it is difficult to think of any proposition that is absolutely true or any event that is absolutely guaranteed to occur. One source of uncertainty is incomplete observability. When we cannot observe something, we are uncertain about its true nature. In machine learning, it is often the case that we can observe a large amount of data, but there is not a data instance for every situation we care about. We are also generally not able to observe directly what process generates the data. Since we are uncertain about what process generates the data, we are also uncertain about what happens in the situations for which we have not observed data points. Lack of observability can also give rise to apparent stochasticity. Deterministic systems can appear stochastic when we cannot observe all of the variables that drive the behavior of the system. For example, consider a game of Russian roulette. The outcome is deterministic if you know which chamber of the revolver is loaded. If you do not know this important information, then it is a game of chance. In many cases, we are able to observe some quantity, but our measurement is itself uncertain. For example, laser range finders may have several centimeters of random error. Uncertainty can also arise from the simplifications we make in order to model real-world processes. For example, if we discretize space, then we immediately become uncertain about the precise position of objects: each object could be anywhere within the discrete cell that we know it occupies. Conceivably, the universe itself could have stochastic dynamics, but we make no claim on this subject. In many cases, it is more practical to use a simple but uncertain rule rather than a complex but certain one, even if our modeling system has the fidelity to accommodate a complex rule. For example, the simple rule “Most birds fly” is cheap to develop and is broadly useful, while a rule of the form, “Birds fly, except for very young birds that have not yet learned to fly, sick or injured birds that have lost the ability to fly, flightless species of birds including the cassowary, ostrich, 47
and kiwi. . . ” is expensive to develop, maintain, and communicate, and after all of this effort is still very brittle and prone to failure. Given that we need a means of representing and reasoning about uncertainty, it is not immediately obvious that probability theory can provide all of the tools we want for artificial intelligence applications. Probability theory was originally developed to analyze the frequencies of events. It is easy to see how probability theory can be used to study events like drawing a certain hand of cards in a game of poker. These kinds of events are often repeatable, and when we say that an outcome has a probability p of occurring, it means that if we repeated the experiment (e.g., draw a hand of cards) infinitely many times, then proportion p of the repetitions would result in that outcome. This kind of reasoning does not seem immediately applicable to propositions that are not repeatable. If a doctor analyzes a patient and says that the patient has a 40% chance of having the flu, this means something very different—we can not make infinitely many replicas of the patient, nor is there any reason to believe that different replicas of the patient would present with the same symptoms yet have varying underlying conditions. In the case of the doctor diagnosing the patient, we use probability to represent a degree of belief, with 1 indicating absolute certainty, and 0 indicating absolute uncertainty. The former kind of probability, related directly to the rates at which events occur, is known as frequentist probability, while the latter, related to qualitative levels of certainty, is known as Bayesian probability. It turns out that if we list several properties that we expect common sense reasoning about uncertainty to have, then the only way to satisfy those properties is to treat Bayesian probabilities as behaving exactly the same as frequentist probabilities. For example, if we want to compute the probability that a player will win a poker game given that she has a certain set of cards, we use exactly the same formulas as when we compute the probability that a patient has a disease given that she has certain symptoms. For more details about why a small set of common sense assumptions implies that the same axioms must control both kinds of probability, see (Ramsey, 1926). Probability can be seen as the extension of logic to deal with uncertainty. Logic provides a set of formal rules for determining what propositions are implied to be true or false given the assumption that some other set of propositions is true or false. Probability theory provides a set of formal rules for determining the likelihood of a proposition being true given the likelihood of other propositions.
3.2 Random Variables

A random variable is a variable that can take on different values randomly. We typically denote the random variable itself with a lower case letter in plain typeface,
and the values it can take on with lower case script letters. For example, x1 and x2 are both possible values that the random variable x can take on. For vector-valued variables, we would write the random variable as x and one of its values as x. On its own, a random variable is just a description of the states that are possible; it must be coupled with a probability distribution that specifies how likely each of these states are.

Random variables may be discrete or continuous. A discrete random variable is one that has a finite or countably infinite number of states. Note that these states are not necessarily the integers; they can also just be named states that are not considered to have any numerical value. A continuous random variable is associated with a real value.
3.3 Probability Distributions
A probability distribution is a description of how likely a random variable or set of random variables is to take on each of its possible states. The way we describe probability distributions depends on whether the variables are discrete or continuous.
3.3.1 Discrete Variables and Probability Mass Functions

A probability distribution over discrete variables may be described using a probability mass function (PMF). We typically denote probability mass functions with a capital P. Often we associate each random variable with a different probability mass function and the reader must infer which probability mass function to use based on the identity of the random variable, rather than the name of the function; P(x) is usually not the same as P(y).

The probability mass function maps from a state of a random variable to the probability of that random variable taking on that state. P(x) denotes the probability that x = x, with a probability of 1 indicating that x = x is certain and a probability of 0 indicating that x = x is impossible. Sometimes to disambiguate which PMF to use, we write the name of the random variable explicitly: P(x = x). Sometimes we define a variable first, then use ∼ notation to specify which distribution it follows later: x ∼ P(x).

Probability mass functions can act on many variables at the same time. Such a probability distribution over many variables is known as a joint probability distribution. P(x = x, y = y) denotes the probability that x = x and y = y simultaneously. We may also write P(x, y) for brevity.

To be a probability mass function on a set of random variables x, a function f must meet the following properties:
• The domain of f must be the set of all possible states of x.
• The range of f must be a subset of the real interval [0, 1]. (No state can be more likely than a guaranteed event or less likely than an impossible event.)
• Σx∈x f(x) = 1. (f must guarantee that some state occurs.)
For example, consider a single discrete random variable x with k different states. We can place a uniform distribution on x — that is, make each of its states equally likely — by setting its probability mass function to

P(x = xi) = 1/k

for all i. We can see that this fits the requirements for a probability mass function. The value 1/k is positive because k is a positive integer. We also see that

Σi P(x = xi) = Σi 1/k = k/k = 1,

so the distribution is properly normalized.
3.3.2 Continuous Variables and Probability Density Functions

When working with continuous random variables, we describe probability distributions using a probability density function (PDF) rather than a probability mass function. A probability density function must satisfy the following properties:

• It must map from the domain of the random variable whose distribution it describes to the real numbers.
• ∀x, p(x) ≥ 0. Note that we do not require p(x) ≤ 1.
• ∫ p(x)dx = 1.
A probability density function does not give the probability of a specific state directly; instead, the probability of landing inside an infinitesimal region with volume δx is given by p(x)δx. We can integrate the density function to find the actual probability mass of a set of points. Specifically, the probability that x lies in some set S is given by the integral of p(x) over that set. In the univariate example, the probability that x lies in the interval [a, b] is given by ∫[a,b] p(x)dx.

For an example of a probability density function corresponding to a specific probability density over a continuous random variable, consider a uniform distribution on an interval of the real numbers. We can do this with a function u(x; a, b), where a and b are the endpoints of the interval, with b > a. (The “;” notation means “parametrized by”; we consider x to be the argument of the function, while a and b are parameters that define the function.) To ensure that
there is no probability mass outside the interval, we say u(x; a, b) = 0 for all x ∉ [a, b]. Within [a, b], u(x; a, b) = 1/(b − a). We can see that this is nonnegative everywhere. Additionally, it integrates to 1. We often denote that x follows the uniform distribution on [a, b] by writing x ∼ U(a, b).
3.4 Marginal Probability

Sometimes we know the probability distribution over a set of variables, and we want to know the probability distribution over just a subset of them. The probability distribution over the subset is known as the marginal probability. For example, suppose we have discrete random variables x and y, and we know P(x, y). We can find P(x) with the sum rule:

∀x ∈ x, P(x = x) = Σy P(x = x, y = y).
The name “marginal probability” comes from the process of computing marginal probabilities on paper. When the values of P(x, y) are written in a grid with different values of x in rows and different values of y in columns, it is natural to sum across a row of the grid, then write P(x) in the margin of the paper just to the right of the row. For continuous variables, we need to use integration instead of summation:

p(x) = ∫ p(x, y)dy.
3.5 Conditional Probability

In many cases, we are interested in the probability of some event, given that some other event has happened. This is called a conditional probability. We denote the conditional probability that y = y given x = x as P(y = y | x = x). This conditional probability can be computed with the formula

P(y = y | x = x) = P(y = y, x = x) / P(x = x).

Note that this is only defined when P(x = x) > 0. We cannot compute the conditional probability conditioned on an event that never happens.

It is important not to confuse conditional probability with computing what would happen if some action were undertaken. The conditional probability that a person is from Germany given that they speak German is quite high, but if a randomly selected person is taught to speak German, their country of origin does not change. Computing the consequences of an action is called making an intervention query. Intervention queries are the domain of causal modeling, which we do not explore in this book.
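For discrete variables, the marginal and conditional definitions above amount to simple operations on a table of joint probabilities. The numpy sketch below uses a small made-up joint distribution over two binary variables (not taken from the original text).

```python
import numpy as np

# Hypothetical joint PMF P(x, y): rows index values of x, columns index values of y.
P_xy = np.array([[0.1, 0.3],
                 [0.2, 0.4]])
assert np.isclose(P_xy.sum(), 1.0)

P_x = P_xy.sum(axis=1)          # marginal: P(x) = sum_y P(x, y)
P_y = P_xy.sum(axis=0)          # marginal: P(y)

# Conditional: P(y | x) = P(x, y) / P(x), defined only where P(x) > 0.
P_y_given_x = P_xy / P_x[:, None]

print(P_x, P_y)
print(P_y_given_x)              # each row is a distribution over y
print(P_y_given_x.sum(axis=1))  # each row sums to 1
```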
3.6 The Chain Rule of Conditional Probabilities

Any joint probability distribution over many random variables may be decomposed into conditional distributions over only one variable:

$$P(x^{(1)}, \ldots, x^{(n)}) = P(x^{(1)}) \prod_{i=2}^{n} P(x^{(i)} \mid x^{(1)}, \ldots, x^{(i-1)}).$$

This observation is known as the chain rule or product rule of probability. It follows immediately from the definition of conditional probability. For example, applying the definition twice, we get

P(a, b, c) = P(a | b, c)P(b, c)
P(b, c) = P(b | c)P(c)
P(a, b, c) = P(a | b, c)P(b | c)P(c).

Note how every statement about probabilities remains true if we add conditions (stuff on the right-hand side of the vertical bar) consistently on all the “P”’s in the statement. We can use this to derive the same thing differently:

P(a, b | c) = P(a | b, c)P(b | c)
P(a, b, c) = P(a, b | c)P(c) = P(a | b, c)P(b | c)P(c).
3.7 Independence and Conditional Independence
Two random variables x and y are independent if their probability distribution can be expressed as a product of two factors, one involving only x and one involving only y: ∀x ∈ x, y ∈ y, p(x = x, y = y) = p(x = x)p(y = y). Two random variables x and y are conditionally independent given a random variable z if the conditional probability distribution over x and y factorizes in this way for every value of z: ∀x ∈ x, y ∈ y, z ∈ z, p(x = x, y = y | z = z) = p(x = x | z = z)p(y = y | z = z). We can denote independence and conditional independence with compact notation: x⊥y means that x and y are independent, while x⊥y | z means that x and y are conditionally independent given z.
3.8 Expectation, Variance, and Covariance

The expectation or expected value of some function f(x) with respect to a probability distribution P(x) is the average or mean value that f takes on when x is drawn from P. For discrete variables this can be computed with a summation:

Ex∼P [f(x)] = Σx P(x)f(x),

while for continuous variables, it is computed with an integral:

Ex∼P [f(x)] = ∫ p(x)f(x)dx.

When the identity of the distribution is clear from the context, we may simply write the name of the random variable that the expectation is over, e.g. Ex[f(x)]. If it is clear which random variable the expectation is over, we may omit the subscript entirely, e.g. E[f(x)]. By default, we can assume that E[·] averages over the values of all the random variables inside the brackets. Likewise, when there is no ambiguity, we may omit the square brackets.

Expectations are linear, for example, E[αf(x) + βg(x)] = αE[f(x)] + βE[g(x)], when α and β are fixed (not random and not depending on x).

The variance gives a measure of how much the different values of a function are spread apart:

Var(f(x)) = E[(f(x) − E[f(x)])²].

When the variance is low, the values of f(x) cluster near their expected value. The square root of the variance is known as the standard deviation.

The covariance gives some sense of how much two values are linearly related to each other, as well as the scale of these variables:

Cov(f(x), g(y)) = E[(f(x) − E[f(x)])(g(y) − E[g(y)])].

High absolute values of the covariance mean that the values change a lot and are both far from their respective means at the same time. If the sign of the covariance is positive, then the values tend to change in the same direction, while if it is negative, they tend to change in opposite directions. Other measures such as correlation normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables.

The notions of covariance and dependence are conceptually related, but are in fact distinct concepts. Two random variables that have non-zero covariance are dependent. However, they may have zero covariance without being independent.
For example, suppose we first generate x, then generate s ∈ {−1, 1} with each state having probability 0.5, then generate y as s(x − E[x]). Clearly, x and y are not independent, because y only has two possible values given x. However, Cov(x, y) = 0.

The covariance matrix of a random vector x ∈ Rn is an n × n matrix, such that

Cov(x)i,j = Cov(xi, xj).

Note that the diagonal elements give Cov(xi, xi) = Var(xi).
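This distinction can be seen in simulation. The sketch below (not from the original text) implements the construction just described — y = s(x − E[x]) with a random sign s — drawing x from an arbitrary uniform distribution, and estimates the covariance from samples.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

x = rng.uniform(-1.0, 1.0, size=n)            # arbitrary choice of distribution for x
s = rng.choice([-1.0, 1.0], size=n)           # random sign, independent of x
y = s * (x - x.mean())                        # y is completely determined by x and s

print(np.cov(x, y)[0, 1])                     # sample covariance: very close to 0
print(np.corrcoef(np.abs(x), np.abs(y))[0, 1])  # but |x| and |y| are almost perfectly
                                                # correlated, so x and y are not independent
```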
3.9 Information Theory

Information theory is a branch of applied mathematics that revolves around quantifying how much information is present in a signal. It was originally invented to study sending messages from discrete alphabets over a noisy channel, such as communication via radio transmission. In this context, information theory tells how to design optimal codes and calculate the expected length of messages sampled from specific probability distributions using various encoding schemes. In the context of machine learning, we can also apply information theory to continuous variables where some of these message length interpretations do not apply. This field is fundamental to many areas of electrical engineering and computer science. In this textbook, we mostly use a few key ideas from information theory to characterize probability distributions or quantify similarity between probability distributions. For more detail on information theory, see (Cover and Thomas, 2006; MacKay, 2003).

The basic intuition behind information theory is that learning that an unlikely event has occurred is more informative than learning that a likely event has occurred. A message saying “the sun rose this morning” is so uninformative as to be unnecessary to send, but a message saying “there was a solar eclipse this morning” is very informative. We would like to quantify information in a way that formalizes this intuition. Specifically,

• Likely events should have low information content, and in the extreme case, events that are guaranteed to happen should have no information content whatsoever.
• Less likely events should have higher information content.
• Independent events should have additive information. For example, finding out that a tossed coin has come up as heads twice should convey twice as
much information as finding out that a tossed coin has come up as heads once.

In order to satisfy all three of these properties, we define the self-information of an event x = x to be

I(x) = − log P(x).    (3.1)

In this book, we always use log to mean the natural logarithm, with base e. Our definition of I(x) is therefore written in units of nats. One nat is the amount of information gained by observing an event of probability 1/e. Other texts use base-2 logarithms and units called bits or shannons; information measured in bits is just a rescaling of information measured in nats. When x is continuous, we use the same definition of information by analogy, but some of the properties from the discrete case are lost. For example, an event with unit density still has zero information, despite not being an event that is guaranteed to occur.

Self-information deals only with a single outcome. We can quantify the amount of uncertainty in an entire probability distribution using the Shannon entropy¹:

H(x) = Ex∼P [I(x)] = −Ex∼P [log P(x)],    (3.2)

also denoted H(P). In other words, the Shannon entropy of a distribution is the expected amount of information in an event drawn from that distribution. It actually gives a lower bound on the number of bits (if the logarithm is base 2, otherwise the units are different) needed on average to encode symbols drawn from a distribution P. Distributions that are nearly deterministic (where the outcome is nearly certain) have low entropy; distributions that are closer to uniform have high entropy. See Fig. 3.1 for a demonstration. When x is continuous, the Shannon entropy is known as the differential entropy.

If we have two separate probability distributions P(x) and Q(x) over the same random variable x, we can measure how different these two distributions are using the Kullback-Leibler (KL) divergence:

$$D_{\mathrm{KL}}(Q \,\|\, P) = \mathbb{E}_{x \sim Q}\left[ \log \frac{Q(x)}{P(x)} \right]. \qquad (3.3)$$

In the case of discrete variables, it is the extra amount of information needed to send a message containing symbols drawn with probability Q, when we incorrectly believe that they were drawn with probability P, i.e., it measures how much it hurts to use P as a model when Q is the ground truth.

¹ Shannon entropy is named for Claude Shannon, the father of information theory (Shannon, 1948, 1949). For an interesting biographical account of Shannon and some of his contemporaries, see Fortune’s Formula by William Poundstone (Poundstone, 2005).
Figure 3.1: This plot shows how distributions that are closer to deterministic have low Shannon entropy while distributions that are close to uniform have high Shannon entropy. On the horizontal axis, we plot p, the probability of a binary random variable being equal to 1. When p is near 0, the distribution is nearly deterministic, because the random variable is nearly always 0. When p is near 1, the distribution is nearly deterministic, because the random variable is nearly always 1. When p = 0.5, the entropy is maximal, because the distribution is uniform over the two outcomes.
The KL divergence has many useful properties, most notably that it is non-negative. The KL divergence is 0 if and only if P and Q are the same distribution in the case of discrete variables, or equal “almost everywhere” in the case of continuous variables (see section 3.13 for details). Because the KL divergence is non-negative and measures the difference between two distributions, it is often conceptualized as measuring some sort of distance between these distributions. However, it is not a true distance measure because it is not symmetric, i.e. DKL(P‖Q) ≠ DKL(Q‖P) for some P and Q. When computing many of these quantities, it is common to encounter expressions of the form 0 log 0. By convention, in the context of information theory, we treat these expressions as limx→0 x log x = 0.
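For discrete distributions these quantities are straightforward to compute. The sketch below (made-up distributions, not from the original text) computes Shannon entropy and the KL divergence in nats, handling 0 log 0 by the convention above, and illustrates the asymmetry of the KL divergence.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in nats, treating 0 log 0 as 0."""
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return -np.sum(p[nz] * np.log(p[nz]))

def kl(q, p):
    """D_KL(Q || P) in nats; assumes p > 0 wherever q > 0."""
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    nz = q > 0
    return np.sum(q[nz] * np.log(q[nz] / p[nz]))

uniform = np.array([0.25, 0.25, 0.25, 0.25])
peaked  = np.array([0.97, 0.01, 0.01, 0.01])

print(entropy(uniform), entropy(peaked))          # the uniform distribution has higher entropy
print(kl(peaked, uniform), kl(uniform, peaked))   # the KL divergence is not symmetric
print(kl(uniform, uniform))                       # 0 when the distributions match
```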
3.10 Common Probability Distributions
Several simple probability distributions are useful in many contexts in machine learning.
3.10.1
Bernoulli Distribution
The Bernoulli distribution is a distribution over a single binary random variable. It is controlled by a single parameter $\phi \in [0, 1]$, which gives the probability of the random variable being equal to 1. It has the following properties:
$$P(\mathrm{x} = 1) = \phi$$
$$P(\mathrm{x} = 0) = 1 - \phi$$
$$P(\mathrm{x} = x) = \phi^x (1 - \phi)^{1 - x}$$
$$\mathbb{E}_{\mathrm{x}}[\mathrm{x}] = \phi$$
$$\mathrm{Var}_{\mathrm{x}}(\mathrm{x}) = \phi(1 - \phi)$$
$$H(\mathrm{x}) = (\phi - 1)\log(1 - \phi) - \phi \log \phi.$$
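As a brief illustrative aside (not from the text), these moments and the entropy can be checked against samples with NumPy; the value of φ below is arbitrary.

```python
import numpy as np

phi = 0.3
rng = np.random.default_rng(0)
samples = rng.random(1_000_000) < phi    # Bernoulli(phi) samples as booleans

print(samples.mean())                    # ~ phi
print(samples.var())                     # ~ phi * (1 - phi)
entropy = (phi - 1) * np.log(1 - phi) - phi * np.log(phi)
print(entropy)                           # Shannon entropy in nats
```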
3.10.2  Multinoulli Distribution
The multinoulli or categorical distribution is a distribution over a single discrete variable with k different states, where k is finite². The multinoulli distribution
² “Multinoulli” is a recently coined term. The multinoulli distribution is a special case of the multinomial distribution. A multinomial distribution is the distribution over vectors in $\{0, \ldots, k\}^n$ representing how many times each of the k categories is visited when n samples are drawn from a multinoulli distribution. Many texts use the term “multinomial” to refer to multinoulli distributions without clarifying that they refer only to the n = 1 case.
is parametrized by a vector $p \in [0, 1]^{k-1}$, where $p_i$ gives the probability of the $i$-th state. The final, $k$-th state’s probability is given by $1 - \mathbf{1}^\top p$. Note that we must constrain $\mathbf{1}^\top p \leq 1$. Multinoulli distributions are often used to refer to distributions over categories of objects, so we do not usually assume that state 1 has numerical value 1, etc. For this reason, we do not usually need to compute the expectation or variance of multinoulli-distributed random variables.
The Bernoulli and multinoulli distributions are sufficient to describe any distribution over their domain. This is because they model discrete variables for which it is feasible to simply enumerate all of the states. When dealing with continuous variables, there are uncountably many states, so any distribution described by a small number of parameters must impose strict limits on the distribution.
3.10.3  Gaussian Distribution
The most commonly used distribution over real numbers is the normal distribution, also known as the Gaussian distribution:
$$\mathcal{N}(x \mid \mu, \sigma^2) = \sqrt{\frac{1}{2\pi\sigma^2}} \exp\left(-\frac{1}{2\sigma^2}(x - \mu)^2\right).$$
See Fig. 3.2 for a schematic. The two parameters $\mu \in \mathbb{R}$ and $\sigma \in \mathbb{R}^+$ control the normal distribution. $\mu$ gives the coordinate of the central peak. This is also the mean of the distribution, i.e. $\mathbb{E}[\mathrm{x}] = \mu$. The standard deviation of the distribution is given by $\sigma$, i.e. $\mathrm{Var}(\mathrm{x}) = \sigma^2$.
Note that when we evaluate the PDF, we need to square and invert $\sigma$. When we need to frequently evaluate the PDF with different parameter values, a more efficient way of parametrizing the distribution is to use a parameter $\beta \in \mathbb{R}^+$ to control the precision or inverse variance of the distribution:
$$\mathcal{N}(x \mid \mu, \beta^{-1}) = \sqrt{\frac{\beta}{2\pi}} \exp\left(-\frac{1}{2}\beta(x - \mu)^2\right).$$
Normal distributions are a sensible choice for many applications. In the absence of prior knowledge about what form a distribution over the real numbers should take, the normal distribution is a good default choice for two major reasons.
First, many distributions we wish to model are truly close to being normal distributions. The central limit theorem shows that the sum of many independent random variables is approximately normally distributed. This means that in practice, many complicated systems can be modeled successfully as normally distributed noise, even if the system can be decomposed into parts with more structured behavior.
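The two parametrizations can be compared numerically; the following sketch (illustrative only, with arbitrary parameter values) evaluates the density both ways and checks that they agree when β = 1/σ².

```python
import numpy as np

def normal_pdf(x, mu, sigma2):
    """Density of N(x | mu, sigma^2) using the variance parametrization."""
    return np.sqrt(1.0 / (2 * np.pi * sigma2)) * np.exp(-(x - mu) ** 2 / (2 * sigma2))

def normal_pdf_precision(x, mu, beta):
    """Density of N(x | mu, beta^{-1}) using the precision parametrization."""
    return np.sqrt(beta / (2 * np.pi)) * np.exp(-0.5 * beta * (x - mu) ** 2)

x = np.linspace(-3, 3, 7)
# The two forms agree when beta = 1 / sigma^2.
print(np.allclose(normal_pdf(x, 0.0, 2.0), normal_pdf_precision(x, 0.0, 0.5)))  # True
```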
Figure 3.2: The normal distribution: The normal distribution N (x | µ, σ 2 ) exhibits a classic “bell curve” shape, with the x coordinate of its central peak given by µ, and the width of its peak controlled by σ. In this example, we depict the standard normal distribution, with µ = 0 and σ = 1.
Second, the normal distribution in some sense makes the fewest assumptions of any distribution over the reals, so choosing to use it inserts the least amount of prior knowledge into a model. Out of all distributions with the same variance, the normal distribution has the highest entropy. It is not possible to place a uniform distribution on all of $\mathbb{R}$. The closest we can come to doing so is to use a normal distribution with high variance.
The normal distribution generalizes to $\mathbb{R}^n$, in which case it is known as the multivariate normal distribution. It may be parametrized with a positive definite symmetric matrix $\Sigma$:
$$\mathcal{N}(x \mid \mu, \Sigma) = \sqrt{\frac{1}{(2\pi)^n \det(\Sigma)}} \exp\left(-\frac{1}{2}(x - \mu)^\top \Sigma^{-1} (x - \mu)\right).$$
The parameter $\mu$ still gives the mean of the distribution, though now it is vector-valued. The parameter $\Sigma$ gives the covariance matrix of the distribution. As in the univariate case, the covariance is not necessarily the most computationally efficient way to parametrize the distribution, since we need to invert $\Sigma$ to evaluate the PDF. We can instead use a precision matrix $\beta$:
$$\mathcal{N}(x \mid \mu, \beta^{-1}) = \sqrt{\frac{\det(\beta)}{(2\pi)^n}} \exp\left(-\frac{1}{2}(x - \mu)^\top \beta (x - \mu)\right).$$
3.10.4  Dirac Distribution
In some cases, we wish to specify that all of the mass in a probability distribution clusters around a single point. This can be accomplished by defining a PDF using the Dirac delta function, $\delta(x)$:
$$p(x) = \delta(x - \mu).$$
The Dirac delta function is defined such that it is zero-valued everywhere but 0, yet integrates to 1. By defining $p(x)$ to be $\delta$ shifted by $-\mu$ we obtain an infinitely narrow and infinitely high peak of probability mass where $x = \mu$.
A common use of the Dirac delta distribution is as a component of the so-called empirical distribution,
$$\hat{p}(x) = \frac{1}{n}\sum_{i=1}^{n} \delta(x - x_i), \tag{3.4}$$
which puts probability mass $\frac{1}{n}$ on each of the $n$ points $x_1, \ldots, x_n$ forming a given data set or collection of samples. The Dirac delta distribution is only necessary to define the empirical distribution over continuous variables. For discrete variables, the situation is simpler: an empirical distribution can be conceptualized as a
multinoulli distribution, with a probability associated to each possible input value that is simply equal to the empirical frequency of that value in the training set. We can view the empirical distribution formed from a dataset of training examples as specifying the distribution that we sample from when we train a model on this dataset. Another important perspective on the empirical distribution is that it is the probability density that maximizes the likelihood of the training data (see Section 5.8). Many machine learning algorithms can be configured to have arbitrarily high capacity. If given enough capacity, these algorithms will simply learn the empirical distribution. This is a bad outcome because the model does not generalize at all and assigns infinitesimal probability to any point in space that did not occur in the training set. A central problem in machine learning is studying how to limit the capacity of a model in a way that prevents it from simply learning the empirical distribution while also allowing it to learn complicated functions. The empirical distribution is a particular form of mixture, discussed next.
3.10.5  Mixtures of Distributions and Gaussian Mixture
It is also common to define probability distributions by composing other simpler probability distributions. One common way of combining distributions is to construct a mixture distribution. A mixture distribution is made up of several component distributions. On each trial, the choice of which component distribution generates the sample is determined by sampling a component identity from a multinoulli distribution:
$$P(x) = \sum_i P(c = i)\, P(x \mid c = i),$$
where P(c) is the multinoulli distribution over component identities. In chapter 13, we explore the art of building complex probability distributions from simple ones in more detail. Note that we can think of the variable c as a non-observed (or latent) random variable that is related to x through their joint distribution P(x, c) = P(x | c)P(c). Latent variables are discussed further in Section 13.4.2.
A very powerful and common type of mixture model is the Gaussian mixture model, in which the components P(x | c = i) are Gaussians, each with its mean $\mu_i$ and covariance $\Sigma_i$. Some mixtures can have more constraints; for example, the covariances could be shared across components, i.e., $\Sigma_i = \Sigma_j = \Sigma$, or the covariance matrices could be constrained to be diagonal or simply equal to a scalar times the identity. A Gaussian mixture model is a universal approximator of densities, in the sense that any smooth density can be approximated to a particular precision by a Gaussian mixture model with enough components. Gaussian mixture models
have been used in many settings, and are particularly well known for their use as acoustic models in speech recognition (Bahl et al., 1987).
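As an illustration (not part of the original text), the following NumPy sketch performs ancestral sampling from a one-dimensional Gaussian mixture: first a component identity c is drawn from the multinoulli distribution, then x is drawn from the corresponding Gaussian component. The mixture weights, means, and standard deviations are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

weights = np.array([0.5, 0.3, 0.2])   # multinoulli P(c) over component identities
means   = np.array([-2.0, 0.0, 3.0])
stds    = np.array([0.5, 1.0, 0.3])

def sample_gmm(n):
    c = rng.choice(len(weights), size=n, p=weights)   # latent component identity
    return rng.normal(means[c], stds[c])              # x | c is Gaussian

def gmm_density(x):
    # P(x) = sum_i P(c = i) N(x | mu_i, sigma_i^2)
    comps = np.exp(-(x[:, None] - means) ** 2 / (2 * stds ** 2)) \
            / np.sqrt(2 * np.pi * stds ** 2)
    return comps @ weights

x = sample_gmm(5)
print(x, gmm_density(x))
```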
3.11  Useful Properties of Common Functions
Certain functions arise often while working with probability distributions, especially the probability distributions used in deep learning models.
One of these functions is the logistic sigmoid:
$$\sigma(x) = \frac{1}{1 + \exp(-x)}.$$
The logistic sigmoid is commonly used to produce the φ parameter of a Bernoulli distribution because its range is (0, 1), which lies within the valid range of values for the φ parameter. See Fig. 3.3 for a graph of the sigmoid function.
Figure 3.3: The logistic sigmoid function.
Another commonly encountered function is the softplus function (Dugas et al., 2001):
$$\zeta(x) = \log\left(1 + \exp(x)\right).$$
The softplus function can be useful for producing the $\beta$ or $\sigma$ parameter of a normal distribution because its range is $\mathbb{R}^+$. It also arises commonly when manipulating expressions involving sigmoids, as it is the primitive of the sigmoid, i.e., the integral from $-\infty$ to x of the sigmoid. The name of the softplus function comes from the fact that it is a smoothed or “softened” version of $x^+ = \max(0, x)$.
Figure 3.4: The softplus function.
See Fig. 3.4 for a graph of the softplus function. The following properties are all useful enough that you may wish to memorize them:
$$\sigma(x) = \frac{\exp(x)}{\exp(x) + \exp(0)}$$
$$\frac{d}{dx}\sigma(x) = \sigma(x)\left(1 - \sigma(x)\right)$$
$$1 - \sigma(x) = \sigma(-x)$$
$$\log \sigma(x) = -\zeta(-x)$$
$$\frac{d}{dx}\zeta(x) = \sigma(x)$$
$$\forall x \in (0, 1),\quad \sigma^{-1}(x) = \log\left(\frac{x}{1 - x}\right)$$
$$\forall x > 0,\quad \zeta^{-1}(x) = \log\left(\exp(x) - 1\right)$$
$$\zeta(x) - \zeta(-x) = x$$
The function $\sigma^{-1}(x)$ is called the logit in statistics, but this term is more rarely used in machine learning. The final property provides extra justification for the name “softplus”, since $x^+ - x^- = x$.
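These identities are easy to verify numerically; the sketch below (illustrative, not from the text) checks a few of them with NumPy.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    return np.log1p(np.exp(x))    # zeta(x) = log(1 + exp(x))

x = np.linspace(-5, 5, 11)
print(np.allclose(1 - sigmoid(x), sigmoid(-x)))          # 1 - sigma(x) = sigma(-x)
print(np.allclose(np.log(sigmoid(x)), -softplus(-x)))    # log sigma(x) = -zeta(-x)
print(np.allclose(softplus(x) - softplus(-x), x))        # zeta(x) - zeta(-x) = x

p = np.linspace(0.1, 0.9, 9)
logit = np.log(p / (1 - p))                              # sigma^{-1}(p)
print(np.allclose(sigmoid(logit), p))
```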
3.12  Bayes’ Rule
We often find ourselves in a situation where we know P(y | x) and need to know P(x | y). Fortunately, if we also know P(x), we can compute the desired quantity using Bayes’ rule:
$$P(x \mid y) = \frac{P(x)\, P(y \mid x)}{P(y)}.$$
Note that while P(y) appears in the formula, it is usually feasible to compute $P(y) = \sum_x P(y \mid x) P(x)$, so we do not need to begin with knowledge of P(y).
Bayes’ rule is straightforward to derive from the definition of conditional probability, but it is useful to know the name of this formula since many texts refer to it by name. It is named after the Reverend Thomas Bayes, who first discovered a special case of the formula. The general version presented here was independently discovered by Pierre-Simon Laplace.
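The following small sketch (illustrative only; the distributions are made up) applies Bayes’ rule to discrete variables, computing P(y) by the sum rule rather than assuming it is known.

```python
import numpy as np

# Hypothetical discrete example: x has 3 states, y has 2 states.
P_x = np.array([0.5, 0.3, 0.2])          # P(x)
P_y_given_x = np.array([[0.9, 0.1],      # P(y | x = 0)
                        [0.4, 0.6],      # P(y | x = 1)
                        [0.2, 0.8]])     # P(y | x = 2)

# P(y) = sum_x P(y | x) P(x)
P_y = P_y_given_x.T @ P_x

# Bayes' rule: P(x | y) = P(x) P(y | x) / P(y), one column per value of y.
P_x_given_y = (P_x[:, None] * P_y_given_x) / P_y

print(P_x_given_y.sum(axis=0))   # each column sums to 1
```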
3.13  Technical Details of Continuous Variables
A proper formal understanding of continuous random variables and probability density functions requires developing probability theory in terms of a branch of mathematics known as measure theory. Measure theory is beyond the scope of this textbook, but we can briefly sketch some of the issues that measure theory is employed to resolve.
In section 3.3.2, we saw that the probability of x lying in some set S is given by the integral of p(x) over the set S. Some choices of set S can produce paradoxes. For example, it is possible to construct two sets $S_1$ and $S_2$ such that $P(S_1) + P(S_2) > 1$ but $S_1 \cap S_2 = \emptyset$. These sets are generally constructed making very heavy use of the infinite precision of real numbers, for example by making fractal-shaped sets or sets that are defined by transforming the set of rational numbers³. One of the key contributions of measure theory is to provide a characterization of the set of sets that we can compute the probability of without encountering paradoxes. In this book, we only integrate over sets with relatively simple descriptions, so this aspect of measure theory never becomes a relevant concern.
For our purposes, measure theory is more useful for describing theorems that apply to most points in $\mathbb{R}^n$ but do not apply to some corner cases. Measure theory provides a rigorous way of describing that a set of points is negligibly small. Such a set is said to have “measure zero”. We do not formally define this concept in this textbook. However, it is useful to understand the intuition that a set of measure zero occupies no volume in the space we are measuring.
³ The Banach-Tarski theorem provides a fun example of such sets.
For example, within $\mathbb{R}^2$, a line has measure zero, while a filled polygon has positive measure. Likewise, an individual point has measure zero. Any union of countably many sets that each have measure zero also has measure zero (so the set of all the rational numbers has measure zero, for instance).
Another useful term from measure theory is “almost everywhere”. A property that holds almost everywhere holds throughout all of space except for on a set of measure zero. Because the exceptions occupy a negligible amount of space, they can be safely ignored for many applications. Some important results in probability theory hold for all discrete values but only hold “almost everywhere” for continuous values.
One other detail we must be aware of relates to handling random variables that are deterministic functions of one another. Suppose we have two random variables, x and y, such that y = g(x). You might think that $p_y(y) = p_x(g^{-1}(y))$. This is actually not the case. Suppose $y = \frac{x}{2}$ and $x \sim U(0, 1)$. If we use the rule $p_y(y) = p_x(2y)$ then $p_y$ will be 0 everywhere except the interval $[0, \frac{1}{2}]$, and it will be 1 on this interval. This means
$$\int p_y(y)\,dy = \frac{1}{2},$$
which violates the definition of a probability distribution. This common mistake is wrong because it fails to account for the distortion of space introduced by the function g(x). Recall that the probability of x lying in an infinitesimally small region with volume $\delta x$ is given by $p(x)\delta x$. Since g can expand or contract space, the infinitesimal volume surrounding x in x space may have a different volume in y space. To correct the problem, we need to preserve the property
$$|p_y(g(x))\,dy| = |p_x(x)\,dx|.$$
Solving from this, we obtain
$$p_y(y) = p_x(g^{-1}(y)) \left|\frac{\partial x}{\partial y}\right|,$$
or equivalently
$$p_x(x) = p_y(g(x)) \left|\frac{\partial g(x)}{\partial x}\right|. \tag{3.5}$$
In higher dimensions, the absolute value of the derivative generalizes to the determinant of the Jacobian matrix — the matrix with $J_{i,j} = \frac{\partial x_i}{\partial y_j}$.
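This distortion of space is easy to see empirically. The sketch below (illustrative, not from the text) draws samples of x ~ U(0, 1), transforms them with g(x) = x/2, and estimates the density of y with a histogram; the estimated density is close to 2, as the change-of-variables formula predicts, rather than 1.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)        # x ~ U(0, 1), so p_x(x) = 1 on [0, 1]
y = x / 2                        # y = g(x) = x / 2

# Histogram-based density estimate of p_y on [0, 0.5].
hist, edges = np.histogram(y, bins=50, range=(0.0, 0.5), density=True)
print(hist.mean())               # ~ 2.0, matching p_y(y) = p_x(2y) * |dx/dy| = 1 * 2
```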
3.14  Structured Probabilistic Models
Machine learning algorithms often involve probability distributions over a very large number of random variables. Often, these probability distributions involve
direct interactions between relatively few variables. Using a single function to describe the entire joint probability distribution can be very inefficient (both computationally and statistically).
Instead of using a single function to represent a probability distribution, we can split a probability distribution into many factors that we multiply together. For example, suppose we have three random variables, a, b, and c. Suppose that a influences the value of b and b influences the value of c, but that a and c are independent given b. We can represent the probability distribution over all three variables as a product of probability distributions over two variables:
$$p(a, b, c) = p(a)\,p(b \mid a)\,p(c \mid b).$$
These factorizations can greatly reduce the number of parameters needed to describe the distribution. Each factor uses a number of parameters that is exponential in the number of variables in the factor. This means that we can greatly reduce the cost of representing a distribution if we are able to find a factorization into distributions over fewer variables.
We can describe these kinds of factorizations using graphs. Here we use the word “graph” in the sense of graph theory, i.e. a set of vertices that may be connected to each other with edges. When we represent the factorization of a probability distribution with a graph, we call it a structured probabilistic model or graphical model.
There are two main kinds of structured probabilistic models: directed and undirected. Both kinds of graphical models use a graph in which each node in the graph corresponds to a random variable, and an edge connecting two random variables means that the probability distribution is able to represent direct interactions between those two random variables.
Directed models use graphs with directed edges, and they represent factorizations into conditional probability distributions, as in the example above. Specifically, a directed model contains one factor for every random variable $x_i$ in the distribution, and that factor consists of the conditional distribution over $x_i$ given the parents of $x_i$:
$$p(\mathbf{x}) = \prod_i p\left(x_i \mid Pa_{\mathcal{G}}(x_i)\right).$$
See Fig. 3.5 for an example of a directed graph and the factorization of probability distributions it represents.
Undirected models use graphs with undirected edges, and they represent factorizations into a set of functions; unlike in the directed case, these functions are usually not probability distributions of any kind. Any set of nodes that are all connected to each other in $\mathcal{G}$ is called a clique. Each clique $\mathcal{C}^{(i)}$ in an undirected model is associated with a factor $\phi^{(i)}(\mathcal{C}^{(i)})$. These factors are just functions, not probability distributions.
Figure 3.5: A directed graphical model over random variables a, b, c, d, and e. This graph corresponds to probability distributions that can be factored as $p(a, b, c, d, e) = p(a)\,p(b)\,p(c \mid a, b)\,p(d \mid b)\,p(e \mid c)$. This graph allows us to quickly see some properties of the distribution. For example, a and c interact directly, but a and e interact only indirectly via c.
The output of each factor must be non-negative, but there is no constraint that the factor must sum or integrate to 1 like a probability distribution.
The probability of a configuration of random variables is proportional to the product of all of these factors—assignments that result in larger factor values are more likely. Of course, there is no guarantee that this product will sum to 1. We therefore divide by a normalizing constant Z, defined to be the sum or integral over all states of the product of the $\phi$ functions, in order to obtain a normalized probability distribution:
$$p(\mathbf{x}) = \frac{1}{Z} \prod_i \phi^{(i)}\left(\mathcal{C}^{(i)}\right).$$
See Fig. 3.6 for an example of an undirected graph and the factorization of probability distributions it represents.
Keep in mind that these graphical representations of factorizations are a language for describing probability distributions. They are not mutually exclusive families of probability distributions. Being directed or undirected is not a property of a probability distribution; it is a property of a particular description of a probability distribution, but any probability distribution may be described in both ways.
Throughout part I and part II of this book, we will use structured probabilistic models merely as a language to describe which direct probabilistic relationships different machine learning algorithms choose to represent. No further understanding of structured probabilistic models is needed until the discussion of research topics, in part III, where we will explore structured probabilistic models in much greater detail.
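To make the role of the normalizing constant concrete, here is a small illustrative sketch (not from the text) for an undirected model over three binary variables; the clique factors are made up, and Z is computed by brute-force enumeration, which is only feasible for very small models.

```python
import numpy as np
from itertools import product

# Hypothetical undirected model over binary a, b, c with cliques {a, b} and {b, c}.
phi_ab = np.array([[3.0, 1.0],    # phi^(1)(a, b)
                   [1.0, 2.0]])
phi_bc = np.array([[2.0, 1.0],    # phi^(2)(b, c)
                   [1.0, 4.0]])

def unnormalized(a, b, c):
    return phi_ab[a, b] * phi_bc[b, c]

# Normalizing constant Z: sum of the factor product over all joint states.
Z = sum(unnormalized(a, b, c) for a, b, c in product([0, 1], repeat=3))

def p(a, b, c):
    return unnormalized(a, b, c) / Z

print(Z, sum(p(a, b, c) for a, b, c in product([0, 1], repeat=3)))  # probabilities sum to 1
```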
3.15  Example: Naive Bayes
We now know enough probability theory that we can perform some simple applications with a probabilistic model. In this example, we will show how to infer the probability that a patient has the flu using a simple probabilistic model. For now, we will assume that we just know the correct model somehow. Later chapters will cover the concepts needed to learn the model from data.
The Naive Bayes model is a simple probabilistic model that is often used to recognize patterns. The model consists of one random variable c representing a category, and a set of random variables $F = \{f^{(1)}, \ldots, f^{(n)}\}$ representing features of objects in each category. In this example, we’ll use Naive Bayes to diagnose patients as having the flu or not. The random variable c can thus have two values: $c_0$ representing the category of patients who do not have the flu, and $c_1$ representing the category of patients who do.
Figure 3.6: An undirected graphical model over random variables a, b, c, d, and e. This graph corresponds to probability distributions that can be factored as $p(a, b, c, d, e) = \frac{1}{Z}\,\phi^{(1)}(a, b, c)\,\phi^{(2)}(b, d)\,\phi^{(3)}(c, e)$. This graph allows us to quickly see some properties of the distribution. For example, a and c interact directly, but a and e interact only indirectly via c.
Suppose $f^{(1)}$ is the random variable representing whether the patient has a sore throat, with $f^{(1)}_0$ representing no sore throat, and $f^{(1)}_1$ representing a sore throat. Suppose $f^{(2)} \in \mathbb{R}$ is the patient’s temperature in degrees Celsius. When using the Naive Bayes model, we assume that all of the features are independent from each other given the category:
$$P(c, f^{(1)}, \ldots, f^{(n)}) = P(c) \prod_i P(f^{(i)} \mid c).$$
See Fig. 3.7 for a directed graphical model that expresses these conditional independence assumptions. These assumptions are very strong and unlikely to be true in naturally occurring situations, hence the name “naive”. Surprisingly, Naive Bayes often produces good predictions in practice (even though the assumptions do not hold precisely), and is a good baseline model to start with when tackling a new problem.
Beyond these conditional independence assumptions, the Naive Bayes framework does not specify anything about the probability distribution. The specific choice of distributions is left up to the designer. In our flu example, let’s make P(c) a Bernoulli distribution, with $P(c = c_1) = \phi^{(c)}$. We can also make $P(f^{(1)} \mid c)$ a Bernoulli distribution, with $P(f^{(1)} = f^{(1)}_1 \mid c = c) = \phi^{(f)}_c$. In other words, the Bernoulli parameter changes depending on the value of c. Finally, we need to choose the distribution over $f^{(2)}$. Since $f^{(2)}$ is real-valued, a normal distribution is a good choice. Because $f^{(2)}$ is a temperature, there are hard limits to the values it can take on—it cannot go below 0K, for example. Fortunately, these values are so far from the values measured in human patients that we can safely ignore these hard limits. Values outside the hard limits will receive extremely low probability under the normal distribution so long as the mean and variance are set correctly. As with $f^{(1)}$, we need to use different parameters for different values of c, to represent that patients with the flu have different temperatures than patients without it: $f^{(2)} \sim \mathcal{N}(f^{(2)} \mid \mu_c, \sigma^2_c)$.
Now we are ready to determine how likely a patient is to have the flu. To do this, we want to compute P(c | F), but we know P(c) and P(F | c). This suggests that we should use Bayes’ rule to determine the desired distribution. The word “Bayes” in the name “Naive Bayes” comes from this frequent use of Bayes’ rule in conjunction with the model. We begin by applying Bayes’ rule:
Figure 3.7: A directed graphical model depicting the conditional independence assumptions used by the Naive Bayes model, with the category c as the parent of the features $f^{(1)}$ and $f^{(2)}$.
$$P(c \mid F) = \frac{P(c)\,P(F \mid c)}{P(F)}. \tag{3.6}$$
We do not know P(F). Fortunately, it is easy to compute:
$$P(F) = \sum_{c \in c} P(c = c, F) \quad \text{(by the sum rule)}$$
$$= \sum_{c \in c} P(c = c)\,P(F \mid c = c) \quad \text{(by the chain rule)}.$$
Substituting this result back into equation 3.6, we obtain
$$P(c \mid F) = \frac{P(c)\,P(F \mid c)}{\sum_{c \in c} P(c = c)\,P(F \mid c = c)} = \frac{P(c)\,\prod_i P(f^{(i)} \mid c)}{\sum_{c \in c} P(c = c)\,\prod_i P(f^{(i)} \mid c = c)}$$
by the Naive Bayes assumptions. This is as far as we can simplify the expression for a general Naive Bayes model. We can simplify the expression further by substituting in the definitions of the particular probability distributions we have defined for our flu diagnosis example:
$$P(c = c \mid f^{(1)} = f_1, f^{(2)} = f_2) = \frac{g(c)}{\sum_{c' \in c} g(c')}$$
where
$$g(c) = P(c = c)\,P(f^{(1)} = f_1 \mid c = c)\,P(f^{(2)} = f_2 \mid c = c).$$
Since c only has two possible values in our example, we can simplify this to:
$$P(c = 1 \mid f^{(1)} = f_1, f^{(2)} = f_2) = \frac{g(1)}{g(0) + g(1)} = \frac{1}{1 + \frac{g(0)}{g(1)}} = \frac{1}{1 + \exp\left(\log g(0) - \log g(1)\right)} = \sigma\left(\log g(1) - \log g(0)\right). \tag{3.7}$$
To go further, let’s simplify $\log g(i)$:
$$\log g(i) = \log\left[\left(\phi^{(c)}\right)^i \left(1 - \phi^{(c)}\right)^{1-i} \left(\phi^{(f)}_i\right)^{f_1} \left(1 - \phi^{(f)}_i\right)^{1-f_1} \sqrt{\frac{1}{2\pi\sigma^2_i}} \exp\left(-\frac{1}{2\sigma^2_i}(f_2 - \mu_i)^2\right)\right]$$
$$= i \log \phi^{(c)} + (1 - i)\log\left(1 - \phi^{(c)}\right) + f_1 \log \phi^{(f)}_i + (1 - f_1)\log\left(1 - \phi^{(f)}_i\right) + \frac{1}{2}\log\frac{1}{2\pi\sigma^2_i} - \frac{1}{2\sigma^2_i}(f_2 - \mu_i)^2.$$
Substituting this back into equation 3.7, we obtain
$$P(c = 1 \mid f^{(1)} = f_1, f^{(2)} = f_2) = \sigma\Big(\log\phi^{(c)} - \log\left(1 - \phi^{(c)}\right) + f_1\log\phi^{(f)}_1 + (1 - f_1)\log\left(1 - \phi^{(f)}_1\right)$$
$$- f_1\log\phi^{(f)}_0 - (1 - f_1)\log\left(1 - \phi^{(f)}_0\right) - \frac{1}{2}\log 2\pi\sigma^2_1 + \frac{1}{2}\log 2\pi\sigma^2_0 - \frac{1}{2\sigma^2_1}(f_2 - \mu_1)^2 + \frac{1}{2\sigma^2_0}(f_2 - \mu_0)^2\Big).$$
From this formula, we can read off various intuitive properties of the Naive Bayes classifier’s behavior on this example problem, regarding the inference that can be drawn from a trained model. The probability of the patient having the flu grows like a sigmoidal curve. We move farther to the left on this curve as $f_2$, the patient’s temperature, moves farther away from $\mu_1$, the average temperature of a flu patient.
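The entire inference procedure fits in a few lines of code. The sketch below (illustrative only; all parameter values are made up rather than learned from data) evaluates P(c = 1 | f1, f2) using the σ(log g(1) − log g(0)) form derived above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical parameter values for the flu example; in practice these would be
# estimated from data (see chapter 5).
phi_c  = 0.1                      # P(c = 1), prior probability of flu
phi_f  = np.array([0.2, 0.8])     # P(f1 = 1 | c = 0), P(f1 = 1 | c = 1): sore throat
mu     = np.array([36.8, 39.0])   # mean temperature without / with flu
sigma2 = np.array([0.25, 0.64])   # temperature variance without / with flu

def log_g(i, f1, f2):
    """log g(i) as derived above, for class i in {0, 1}."""
    return (i * np.log(phi_c) + (1 - i) * np.log(1 - phi_c)
            + f1 * np.log(phi_f[i]) + (1 - f1) * np.log(1 - phi_f[i])
            + 0.5 * np.log(1.0 / (2 * np.pi * sigma2[i]))
            - (f2 - mu[i]) ** 2 / (2 * sigma2[i]))

def p_flu(f1, f2):
    """P(c = 1 | f1, f2) = sigma(log g(1) - log g(0))."""
    return sigmoid(log_g(1, f1, f2) - log_g(0, f1, f2))

print(p_flu(f1=1, f2=39.5))   # sore throat and high fever: high probability
print(p_flu(f1=0, f2=36.7))   # no symptoms: low probability
```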
Chapter 4
Numerical Computation

Machine learning algorithms usually require a high amount of numerical computation. This typically refers to algorithms that solve mathematical problems by methods that iteratively update estimates of the solution, rather than analytically deriving a formula providing a symbolic expression for the correct solution. Common operations include solving systems of linear equations and finding the value of an argument that minimizes a function. Even just evaluating a mathematical function on a digital computer can be difficult when the function involves real numbers, which cannot be represented precisely using a finite amount of memory.
4.1  Overflow and Underflow
The fundamental difficulty in performing continuous math on a digital computer is that we need to represent infinitely many real numbers with a finite number of bit patterns. This means that for almost all real numbers, we incur some approximation error when we represent the number in the computer. In many cases, this is just rounding error. Rounding error is problematic, especially when it compounds across many operations, and can cause algorithms that work in theory to fail in practice if they are not designed to minimize the accumulation of rounding error.
One form of rounding error that is particularly devastating is underflow. Underflow occurs when numbers near zero are rounded to zero. Many functions behave qualitatively differently when their argument is zero rather than a small positive number. For example, we usually want to avoid division by zero (some software environments will raise exceptions when this occurs, while others will return a result with a placeholder not-a-number value) or taking the logarithm of zero (this is usually treated as −∞, which then becomes not-a-number if it is used for further arithmetic).
Another highly damaging form of numerical error is overflow. Overflow occurs when numbers with large magnitude are approximated as ∞ or −∞. Further arithmetic will usually change these infinite values into not-a-number values.
For an example of the need to design software implementations to deal with overflow and underflow, consider the softmax function, typically used to predict the probabilities associated with a multinoulli distribution:
$$\mathrm{softmax}(x)_i = \frac{\exp(x_i)}{\sum_{j=1}^{n} \exp(x_j)}.$$
Consider what happens when all of the $x_i$ are equal to some constant c. Analytically, we can see that all of the outputs should be equal to $\frac{1}{n}$. Numerically, this may not occur when c has large magnitude. If c is very negative, then exp(c) will underflow. This means the denominator of the softmax will become 0, so the final result is undefined. When c is very large and positive, exp(c) will overflow, again resulting in the expression as a whole being undefined. Both of these difficulties can be resolved by instead evaluating softmax(z) where $z = x - \max_i x_i$. Simple algebra shows that the value of the softmax function is not changed analytically by adding or subtracting a scalar from the input vector. Subtracting $\max_i x_i$ results in the largest argument to exp being 0, which rules out the possibility of overflow. Likewise, at least one term in the denominator has a value of 1, which rules out the possibility of underflow in the denominator leading to a division by zero.
There is still one small problem. Underflow in the numerator can still cause the expression as a whole to evaluate to zero. This means that if we implement log softmax(x) by first running the softmax subroutine then passing the result to the log function, we could erroneously obtain −∞. Instead, we must implement a separate function that calculates log softmax in a numerically stable way. The log softmax function can be stabilized using the same trick as we used to stabilize the softmax function.
For the most part, we do not explicitly detail all of the numerical considerations involved in implementing the various algorithms described in this book. Implementors should keep numerical issues in mind when developing implementations. Many numerical issues can be avoided by using Theano (Bergstra et al., 2010a; Bastien et al., 2012), a software package that automatically detects and stabilizes many common numerically unstable expressions that arise in the context of deep learning.
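A possible NumPy implementation of this stabilization trick (an illustrative sketch, not taken from the text) is shown below; the same shift by max_i x_i stabilizes both softmax and log softmax.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: shift by max(x) before exponentiating."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def log_softmax(x):
    """Stable log softmax: log softmax(x)_i = z_i - log sum_j exp(z_j)."""
    z = x - np.max(x)
    return z - np.log(np.exp(z).sum())

x = np.array([1e4, 1e4, 1e4])                 # naive exp(x) would overflow to inf
print(softmax(x))                             # [1/3, 1/3, 1/3]
print(log_softmax(np.array([-1e4, 0.0])))     # finite values, no spurious -inf
```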
4.2  Poor Conditioning
Conditioning refers to how rapidly a function changes with respect to small changes in its inputs. Functions that change rapidly when their inputs are perturbed slightly can be problematic for scientific computation because rounding errors in the inputs can result in large changes in the output.
Consider the function $f(x) = A^{-1}x$. When $A \in \mathbb{R}^{n \times n}$ has an eigenvalue decomposition, its condition number is
$$\max_{i,j} \left|\frac{\lambda_i}{\lambda_j}\right|,$$
i.e. the ratio of the magnitude of the largest and smallest eigenvalue. When this number is large, matrix inversion is particularly sensitive to error in the input. Note that this is an intrinsic property of the matrix itself, not the result of rounding error during matrix inversion. Poorly conditioned matrices amplify pre-existing errors when we multiply by the true matrix inverse. In practice, the error will be compounded further by numerical errors in the inversion process itself. With iterative algorithms such as solving a linear system (or the worked-out example of linear least squares by gradient descent, Section 4.5), ill-conditioning (in that case, of the linear system matrix) yields very slow convergence of the iterative algorithm, i.e., more iterations are needed to achieve some given degree of approximation to the final solution.
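The condition number is straightforward to compute from the eigenvalues; the following sketch (illustrative, with arbitrary matrices) contrasts a well-conditioned and a poorly conditioned matrix.

```python
import numpy as np

def condition_number(A):
    """Ratio of the largest to smallest eigenvalue magnitude of A."""
    eigvals = np.linalg.eigvals(A)
    return np.max(np.abs(eigvals)) / np.min(np.abs(eigvals))

well  = np.array([[2.0, 0.0], [0.0, 1.0]])     # eigenvalues 2 and 1
badly = np.array([[1.0, 0.0], [0.0, 1e-6]])    # eigenvalues 1 and 1e-6

print(condition_number(well))    # 2.0
print(condition_number(badly))   # 1e6: inversion amplifies input error greatly
```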
4.3  Gradient-Based Optimization
Most deep learning algorithms involve optimization of some sort. Optimization refers to the task of either minimizing or maximizing some function f(x) by altering x. We usually phrase most optimization problems in terms of minimizing f(x). Maximization may be accomplished via a minimization algorithm by minimizing −f(x).
The function we want to minimize or maximize is called the objective function. When we are minimizing it, we may also call it the cost function, loss function, or error function. We often denote the value that minimizes or maximizes a function with a superscript ∗. For example, we might say $x^* = \arg\min f(x)$.
We assume the reader is already familiar with calculus, but provide a brief review of how calculus concepts relate to optimization here.
Suppose we have a function y = f(x), where both x and y are real numbers. The derivative of this function is denoted as $f'(x)$ or as $\frac{dy}{dx}$. The derivative $f'(x)$ gives the slope of f(x) at the point x. In other words, it specifies how to scale a small change in the input in order to obtain the corresponding change in the output: $f(x + \epsilon) \approx f(x) + \epsilon f'(x)$.
The derivative is therefore useful for minimizing a function because it tells us how to change x in order to make a small improvement in y.
Figure 4.1: An illustration of how the derivatives of a function can be used to follow the function downhill to a minimum. This technique is called gradient descent.
For example, we know that $f(x - \epsilon\,\mathrm{sign}(f'(x)))$ is less than f(x) for small enough $\epsilon$. We can thus reduce f(x) by moving x in small steps with the opposite sign of the derivative. This technique is called gradient descent (Cauchy, 1847a). See Fig. 4.1 for an example of this technique.
When $f'(x) = 0$, the derivative provides no information about which direction to move. Points where $f'(x) = 0$ are known as critical points or stationary points. A local minimum is a point where f(x) is lower than at all neighboring points, so it is no longer possible to decrease f(x) by making infinitesimal steps. A local maximum is a point where f(x) is higher than at all neighboring points, so it is not possible to increase f(x) by making infinitesimal steps. Some critical points are neither maxima nor minima. These are known as saddle points. See Fig. 4.2 for examples of each type of critical point.
A point that obtains the absolute lowest value of f(x) is a global minimum. It is possible for there to be only one global minimum or multiple global minima of the function. It is also possible for there to be local minima that are not globally optimal.
Figure 4.2: Examples of each of the three types of critical points in 1-D. A critical point is a point with zero slope. Such a point can either be a local minimum, which is lower than the neighboring points, a local maximum, which is higher than the neighboring points, or a saddle point, which has neighbors that are both higher and lower than the point itself. The situation in higher dimension is qualitatively different, especially for saddle points: see Figures 4.4 and 4.5.
Figure 4.3: Optimization algorithms may fail to find a global minimum when there are multiple local minima or plateaus present. In the context of deep learning, we generally accept such solutions even though they are not truly minimal, so long as they correspond to significantly low values of the cost function. Optimizing MLPs was believed to suffer from the presence of many local minima, but this idea is questioned in recent work (Dauphin et al., 2014), with saddle points being considered as the more serious issue.
In the context of deep learning, we optimize functions that may have many local minima that are not optimal, and many saddle points surrounded by very flat regions. All of this makes optimization very difficult, especially when the input to the function is multidimensional. We therefore usually settle for finding a value of f that is very low, but not necessarily minimal in any formal sense. See Fig. 4.3 for an example. The figure caption also raises the question of whether local minima or saddle points and plateaus are more to blame for the difficulties one may encounter in training deep networks, a question that is discussed further in Chapter 8, in particular Section 8.2.3.
We often minimize functions that have multiple inputs: $f : \mathbb{R}^n \to \mathbb{R}$. Note that for the concept of “minimization” to make sense, there must still be only one output.
For these functions, we must make use of the concept of partial derivatives. The partial derivative $\frac{\partial}{\partial x_i} f(x)$ measures how f changes as only the variable $x_i$ increases at point x. The gradient generalizes the notion of derivative to the case where the derivative is with respect to a vector: the gradient of f is the vector containing all of the partial derivatives, denoted $\nabla_x f(x)$. Element i of the gradient is the partial derivative of f with respect to $x_i$. In multiple dimensions, critical points are points where every element of the gradient is equal to zero.
The directional derivative in direction u (a unit vector) is the slope of the function f in direction u. In other words, it is the derivative of the function $f(x + \alpha u)$ with respect to $\alpha$, evaluated at $\alpha = 0$. Using the chain rule, we can see that this is $u^\top \nabla_x f(x)$.
To minimize f, we would like to find the direction in which f decreases the fastest. We can do this using the directional derivative:
$$\min_{u,\, u^\top u = 1} u^\top \nabla_x f(x) = \min_{u,\, u^\top u = 1} \|u\|_2 \|\nabla_x f(x)\|_2 \cos\theta,$$
where $\theta$ is the angle between u and the gradient. Substituting in $\|u\|_2 = 1$ and ignoring factors that don’t depend on u, this simplifies to $\min_u \cos\theta$. This is minimized when u points in the opposite direction as the gradient. In other words, the gradient points directly uphill, and the negative gradient points directly downhill. We can decrease f by moving in the direction of the negative gradient. This is known as the method of steepest descent or gradient descent.
Steepest descent proposes a new point
$$x' = x - \epsilon \nabla_x f(x),$$
where $\epsilon$ is the size of the step. We can choose $\epsilon$ in several different ways. A popular approach is to set $\epsilon$ to a small constant. Sometimes, we can solve for the step size that makes the directional derivative vanish. Another approach is to evaluate $f(x - \epsilon \nabla_x f(x))$ for several values of $\epsilon$ and choose the one that results in the smallest objective function value. This last strategy is called a line search.
Steepest descent converges when every element of the gradient is zero (or, in practice, very close to zero). In some cases, we may be able to avoid running this iterative algorithm, and just jump directly to the critical point by solving the equation $\nabla_x f(x) = 0$ for x.
Sometimes we need to find all of the partial derivatives of all of the elements of a vector-valued function. The matrix containing all such partial derivatives is known as a Jacobian matrix. Specifically, if we have a function $f : \mathbb{R}^m \to \mathbb{R}^n$, then the Jacobian matrix $J \in \mathbb{R}^{n \times m}$ of f is defined such that $J_{i,j} = \frac{\partial}{\partial x_j} f(x)_i$.
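To make the steepest descent update concrete, here is a short illustrative sketch (not from the text) that minimizes an arbitrary smooth function of two variables with a fixed step size ε; the function and its gradient are specific to this example.

```python
import numpy as np

def f(x):
    return 0.5 * np.sum(x ** 2) + np.sin(x[0])   # an arbitrary smooth function

def grad_f(x):
    g = x.copy()
    g[0] += np.cos(x[0])                         # gradient of the sin term
    return g

x = np.array([3.0, -2.0])
eps = 0.1                                        # step size
for _ in range(200):
    g = grad_f(x)
    if np.linalg.norm(g) < 1e-8:                 # converged: gradient near zero
        break
    x = x - eps * g                              # steepest descent update

print(x, f(x))
```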
We are also sometimes interested in a derivative of a derivative. This is known as a second derivative. For example, for a function $f : \mathbb{R}^n \to \mathbb{R}$, the derivative with respect to $x_i$ of the derivative of f with respect to $x_j$ is denoted as $\frac{\partial^2}{\partial x_i \partial x_j} f$. In a single dimension, we can denote $\frac{d^2}{dx^2} f$ by $f''(x)$. The second derivative tells us how the first derivative will change as we vary the input. This means it can be useful for determining whether a critical point is a local maximum, a local minimum, or a saddle point. Recall that on a critical point, $f'(x) = 0$. When $f''(x) > 0$, this means that $f'(x)$ increases as we move to the right, and $f'(x)$ decreases as we move to the left. This means $f'(x - \epsilon) < 0$ and $f'(x + \epsilon) > 0$ for small enough $\epsilon$. In other words, as we move right, the slope begins to point uphill to the right, and as we move left, the slope begins to point uphill to the left. Thus, when $f'(x) = 0$ and $f''(x) > 0$, we can conclude that x is a local minimum. Similarly, when $f'(x) = 0$ and $f''(x) < 0$, we can conclude that x is a local maximum. This is known as the second derivative test. Unfortunately, when $f''(x) = 0$, the test is inconclusive. In this case x may be a saddle point, or a part of a flat region.
In multiple dimensions, we need to examine all of the second derivatives of the function. These derivatives can be collected together into a matrix called the Hessian matrix. The Hessian matrix $H(f)(x)$ is defined such that
$$H(f)(x)_{i,j} = \frac{\partial^2}{\partial x_i \partial x_j} f(x).$$
Equivalently, the Hessian is the Jacobian of the gradient.
Anywhere that the second partial derivatives are continuous, the differential operators are commutative, i.e. their order can be swapped:
$$\frac{\partial^2}{\partial x_i \partial x_j} f(x) = \frac{\partial^2}{\partial x_j \partial x_i} f(x).$$
This implies that $H_{i,j} = H_{j,i}$, so the Hessian matrix is symmetric at such points. Most of the functions we encounter in the context of deep learning have a symmetric Hessian almost everywhere. Because the Hessian matrix is real and symmetric, we can decompose it into a set of real eigenvalues and an orthogonal basis of eigenvectors.
Using the eigendecomposition of the Hessian matrix, we can generalize the second derivative test to multiple dimensions. At a critical point, where $\nabla_x f(x) = 0$, we can examine the eigenvalues of the Hessian to determine whether the critical point is a local maximum, local minimum, or saddle point. When the Hessian is positive definite¹, the point is a local minimum. This can be seen by observing that the second directional derivative $u^\top H u$ is then positive in any direction u. Likewise, when the Hessian is negative definite², the point is a local maximum.
¹ All its eigenvalues are positive.
² All its eigenvalues are negative.
Figure 4.4: A saddle point containing both positive and negative curvature. The function in this example is $f(x) = x_1^2 - x_2^2$. Along the axis corresponding to $x_1$, the function curves upward. This axis is an eigenvector of the Hessian and has a positive eigenvalue. Along the axis corresponding to $x_2$, the function curves downward. This direction is an eigenvector of the Hessian with negative eigenvalue. The name “saddle point” derives from the saddle-like shape of this function. This is the quintessential example of a function with a saddle point. Note that in more than one dimension, it is not necessary to have an eigenvalue of 0 in order to get a saddle point: it is only necessary to have both positive and negative eigenvalues. See Section 8.2.3 for a longer discussion of saddle points in deep nets.
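The multidimensional second derivative test is easy to apply numerically once the Hessian is known; the sketch below (illustrative only) classifies the critical point of f(x) = x1^2 - x2^2 from the eigenvalues of its Hessian.

```python
import numpy as np

def classify_critical_point(hessian):
    """Classify a critical point from the eigenvalues of a symmetric Hessian."""
    eigvals = np.linalg.eigvalsh(hessian)
    if np.all(eigvals > 0):
        return "local minimum"
    if np.all(eigvals < 0):
        return "local maximum"
    if np.any(eigvals > 0) and np.any(eigvals < 0):
        return "saddle point"
    return "inconclusive"   # some eigenvalue is zero

# Hessian of f(x) = x1^2 - x2^2 at the critical point (0, 0).
H_saddle = np.array([[2.0, 0.0],
                     [0.0, -2.0]])
print(classify_critical_point(H_saddle))          # saddle point
print(classify_critical_point(np.eye(2) * 2.0))   # local minimum
```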
Figure 4.5: Gradient descent fails to exploit the curvature information contained in the Hessian matrix. Here we use gradient descent on a quadratic function whose Hessian matrix has condition number 5 (curvature is 5 times larger in one direction than in some other direction). The lines above the mesh indicate the path followed by gradient descent. This very elongated quadratic function resembles a long canyon. Gradient descent wastes time repeatedly descending canyon walls, because they are the steepest feature. Because the step size is somewhat too large, it has a tendency to overshoot the bottom of the function and thus needs to descend the opposite canyon wall on the next iteration. The large positive eigenvalue of the Hessian corresponding to the eigenvector pointed in this direction indicates that this directional derivative is rapidly increasing, so an optimization algorithm based on the Hessian could predict that the steepest direction is not actually a promising search direction in this context. Note that some recent results suggest that the above picture is not representative for deep highly non-linear networks. See Section 8.2.4 for more on this subject.
Optimization algorithms, such as Newton’s method, that also use the Hessian matrix are called second-order optimization algorithms (Nocedal and Wright, 2006).
The optimization algorithms employed in most contexts in this book are applicable to a wide variety of functions, but come with almost no guarantees. This is because the family of functions used in deep learning is quite complicated. In many other fields, the dominant approach to optimization is to design optimization algorithms for a limited family of functions. Perhaps the most successful field of specialized optimization is convex optimization. Convex optimization algorithms are able to provide many more guarantees, but are applicable only to functions for which the Hessian is positive definite everywhere. Such functions are well-behaved because they lack saddle points and all of their local minima are necessarily global minima. However, most problems in deep learning are difficult to express in terms of convex optimization. Convex optimization is used only as a subroutine of some deep learning algorithms. Ideas from the analysis of convex optimization algorithms can be useful for proving the convergence of deep learning algorithms. However, in general, the importance of convex optimization is greatly diminished in the context of deep learning. For more information about convex optimization, see Boyd and Vandenberghe (2004) or Rockafellar (1997).
4.4  Constrained Optimization
Sometimes we wish not only to maximize or minimize a function f(x) over all possible values of x. Instead we may wish to find the maximal or minimal value of f(x) for values of x in some set S. This is known as constrained optimization. Points x that lie within the set S are called feasible points in constrained optimization terminology.
One simple approach to constrained optimization is simply to modify gradient descent taking the constraint into account. If we use a small constant step size $\epsilon$, we can make gradient descent steps, then project the result back into S. If we use a line search (see previous section), we can search only over step sizes that yield new x points that are feasible, or we can project each point on the line back into the constraint region. When possible, this method can be made more efficient by projecting the gradient into the tangent space of the feasible region before taking the step or beginning the line search (Rosen, 1960).
A more sophisticated approach is to design a different, unconstrained optimization problem whose solution can be converted into a solution to the original, constrained optimization problem. For example, if we want to minimize f(x) for $x \in \mathbb{R}^2$ with x constrained to have exactly unit $L^2$ norm, we can instead minimize $g(\theta) = f([\cos\theta, \sin\theta]^\top)$ with respect to $\theta$, then return $[\cos\theta, \sin\theta]$ as the solution to the original problem. This approach requires creativity; the transformation
between optimization problems must be designed specifically for each case we encounter.
The Karush–Kuhn–Tucker (KKT) approach³ provides a very general solution to constrained optimization. With the KKT approach, we introduce a new function called the generalized Lagrangian or generalized Lagrange function. To define the Lagrangian, we first need to describe S in terms of equations and inequalities. We want a description of S in terms of m functions $g_i$ and n functions $h_j$ so that $S = \{x \mid \forall i,\ g_i(x) = 0 \text{ and } \forall j,\ h_j(x) \leq 0\}$. The equations involving $g_i$ are called the equality constraints and the inequalities involving $h_j$ are called inequality constraints.
We introduce new variables $\lambda_i$ and $\alpha_j$ for each constraint; these are called the KKT multipliers. The generalized Lagrangian is then defined as
$$L(x, \lambda, \alpha) = f(x) + \sum_i \lambda_i g_i(x) + \sum_j \alpha_j h_j(x).$$
We can now solve a constrained minimization problem using unconstrained optimization of the generalized Lagrangian. Observe that, so long as at least one feasible point exists and f(x) is not permitted to have value ∞, then
$$\min_x \max_\lambda \max_{\alpha,\, \alpha \geq 0} L(x, \lambda, \alpha)$$
has the same optimal objective function value and set of optimal points x as
$$\min_{x \in S} f(x).$$
This follows because any time the constraints are satisfied,
$$\max_\lambda \max_{\alpha,\, \alpha \geq 0} L(x, \lambda, \alpha) = f(x),$$
while any time a constraint is violated,
$$\max_\lambda \max_{\alpha,\, \alpha \geq 0} L(x, \lambda, \alpha) = \infty.$$
These properties guarantee that no infeasible point will ever be optimal, and that the optimum within the feasible points is unchanged.
To perform constrained maximization, we can construct the generalized Lagrange function of −f(x), which leads to this optimization problem:
$$\min_x \max_\lambda \max_{\alpha,\, \alpha \geq 0} -f(x) + \sum_i \lambda_i g_i(x) + \sum_j \alpha_j h_j(x).$$
We may also convert this to a problem with maximization in the outer loop:
$$\max_x \min_\lambda \min_{\alpha,\, \alpha \geq 0} f(x) + \sum_i \lambda_i g_i(x) - \sum_j \alpha_j h_j(x).$$
³ The KKT approach generalizes the method of Lagrange multipliers, which only allows equality constraints.
Note that the sign of the term for the equality constraints does not matter; we may define it with addition or subtraction as we wish, because the optimization is free to choose any sign for each $\lambda_i$.
The inequality constraints are particularly interesting. We say that a constraint $h_i(x)$ is active if $h_i(x^*) = 0$. If a constraint is not active, then the solution to the problem is the same whether or not that constraint exists. Because an inactive $h_i$ has negative value, the solution to $\min_x \max_\lambda \max_{\alpha,\, \alpha \geq 0} L(x, \lambda, \alpha)$ will have $\alpha_i = 0$. We can thus observe that at the solution, $\alpha \odot h(x) = 0$. In other words, for all i, we know that at least one of the constraints $\alpha_i \geq 0$ and $h_i(x) \leq 0$ must be active at the solution. To gain some intuition for this idea, we can say that either the solution is on the boundary imposed by the inequality and we must use its KKT multiplier to influence the solution to x, or the inequality has no influence on the solution and we represent this by zeroing out its KKT multiplier.
The properties that the gradient of the generalized Lagrangian is zero, all constraints on both x and the KKT multipliers are satisfied, and $\alpha \odot h(x) = 0$ are called the Karush-Kuhn-Tucker (KKT) conditions (Karush, 1939; Kuhn and Tucker, 1951). Together, these properties describe the optimal points of constrained optimization problems.
In the case where there are no inequality constraints, the KKT approach simplifies to the method of Lagrange multipliers. For more information about the KKT approach, see Nocedal and Wright (2006).
4.5  Example: Linear Least Squares
Suppose we want to find the value of x that minimizes
$$f(x) = \frac{1}{2}\|Ax - b\|_2^2.$$
There are specialized linear algebra algorithms that can solve this problem efficiently. However, we can also explore how to solve it using gradient-based optimization as a simple example of how these techniques work.
First, we need to obtain the gradient:
$$\nabla_x f(x) = A^\top(Ax - b) = A^\top A x - A^\top b.$$
We can then follow this gradient downhill, taking small steps. See Algorithm 4.1 for details.
Algorithm 4.1 An algorithm to minimize $f(x) = \frac{1}{2}\|Ax - b\|_2^2$ with respect to x using gradient descent.

Set $\epsilon$, the step size, and $\delta$, the tolerance, to small, positive numbers.
while $\|A^\top A x - A^\top b\|_2 > \delta$ do
  $x \leftarrow x - \epsilon\left(A^\top A x - A^\top b\right)$
end while

One can also solve this problem using Newton’s method. In this case, because the true function is quadratic, the quadratic approximation employed by Newton’s method is exact, and the algorithm converges to the global minimum in a single step.
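A direct NumPy translation of Algorithm 4.1 might look as follows (an illustrative sketch; the matrix, step size, and tolerance are chosen arbitrarily).

```python
import numpy as np

def least_squares_gd(A, b, eps=0.02, delta=1e-7, max_iters=100_000):
    """Minimize 0.5 * ||Ax - b||_2^2 by gradient descent (Algorithm 4.1)."""
    x = np.zeros(A.shape[1])
    for _ in range(max_iters):
        grad = A.T @ A @ x - A.T @ b
        if np.linalg.norm(grad) <= delta:
            break
        x = x - eps * grad
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 3))
b = rng.normal(size=5)

x_gd = least_squares_gd(A, b)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)    # specialized solver for comparison
print(np.allclose(x_gd, x_exact, atol=1e-4))        # True
```

Because the objective is quadratic, a single Newton step (equivalently, solving the normal equations) reaches the same solution exactly, which is what the lstsq comparison above illustrates.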
min max L(x, λ). x
λ,λ≥0
The solution to the unconstrained least squares problem is given by x = A +b. If this point is feasible, then it is the solution to the constrained problem. Otherwise, we must find a solution where the constraint is active. By differentiating the Lagrangian with respect to x, we obtain the equation A> Ax − A> b + 2λx = 0. This tells us that the solution will take the form x = (A>A + 2λI) −1 b. The magnitude of λ must be chosen such that the result obeys the constraint. We can find this value by performing gradient ascent on λ. To do so, observe ∂ L(x, λ) = x> x − 1. ∂λ When the norm of x exceeds 1, this derivative is positive, so to ascend the gradient and increase the Lagrangian with respect to λ, we increase λ. This will in turn shrink the optimal x. The process continues until x has the correct norm and the derivative on λ is 0. 88
Chapter 5
Machine Learning Basics Deep learning is a specific kind of machine learning. In order to understand deep learning well, one must have a solid understanding of the basic principles of machine learning. This chapter provides a brief course in the most important general principles that will be applied throughout the rest of the book. Novice readers or those that want a wider perspective are encouraged to consider machine learning textbooks with a more comprehensive coverage of the fundamentals, such as Murphy (2012) or Bishop (2006). If you are already familiar with machine learning basics, feel free to skip ahead to Section 5.13. That section covers some perspectives on traditional machine learning techniques that have strongly influenced the development of deep learning algorithms.
5.1
Learning Algorithms
A machine learning algorithm is an algorithm that is able to learn from data. But what do we mean by learning? A popular definition of learning in the context of computer programs is “A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P , if its performance at tasks in T , as measured by P , improves with experience E” (Mitchell, 1997). One can imagine a very wide variety of experiences E, tasks T , and performance measures P , and we do not make any attempt in this book to provide a formal definition of what may be used for each of these entities. Instead, the following sections provide intuitive descriptions and examples of the different kinds of tasks, performance measures, and experiences that can be used to construct machine learning algorithms.
89
CHAPTER 5. MACHINE LEARNING BASICS
5.1.1
CHAPTER 5. MACHINE LEARNING BASICS
The Performance Measure, P
In order to evaluate the abilities of a machine learning algorithm, we must design a quantitative measure of its performance. Usually this performance measure P is specific to the task T being carried out by the system. 93
CHAPTER 5. MACHINE LEARNING BASICS
For tasks such as classification, classification with missing inputs, and transcription, we often measure the accuracy of the model. In the simplest case this is just the proportion of examples for which the model produces the correct output. For tasks such as density estimation, we can measure the probability the model assigns to some examples. Usually we are interested in how well the machine learning algorithm performs on data that it has not seen before, since this determines how well it will work when deployed in the real world. We therefore evaluate these performance measures using a test set of data that is separate from the data used for training the machine learning system. The choice of performance measure may seem straightforward and objective, but it is often difficult to choose a performance measure that corresponds well to the desired behavior of the system. In some cases, this is because it is difficult to decide what should be measured. For example, when performing a transcription task, should we measure the accuracy of the system at transcribing entire sequences, or should we use a more fine-grained performance measure that gives partial credit for getting some elements of the sequence correct? When performing a regression task, should we penalize the system more if it frequently makes medium-sized mistakes or if it rarely makes very large mistakes? These kinds of design choices depend on the application. In other cases, we know what quantity we would ideally like to measure, but measuring it is impractical. For example, this arises frequently in the context of density estimation. Many of the best probabilistic models represent probability distributions only implicitly. Computing the actual probability value assigned to a specific point in space is intractable. In these cases, one must design an alternative criterion that still corresponds to the design objectives, or design a good approximation to the desired criterion.
5.1.3
The Experience, E
Machine learning algorithms can be broadly categorized as unsupervised or supervised by what kind of experience they are allowed to have during the learning process. Most of the learning algorithms in this book can be understood as being allowed to experience an entire dataset. A dataset is a collection of many objects called examples, with each example containing many features that have been objectively measured. Sometimes we will also call examples data points. One of the oldest datasets studied by statisticians and machine learning researchers is the Iris dataset (Fisher, 1936). It is a collection of measurements of different parts of 150 iris plants. Each individual plant corresponds to one exam94
CHAPTER 5. MACHINE LEARNING BASICS
ple. The features within each example are the measurements of each of the parts of the plant: the sepal length, sepal width, petal length, and petal width. The dataset also records which species each plant belonged to. Three different species are represented in the dataset. Unsupervised learning algorithms experience a dataset containing many features, then learn useful properties of the structure of this dataset. In the context of deep learning, we usually want to learn the entire probability distribution that generated a dataset, whether explicitly as in density estimation or implicitly for tasks like synthesis or denoising. Some other unsupervised learning algorithms perform other roles, like dividing the dataset into clusters of similar examples. Supervised learning algorithms experience a dataset containing features, but each example is also associated with a label or target. For example, the Iris dataset is annotated with the species of each iris plant. A supervised learning algorithm can study the Iris dataset and learn to classify iris plants into three different species based on their measurements. Roughly speaking, unsupervised learning involves observing several examples of a random vector x, and attempting to implicitly or explicitly learn the probability distribution p(x), or some interesting properties of that distribution, while supervised learning involves observing several examples of a random vector x and an associated value or vector y, and learning to predict y from x, e.g. estimating p(y | x). The term supervised learning originates from the view of the target y being provided by an instructor or teacher that shows the machine learning system what to do. In unsupervised learning, there is no instructor or teacher, and the algorithm must learn to make sense of the data without this guide. Unsupervised learning and supervised learning are not formally defined terms, and the lines between them are often blurred. Many machine learning technologies can be used to perform both tasks. For example, the chain rule of probability states that for a vector x ∈ Rn , the joint distribution can be decomposed as p(x) = Πni=1 p(x i | x 1, . . . , x i−1). This decomposition means that we can solve the ostensibly unsupervised problem of modeling p(x) by splitting it into n supervised learning problems. Alternatively, we can solve the supervised learning problem of learning p(y | x) by using traditional unsupervised learning technologies to learn the joint distribution p(x, y) and inferring p(x, y) . p(y | x) = P 0 y0 p(x, y )
Though unsupervised learning and supervised learning are not completely formal or distinct concepts, they do help to roughly categorize some of the things we do with machine learning algorithms. Traditionally, people refer to regression, 95
classification, and structured output problems as supervised learning. Density estimation in support of other tasks is usually considered unsupervised learning. Some machine learning algorithms do not just experience a fixed dataset. For example, reinforcement learning algorithms interact with an environment, so there is a feedback loop between the learning system and its experiences. Such algorithms are beyond the scope of this book. Most machine learning algorithms simply experience a dataset. A dataset can be described in many ways. In all cases, a dataset is a collection of examples. Each example is a collection of observations called features collected from a different time or place. If we wish to make a system for recognizing objects from photographs, we might use a machine learning algorithm where each example is a photograph, and the features within the example are the brightness values of each of the pixels within the photograph. If we wish to perform speech recognition, we might collect a dataset where each example is a recording of a person saying a word or sentence, and each of the features is the amplitude of the sound wave at a particular moment in time. One common way of describing a dataset is with a design matrix. A design matrix is a matrix containing a different example in each row. Each column of the matrix corresponds to a different feature. For instance, the Iris dataset contains 150 examples with four features for each example. This means we can represent the dataset with a design matrix X ∈ R150×4 , where Xi,1 is the sepal length of plant i, Xi,2 is the sepal width of plant i, etc. We will describe most of the learning algorithms in this book in terms of how they operate on design matrix datasets. Of course, to describe a dataset as a design matrix, it must be possible to describe each example as a vector, and each of these vectors must be the same size. This is not always possible. For example, if you have a collection of photographs with different widths and heights, then different photographs will contain different numbers of pixels, so not all of the photographs may be described with the same length of vector. Different sections of this book describe how to handle different types of heterogeneous data. In cases like these, rather than describing the dataset as a matrix with m rows, we will describe it as a set containing m elements, e.g. {x(1) , x(2) , . . . , x(m) }. This notation does not imply that any two example vectors x(i) and x(j) have the same size. In the case of supervised learning, the example contains a label or target as well as a collection of features. For example, if we want to use a learning algorithm to perform object recognition from photographs, we need to specify which object appears in each of the photos. We might do this with a numeric code, with 0 signifying a person, 1 signifying a car, 2 signifying a cat, etc. Often when working with a dataset containing a design matrix of feature observations X, we also 96
provide a vector of labels y, with yi providing the label for example i. Of course, sometimes the label may be more than just a single number. For example, if we want to train a speech recognition system to transcribe entire sentences, then the label for each example sentence is a sequence of words. Just as there is no formal definition of supervised and unsupervised learning, there is no rigid taxonomy of datasets or experiences. The structures described here cover most cases, but it is always possible to design new ones for new applications.
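To make the design matrix convention concrete, here is a small Python/NumPy sketch (our own illustration, not part of the original text; the measurements are made up rather than taken from the real Iris data) that builds a design matrix X and a label vector y of the kind just described.

```python
import numpy as np

# A toy design matrix in the spirit of the Iris dataset: each row is one example,
# each column is one feature (sepal length, sepal width, petal length, petal width).
X = np.array([[5.1, 3.5, 1.4, 0.2],
              [7.0, 3.2, 4.7, 1.4],
              [6.3, 3.3, 6.0, 2.5]])   # shape (m, n) = (3, 4)

# Integer labels: y[i] gives the species code of example i (0, 1, or 2).
y = np.array([0, 1, 2])

m, n = X.shape
print("m =", m, "examples with n =", n, "features each")
print("features of example 0:", X[0])
print("label of example 0   :", y[0])
```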
5.2
Example: Linear Regression
In the previous section, we saw that a machine learning algorithm is an algorithm that is capable of improving a computer program’s performance at some task via experience. Now it is time to define some specific machine learning algorithms. Let’s begin with an example of a simple machine learning algorithm: linear regression. In this section, we will only describe what the linear regression algorithm does. We wait until later sections of this chapter to justify the algorithm and show more formally that it actually works.
As the name implies, linear regression solves a regression problem. In other words, the goal is to build a system that can take a vector x ∈ R^n as input and predict the value of a scalar y ∈ R as its output. In the case of linear regression, the output is a linear function of the input. Let ŷ be the value that our model predicts y should take on. We define the output to be

ŷ = w⊤x

where w ∈ R^n is a vector of parameters. Parameters are values that control the behavior of the system. In this case, w_i is the coefficient that we multiply by feature x_i before summing up the contributions from all the features. We can think of w as a set of weights that determine how each feature affects the prediction. If a feature x_i receives a positive weight w_i, then increasing the value of that feature increases the value of our prediction ŷ. If a feature receives a negative weight, then increasing the value of that feature decreases the value of our prediction. If a feature’s weight is large in magnitude, then it has a large effect on the prediction. If a feature’s weight is zero, it has no effect on the prediction.
We thus have a definition of our task T: to predict y from x by outputting ŷ = w⊤x. Next we need a definition of our performance measure, P. Let’s suppose that we have a design matrix of m example inputs that we will not use for training, only for evaluating how well the model performs. We also have a vector of regression targets providing the correct value of y for each of
these examples. Because this dataset will only be used for evaluation, we call it the test set. Let’s refer to the design matrix of inputs as X^(test) and the vector of regression targets as y^(test). One way of measuring the performance of the model is to compute the mean squared error of the model on the test set. If ŷ^(test) gives the predictions of the model on the test set, then the mean squared error is given by

MSE_test = (1/m) Σ_i (ŷ^(test) − y^(test))_i^2.

Intuitively, one can see that this error measure decreases to 0 when ŷ^(test) = y^(test). We can also see that

MSE_test = (1/m) ||ŷ^(test) − y^(test)||_2^2,

so the error increases whenever the Euclidean distance between the predictions and the targets increases.
To make a machine learning algorithm, we need to design an algorithm that will improve the weights w in a way that reduces MSE_test when the algorithm is allowed to gain experience by observing a training set (X^(train), y^(train)). One intuitive way of doing this (which we will justify later) is just to minimize the mean squared error on the training set, MSE_train. To minimize MSE_train, we can simply solve for where its gradient is 0:

∇_w MSE_train = 0
⇒ ∇_w (1/m) ||ŷ^(train) − y^(train)||_2^2 = 0
⇒ (1/m) ∇_w ||X^(train) w − y^(train)||_2^2 = 0
⇒ ∇_w (X^(train) w − y^(train))⊤ (X^(train) w − y^(train)) = 0
⇒ ∇_w (w⊤ X^(train)⊤ X^(train) w − 2 w⊤ X^(train)⊤ y^(train) + y^(train)⊤ y^(train)) = 0
⇒ 2 X^(train)⊤ X^(train) w − 2 X^(train)⊤ y^(train) = 0
⇒ w = (X^(train)⊤ X^(train))^{−1} X^(train)⊤ y^(train)    (5.1)

The system of equations defined by Eq. 5.1 is known as the normal equations. Solving these equations constitutes a simple learning algorithm. For an example of the linear regression learning algorithm in action, see Fig. 5.1.
It’s worth noting that the term linear regression is often used to refer to a slightly more sophisticated model with one additional parameter—an intercept term b. In this model

ŷ = w⊤x + b
Figure 5.1: Consider this example linear regression problem, with a training set consisting of 5 data points, each containing one feature. This means that the weight vector w contains only a single parameter to learn, w1 . (Left) Observe that linear regression learns to set w 1 such that the line y = w1x comes as close as possible to passing through all the training points. (Right) The plotted point indicates the value of w 1 found by the normal equations, which we can see minimizes the mean squared error on the training set.
so the mapping from parameters to predictions is still a linear function but the mapping from features to predictions is now an affine function. This extension to affine functions means that the plot of the model’s predictions still looks like a line, but it need not pass through the origin. We will frequently use the term “linear” when referring to affine functions throughout this book. Linear regression is of course an extremely simple and limited learning algorithm, but it provides an example of how a learning algorithm can work. In the subsequent sections we will describe some of the basic principles underlying learning algorithm design and demonstrate how these principles can be used to build more complicated learning algorithms.
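As a rough illustration of the algorithm just described (our own sketch, not the book’s code; the synthetic data and noise level are arbitrary choices), the following Python/NumPy snippet fits linear regression by solving the normal equations of Eq. 5.1 and evaluates the mean squared error on held-out data. Solving the linear system directly is used in place of forming an explicit matrix inverse, which is the usual numerically preferable choice.

```python
import numpy as np

rng = np.random.RandomState(0)

# Synthetic data: y = x . w_true + noise
m_train, m_test, n = 100, 50, 3
w_true = np.array([2.0, -1.0, 0.5])
X_train = rng.randn(m_train, n)
y_train = X_train @ w_true + 0.1 * rng.randn(m_train)
X_test = rng.randn(m_test, n)
y_test = X_test @ w_true + 0.1 * rng.randn(m_test)

# Normal equations (Eq. 5.1): w = (X^T X)^{-1} X^T y.
# Solving the linear system is preferred to computing the inverse explicitly.
w = np.linalg.solve(X_train.T @ X_train, X_train.T @ y_train)

def mse(X, y, w):
    """Mean squared error (1/m) ||X w - y||^2."""
    r = X @ w - y
    return (r @ r) / len(y)

print("learned w :", w)
print("MSE_train :", mse(X_train, y_train, w))
print("MSE_test  :", mse(X_test, y_test, w))
```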
5.3
Generalization, Capacity, Overfitting and Underfitting
The central challenge in machine learning is that we must perform well on new, previously unseen inputs—not just those on which our model was trained. The ability to perform well on previously unobserved inputs is called generalization. Typically, when training a machine learning model, we have access to a training set, we can compute some error measure on the training set called the training error, and we reduce this training error. So far, what we have described is simply an optimization problem. What separates machine learning from optimization is
that we want the generalization error to be low as well. The generalization error is defined as the expected value of the error on a new input. Here the expectation is taken across different possible inputs, drawn from the distribution of inputs we expect the system to encounter in practice. We typically estimate the generalization error of a machine learning model by measuring its performance on a test set of examples that were collected separately from the training set.
In our linear regression example, we trained the model by minimizing the training error,

(1/m^(train)) ||X^(train) w − y^(train)||_2^2,

but we actually care about the test error, (1/m^(test)) ||X^(test) w − y^(test)||_2^2.
How can we affect performance on the test set when we only get to observe the training set? The field of statistical learning theory provides some answers. If the training and the test set are collected arbitrarily, there is indeed little we can do. If we are allowed to make some assumptions about how the training and test set are collected, then we can make some progress.
We typically make a set of assumptions known collectively as the i.i.d. assumptions. These assumptions are that the examples in each dataset are independent from each other, and that the train set and test set are identically distributed, drawn from the same probability distribution as each other. We call that shared underlying distribution the data generating distribution, or data generating process (which is particularly relevant if the examples are not independent). This probabilistic framework allows us to mathematically study the relationship between training error and test error.
One immediate connection we can observe between the training and test error is that for a randomly selected model, the two have the same expected value. Suppose we have a probability distribution p(x, y) and we sample from it repeatedly to generate the train set and the test set. For some fixed value w, the expected training set error under this sampling process is exactly the same as the expected test set error under this sampling process. The only difference between the two conditions is the name we assign to the dataset we sample. From this observation, we can see that it is natural for there to be some relationship between training and test error under these assumptions.
Of course, when we use a machine learning algorithm, we do not fix the parameters ahead of time, then sample both datasets. We sample the training set, then use it to choose the parameters to reduce training set error, then sample the test set. Under this process, the expected test error is greater than or equal to the expected value of training error. The factors determining how well a machine learning algorithm will perform are its ability to:
1. Make the training error small.

2. Make the gap between training and test error small.

These two factors correspond to the two central challenges in machine learning: underfitting and overfitting. Underfitting occurs when the model is not able to obtain a sufficiently low error value on the training set. Overfitting occurs when the gap between the training error and test error is too large.
We can control whether a model is more likely to overfit or underfit by altering its capacity. Informally, a model’s capacity is its ability to fit a wide variety of functions. Models with low capacity may struggle to fit the training set. Models with high capacity can overfit, i.e., memorize properties of the training set that do not serve them well on the test set.
One way to control the capacity of a learning algorithm is by choosing its hypothesis space, the set of functions that the learning algorithm is allowed to choose as being the solution. For example, the linear regression algorithm has the set of all linear functions of its input as its hypothesis space. We can generalize linear regression to include polynomials, rather than just linear functions, in its hypothesis space. Doing so increases the model’s capacity.
A polynomial of degree one gives us the linear regression model with which we are already familiar, with prediction

ŷ = b + wx.

By introducing x^2 as another feature provided to the linear regression model, we can learn a model that is quadratic as a function of x:

ŷ = b + w_1 x + w_2 x^2.

Note that this is still a linear function of the parameters, so we can still use the normal equations to train the model in closed form. We can continue to add more powers of x as additional features, for example to obtain a polynomial of degree 9:

ŷ = b + Σ_{i=1}^{9} w_i x^i.
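Concretely, the polynomial extension amounts to building a new design matrix whose columns are powers of x and then reusing the same closed-form solution. A minimal sketch (ours; the data, noise level, and degree choices are illustrative):

```python
import numpy as np

def poly_design_matrix(x, degree):
    # Columns are [1, x, x^2, ..., x^degree]; the column of ones absorbs the bias b.
    return np.vander(x, degree + 1, increasing=True)

def fit_poly(x, y, degree):
    X = poly_design_matrix(x, degree)
    # pinv handles the underdetermined case (as many parameters as examples or more).
    return np.linalg.pinv(X) @ y

rng = np.random.RandomState(0)
x_train = rng.uniform(-1, 1, size=10)
y_train = 1.0 + 2.0 * x_train - 3.0 * x_train ** 2 + 0.05 * rng.randn(10)  # true function is quadratic

for degree in (1, 2, 9):
    w = fit_poly(x_train, y_train, degree)
    train_err = np.mean((poly_design_matrix(x_train, degree) @ w - y_train) ** 2)
    print(f"degree {degree}: training MSE = {train_err:.2e}")
```

The degree-1 model cannot drive the training error very low (underfitting), while the degree-9 model fits the training points essentially exactly, as in Fig. 5.2.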
Machine learning algorithms will generally perform best when their capacity is appropriate for the true complexity of the task they need to perform and the amount of training data they are provided with. Models with insufficient capacity are unable to solve complex tasks. Models with high capacity can solve complex tasks, but when their capacity is higher than needed to solve the present task they may overfit.
Fig. 5.2 shows this principle in action. We compare a linear, quadratic, and degree-9 predictor attempting to fit a problem where the true underlying function is quadratic. The linear function is unable to capture the curvature in the true underlying problem, so it underfits. The degree-9 predictor is capable of representing the correct function, but it is also capable of representing infinitely many other functions that pass exactly through the training points, because we have more parameters than training examples. We have little chance of choosing a solution that generalizes well when so many wildly different solutions exist. In this example, the quadratic model is perfectly matched to the true structure of the task, so it generalizes well to new data.
Figure 5.2: We fit three models to this example training set. The training data was generated synthetically, by randomly sampling x values and choosing y deterministically by evaluating a quadratic function. (Left) A linear function fit to the data suffers from underfitting—it cannot capture the curvature that is present in the data. (Center) A quadratic function fit to the data generalizes well to unseen points. It does not suffer from a significant amount of overfitting or underfitting. (Right) A polynomial of degree 9 fit to the data suffers from overfitting. Here we used the Moore-Penrose pseudo-inverse to solve the underdetermined normal equations. The solution passes through all of the training points exactly, but we have not been lucky enough for it to extract the correct structure. It now has a deep valley in between two training points that does not appear in the true underlying function. It also increases sharply on the left side of the data, while the true function decreases in this area.
Here we have only described changing a model’s capacity by changing the number of input features it has (and simultaneously adding new parameters associated with those features). There are many other ways of controlling the capacity of a machine learning algorithm, which we will explore in the sections ahead. Many of these ideas date back to Occam’s razor (c. 1287-1347), also known as the principle of parsimony, which states that among competing hypotheses (here, read functions that could explain the observed data), one should choose the “simpler” one. This idea was formalized and made more precise in the 20th century by the founders of statistical learning theory (Vapnik and Chervonenkis, 1971; Vapnik, 1982; Blumer et al., 1989; Vapnik, 1995). This body of work provides various
Figure 5.3: Typical relationship between capacity (horizontal axis) and both training (bottom curve, dotted) and generalization (or test) error (top curve, bold). At the left end of the graph, training error and generalization error are both high. This is the underfitting regime. As we increase capacity, training error decreases, but the gap between training and generalization error increases. Eventually, the size of this gap outweighs the decrease in training error, and we enter the overfitting regime, where capacity is too large, above the optimal capacity.
means of quantifying model capacity and showing that the discrepancy between training error and generalization error is bounded by a quantity that grows with the ratio of capacity to number of training examples (Vapnik and Chervonenkis, 1971; Vapnik, 1982; Blumer et al., 1989; Vapnik, 1995). These bounds provide intellectual justification that machine learning algorithms can work, but they are rarely used in practice when working with deep learning algorithms. This is in part because the bounds are often quite loose, and in part because it can be quite difficult to determine the capacity of deep learning algorithms. Typically, training error decreases until it asymptotes to the minimum possible error value as model capacity increases (assuming your error measure has a minimum value). Typically, generalization error has a U-shaped curve as a function of model capacity. This is illustrated in Figure 5.3. Training and generalization error also vary as the size of the training set varies. See Fig. 5.4 for an illustration. The figure introduces the notion of parametric and non-parametric learning algorithms. Parametric ones have a fixed maximum capacity (their capacity can still be reduced by various means, such as a poor
optimization procedure), and they are so called because they have a fixed-size parameter vector. On the other hand, non-parametric learners are allowed to set their capacity based on the given data, i.e., the number of parameters is something that can be determined after the data is observed, and typically more data allows a greater capacity, i.e., more parameters. Note that it is possible for the model to have optimal capacity and yet still have a large gap between training and generalization error. In this situation, we can only reduce this gap by gathering more training examples. It’s worth mentioning that capacity is not just determined by which model we use. The model specifies which family of functions the learning algorithm can choose from when varying the parameters in order to reduce a training objective. This is called the representational capacity of the model. In many cases, finding the best function within this family is a very difficult optimization problem. In practice, the learning algorithm does not actually find the best function, just one that significantly reduces the training error. These additional restrictions mean that the model’s effective capacity may be less than its representational capacity.
5.4
The No Free Lunch Theorem
Although learning theory, sketched above, suggests that it is possible to generalize, one should consider a serious caveat, discussed here. Generally speaking, inductive reasoning, or inferring general rules from a limited set of examples, is not logically valid. To logically infer a rule describing every member of a set, one must have information about every member of that set. One may wonder then how the claims that machine learning can generalize well are logically valid. In part, machine learning avoids this problem by offering only probabilistic rules, rather than the entirely certain rules used in purely logical reasoning. Machine learning promises to find rules that are probably correct about most members of the set they concern. Unfortunately, even this does not resolve the entire problem. The no free lunch theorem for machine learning (Wolpert, 1996) states that, averaged over all possible data generating distributions, every classification algorithm has the same error rate when classifying previously unobserved points. In other words, in some sense, no machine learning algorithm is universally any better than any other. The most sophisticated algorithm we can conceive of has the same average performance (over all possible tasks) as merely predicting that every point belongs to the same class. Fortunately, these results hold only when we average over all possible data generating distributions. If we make assumptions about the kinds of probability distributions we encounter in real-world applications, then we can design learning 104
Figure 5.4: This plot shows the effect of the dataset size on the train and test error of the model, as well as on the optimal model capacity. Note that the y-axis is used to show two different values which are not actually comparable—error, and capacity. If we choose a single model with fixed capacity (red), known as a parametric learning algorithm and retrain it with different amounts of training data, then the training error will increase as the size of the training set increases. This is because larger datasets are harder to fit. Simultaneously, the test error will decrease, because fewer incorrect hypotheses will be consistent with the training data. Ultimately the train and test error will converge. If we instead consider a learning algorithm that can adapt its capacity with training set size, then the optimal capacity (black) increases with the number of training examples, and reaches an asymptote which only depends on the required complexity for the task to be learned. This kind of learning algorithm is called non-parametric. The generalization error of the optimal capacity model (green) decreases and approaches an asymptote, called the Bayes error (the error made by an oracle that knows the data generating distribution). Usually the asymptotic error is greater than zero because there is some noise in the true distribution that the model is asked to capture.
algorithms that perform well on these distributions. This means that the goal of machine learning research is not to seek a universal learning algorithm or the absolute best learning algorithm. Instead, our goal is to understand what kinds of distributions are relevant to the “real world” that an AI agent experiences, and what kinds of machine learning algorithms perform well on data drawn from the kinds of data generating distributions we care about.
5.5
Regularization
The no free lunch theorem implies that we must design our machine learning algorithms to perform well on a specific task. We do so by building a set of preferences into the learning algorithm. When these preferences are aligned with the learning problems we ask the algorithm to solve, it performs better. So far, the only method of modifying a learning algorithm we have discussed is to increase or decrease the model’s capacity by adding or removing functions from the hypothesis space of solutions the learning algorithm is able to choose. We gave the specific example of increasing or decreasing the degree of a polynomial for a regression problem. The view we have described so far is oversimplified. The behavior of our algorithm is strongly affected not just by how large we make the set of functions allowed in its hypothesis space, but by the specific identity of those functions. The learning algorithm we have studied so far, linear regression, has a hypothesis space consisting of the set of linear functions of its input. These linear functions can be very useful for problems where the relationship between inputs and outputs truly is close to linear. They are less useful for problems that behave in a very non-linear fashion. For example, linear regression would not perform very well if we tried to use it to predict sin(x) from x. We can thus control the performance of our algorithms by choosing what kind of functions we allow them to draw solutions from, as well as by controlling the amount of these functions. We can also give a learning algorithm a preference for one solution in its hypothesis space to another. This means that both functions are eligible, but one is preferred. The unpreferred solution may only be chosen if it fits the training data significantly better than the preferred solution. For example, we can modify the training criterion for linear regression to include weight decay. To perform linear regression with weight decay, we minimize not only the mean squared error on the training set, but instead a criterion J (w) that expresses a preference for the weights to have smaller squared L2 norm. Specifically, J(w) = MSEtrain + λw >w, where λ is a value chosen ahead of time that controls the strength of our preference 106
for smaller weights. When λ = 0, we impose no preference, and larger λ forces the weights to become smaller. Minimizing J(w) results in a choice of weights that make a tradeoff between fitting the training data and being small. This gives us solutions that have a smaller slope, or put weight on fewer of the features. As an example of how we can control a model’s tendency to overfit or underfit via weight decay, we can train a high-degree polynomial regression model with different values of λ. See Fig. 5.5 for the results.
Figure 5.5: We fit a high-degree polynomial regression model to our example training set from Fig. 5.2. The true function is quadratic, but here we use only models with degree 9. We vary the amount of weight decay to prevent these high-degree models from overfitting. (Left) With very large λ, we can force the model to learn a function with no slope at all. This underfits because it can only represent a constant function. (Center) With a medium value of λ, the learning algorithm recovers a curve with the right general shape. Even though the model is capable of representing functions with much more complicated shape, weight decay has encouraged it to use a simpler function described by smaller coefficients. (Right) With weight decay approaching zero (i.e., using the Moore-Penrose pseudo-inverse to solve the underdetermined problem with minimal regularization), the degree-9 polynomial overfits significantly, as we saw in Fig. 5.2.
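To make the weight decay criterion concrete, here is a hedged Python/NumPy sketch (our own code; because MSE_train is an average, setting the gradient of J(w) to zero under this convention yields (X⊤X + λmI)w = X⊤y — other texts fold the factor of m into λ):

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Minimize (1/m)||Xw - y||^2 + lam * w.T @ w in closed form.

    Setting the gradient to zero gives (X.T X + lam * m * I) w = X.T y.
    """
    m, n = X.shape
    return np.linalg.solve(X.T @ X + lam * m * np.eye(n), X.T @ y)

rng = np.random.RandomState(0)
x = rng.uniform(-1, 1, size=10)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.1 * rng.randn(10)
X = np.vander(x, 10, increasing=True)        # degree-9 polynomial features

for lam in (1e2, 1e-2, 1e-8):                # large, medium, nearly zero weight decay
    w = ridge_fit(X, y, lam)
    print(f"lambda={lam:g}  ||w||^2 = {w @ w:.3g}")
```

Large λ forces the weights (and hence the learned function) toward zero, while a vanishingly small λ recovers the overfitting behavior of Fig. 5.5 (right).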
Expressing preferences for one function over another is a more general way of controlling a model’s capacity than including or excluding members from the hypothesis space. We can think of excluding a function from a hypothesis space as expressing an infinitely strong preference against that function. In our weight decay example, we expressed our preference for linear functions defined with smaller weights explicitly, via an extra term in the criterion we minimize. There are many other ways of expressing preferences for different solutions, both implicitly and explicitly. Together, these different approaches are known as regularization. Regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error. Regularization is one of the central concerns of the field of machine learning, rivalled in its importance only by optimization. The no free lunch theorem has made it clear that there is no best machine learning algorithm, and in particular, no best form of regularization. Instead we
must choose a form of regularization that is well-suited to the particular task we want to solve. The philosophy of deep learning in general and this book in particular is that a very wide range of tasks (such as all of the intellectual tasks that people can do) may all be solved effectively using very general-purpose forms of regularization.
5.6
Hyperparameters, Validation Sets and Cross-Validation
Most machine learning algorithms have several settings that we can use to control the behavior of the learning algorithm. These settings are called hyperparameters. The values of hyperparameters are not adapted by the learning algorithm itself (though we can design a nested learning procedure where one learning algorithm learns the best hyperparameters for another learning algorithm). In the polynomial regression example we saw in Fig. 5.2, there is a single hyperparameter: the degree of the polynomial, which acts as a capacity hyperparameter. The λ value used to control the strength of weight decay is another example of a hyperparameter. Sometimes a setting is chosen to be a hyperparameter that the learning algorithm does not learn because it is difficult to optimize. More frequently, we do not learn the hyperparameter because it is not appropriate to learn that hyperparameter on the training set. This applies to all hyperparameters that control model capacity. If learned on the training set, such hyperparameters would always choose the maximum possible model capacity, resulting in overfitting (refer to Figure 5.3). For example, we can always fit the training set better with a higher degree polynomial, and a weight decay setting of λ = 0.. To solve this problem, we need a validation set of examples that the training algorithm does not observe. Earlier we discussed how a held-out test set, composed of examples coming from the same distribution as the training set, can be used to estimate the generalization error of a learner, after the learning process has completed. It is important that the test examples are not used in any way to make choices about the model, including its hyperparameters. For this reason, no example from the test set can be used in the validation set. For this reason, we always construct the validation set from the training data. Specifically, we split the training data into two disjoint subsets. One of these subsets is used to learn the parameters. The other subset is our validation set, used to estimate the generalization error during or after training, allowing for the hyperparameters to be updated accordingly. The subset of data used to learn the parameters is still typically called the training set, even though this may be confused with the larger pool of data used for the entire training process. 108
The subset of data used to guide the selection of hyperparameters is called the validation set. Since the validation set is used to “train” the hyperparameters, the validation set error will underestimate the test set error, though typically by a smaller amount than the training error. Typically, one uses about 80% of the data for training and 20% for validation. In practice, when the same test set has been used repeatedly to evaluate performance of different algorithms over many years, and especially if we consider all the attempts from the scientific community at beating the reported state-ofthe-art performance on that test set, we end up having optimistic evaluations with the test set as well. Benchmarks can thus become stale and then do not reflect the true field performance of a trained system. Thankfully, the community tends to move on to new (and usually more ambitious and larger) benchmark datasets.
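A rough sketch of the train/validation workflow described above (our own code; the 80/20 split and the candidate hyperparameter values are illustrative choices):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.uniform(-1, 1, size=50)
y = 1.0 + 2.0 * x - 3.0 * x ** 2 + 0.1 * rng.randn(50)

# 80% of the data is used to fit parameters, 20% to select hyperparameters.
perm = rng.permutation(len(x))
train_idx, valid_idx = perm[:40], perm[40:]

def fit(x, y, degree, lam):
    X = np.vander(x, degree + 1, increasing=True)
    return np.linalg.solve(X.T @ X + lam * len(y) * np.eye(degree + 1), X.T @ y)

def mse(x, y, w, degree):
    X = np.vander(x, degree + 1, increasing=True)
    return np.mean((X @ w - y) ** 2)

best = None
for degree in (1, 2, 5, 9):                  # capacity hyperparameter
    for lam in (1e-6, 1e-3, 1e-1):           # weight decay hyperparameter
        w = fit(x[train_idx], y[train_idx], degree, lam)
        err = mse(x[valid_idx], y[valid_idx], w, degree)
        if best is None or err < best[0]:
            best = (err, degree, lam)

print("selected degree=%d, lambda=%g (validation MSE %.4f)" % (best[1], best[2], best[0]))
```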
5.6.1
Cross-Validation
One issue with the idea of splitting the dataset into train/test or train/validation/test subsets is that only a small fraction of examples are used to evaluate generalization. The consequence is that there is a lot of statistical uncertainty around the estimated average test error, making it difficult to claim that algorithm A works better than algorithm B on the given task. With large datasets with hundreds of thousands of examples or more, this is not a serious issue, but when the dataset is too small, there are alternative procedures, which allow one to use all of the examples in the estimation of the mean test error, at the price of increased computational cost. These procedures are based on the idea of repeating the training / testing computation on different randomly chosen subsets or splits of the original dataset. The most common of these is the k-fold cross-validation procedure, in which a partition of the dataset is formed by splitting it into k non-overlapping subsets. Then k train/test splits can be obtained by using the i-th subset as a test set each time and the rest as a training set. The average test error across all these k training/testing experiments can then be reported. One problem is that there exist no unbiased estimators of the variance of such average error estimators (Bengio and Grandvalet, 2004), but approximations are typically used. If model selection or hyperparameter optimization is required, things get more computationally expensive: one can recurse the k-fold cross-validation idea inside the training set. So we can have an outer loop that estimates test error and provides a “training set” for a hyperparameter-free learner, calling it k times to “train”. That hyperparameter-free learner can then split its received training set by k-fold cross-validation into internal training/validation subsets (for example, splitting into k − 1 subsets is convenient, to reuse the same test blocks as the outer loop), call a hyperparameter-specific learner for each choice of hyperparameter
value on each of the training partitions of this inner loop, and compute the validation error by averaging across the k − 1 validation sets the errors made by the k − 1 hyperparameter-specific learners trained on each of the internal training subsets.
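The k-fold procedure itself is short to implement. The following sketch (ours; the trivial constant-prediction learner is only a stand-in for a real learning algorithm) estimates the mean test error by averaging over k non-overlapping folds:

```python
import numpy as np

def k_fold_error(x, y, k, fit_fn, error_fn, rng):
    """Average held-out error over k non-overlapping folds."""
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit_fn(x[train_idx], y[train_idx])
        errors.append(error_fn(model, x[test_idx], y[test_idx]))
    return np.mean(errors), np.std(errors)

# Example usage with a trivial constant-prediction learner.
rng = np.random.RandomState(0)
x = rng.randn(100)
y = 3.0 + 0.5 * rng.randn(100)

fit_fn = lambda x_tr, y_tr: y_tr.mean()                      # "model" is just a mean
error_fn = lambda m, x_te, y_te: np.mean((y_te - m) ** 2)    # squared error on held-out fold

mean_err, std_err = k_fold_error(x, y, k=5, fit_fn=fit_fn, error_fn=error_fn, rng=rng)
print(f"5-fold estimate of test MSE: {mean_err:.3f} (+/- {std_err:.3f} across folds)")
```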
5.7
Estimators, Bias, and Variance
The field of statistics gives us many tools that can be used to achieve the machine learning goal of solving a task not only on the training set but also to generalize. Foundational concepts such as parameter estimation, bias and variance are useful to formally characterize notions of generalization, underfitting and overfitting.
5.7.1
Point Estimation
Point estimation is the attempt to provide the single “best” prediction of some quantity of interest. In general the quantity of interest can be a single parameter or a vector of parameters in some parametric model, such as the weights in our linear regression example in Section 5.2, but it can also be a whole function. In order to distinguish estimates of parameters from their true value, our convention will be to denote a point estimate of a parameter θ by θ̂.
Let {x^(1), . . . , x^(m)} be a set of m independent and identically distributed (i.i.d.) data points. A point estimator is any function of the data:

θ̂_m = g(x^(1), . . . , x^(m)).    (5.2)

In other words, any statistic¹ is a point estimate. Notice that no mention is made of any correspondence between the estimator and the parameter being estimated. There is also no constraint that the range of g(x^(1), . . . , x^(m)) should correspond to that of the true parameter.
This definition of a point estimator is very general and allows the designer of an estimator great flexibility. What distinguishes “just any” function of the data from most of the estimators that are in common usage is their properties.
For now, we take the frequentist perspective on statistics. That is, we assume that the true parameter value θ is fixed but unknown, while the point estimate θ̂ is a function of the data. Since the data is drawn from a random process, any function of the data is random. Therefore θ̂ is a random variable.
Point estimation can also refer to the estimation of the relationship between input and target variables. We refer to these types of point estimates as function estimators.

¹ A statistic is a function of the data, typically of the whole training set, such as the mean.
Function Estimation As we mentioned above, sometimes we are interested in performing function estimation (or function approximation). Here we are trying to predict a variable (or vector) y given an input vector x (also called the covariates). We consider that there is a function f(x) that describes the relationship between y and x. For example, we may assume that y = f(x) + ε, where ε stands for the part of y that is not predictable from x. In function estimation, we are interested in approximating f with a model or estimate f̂. Note that we are really not adding anything new here to our notion of a point estimator: the function estimator f̂ is simply a point estimator in function space. The linear regression example we discussed above in Section 5.2 and the polynomial regression example discussed in Section 5.3 are both examples of function estimation where we estimate a model f̂ of the relationship between an input x and target y.
In the following we will review the most commonly studied properties of point estimators and discuss what they tell us about these estimators. As ˆ θ and fˆ are random variables (or vectors, or functions), they are distributed according to some probability distribution. We refer to this distribution as the sampling distribution. When we discuss properties of the estimator, we are really describing properties of the sampling distribution.
5.7.2
Bias
The bias of an estimator is defined as:

bias(θ̂_m) = E(θ̂_m) − θ    (5.3)

where the expectation is over the data (seen as samples from a random variable) and θ is the true underlying value of θ according to the data generating distribution. An estimator θ̂_m is said to be unbiased if bias(θ̂_m) = 0, i.e., if E(θ̂_m) = θ. An estimator θ̂_m is said to be asymptotically unbiased if lim_{m→∞} bias(θ̂_m) = 0, i.e., if lim_{m→∞} E(θ̂_m) = θ.

Example: Bernoulli Distribution Consider a set of samples {x^(1), . . . , x^(m)} that are independently and identically distributed according to a Bernoulli distribution, x^(i) ∈ {0, 1}, where i ∈ [1, m]. The Bernoulli p.m.f. (probability mass function, or probability function) is given by P(x^(i); θ) = θ^{x^(i)} (1 − θ)^{(1 − x^(i))}. We are interested in knowing if the estimator θ̂_m = (1/m) Σ_{i=1}^{m} x^(i) is biased.

bias(θ̂_m) = E[θ̂_m] − θ
          = E[(1/m) Σ_{i=1}^{m} x^(i)] − θ
          = (1/m) Σ_{i=1}^{m} E[x^(i)] − θ
          = (1/m) Σ_{i=1}^{m} Σ_{x^(i)=0}^{1} x^(i) θ^{x^(i)} (1 − θ)^{(1 − x^(i))} − θ
          = (1/m) Σ_{i=1}^{m} θ − θ
          = θ − θ = 0

Since bias(θ̂) = 0, we say that our estimator θ̂ is unbiased.

Example: Gaussian Distribution Estimator of the Mean Now, consider a set of samples {x^(1), . . . , x^(m)} that are independently and identically distributed according to a Gaussian (Normal) distribution (x^(i) ∼ Gaussian(µ, σ^2), where i ∈ [1, m]). The Gaussian p.d.f. (probability density function) is given by

p(x^(i); µ, σ^2) = (1/√(2πσ^2)) exp(−(x^(i) − µ)^2 / (2σ^2)).

A common estimator of the Gaussian mean parameter is known as the sample mean:

µ̂_m = (1/m) Σ_{i=1}^{m} x^(i)    (5.4)

To determine the bias of the sample mean, we are again interested in calculating its expectation:

bias(µ̂_m) = E[µ̂_m] − µ
          = E[(1/m) Σ_{i=1}^{m} x^(i)] − µ
          = (1/m) Σ_{i=1}^{m} E[x^(i)] − µ
          = (1/m) Σ_{i=1}^{m} µ − µ
          = µ − µ = 0
Thus we find that the sample mean is an unbiased estimator of the Gaussian mean parameter.

Example: Gaussian Distribution Estimators of the Variance Sticking with the Gaussian family of distributions, we consider two different estimators of the variance parameter σ^2. We are interested in knowing if either estimator is biased.
The first estimator of σ^2 we consider is known as the sample variance:

σ̂^2_m = (1/m) Σ_{i=1}^{m} (x^(i) − µ̂_m)^2,    (5.5)

where µ̂_m is the sample mean, defined above. More formally, we are interested in computing

bias(σ̂^2_m) = E[σ̂^2_m] − σ^2.

We now simplify the term E[σ̂^2_m]. Expanding the square and using E[(x^(i))^2] = µ^2 + σ^2 and E[µ̂_m^2] = µ^2 + σ^2/m, we obtain

E[σ̂^2_m] = E[(1/m) Σ_{i=1}^{m} (x^(i) − µ̂_m)^2]
         = E[(1/m) Σ_{i=1}^{m} (x^(i))^2 − µ̂_m^2]
         = (µ^2 + σ^2) − (µ^2 + σ^2/m)
         = ((m − 1)/m) σ^2.

So bias(σ̂^2_m) = −σ^2/m, and therefore the sample variance is a biased estimator.
We now consider a modified estimator of the variance sometimes called the unbiased sample variance:

σ̃^2_m = (1/(m − 1)) Σ_{i=1}^{m} (x^(i) − µ̂_m)^2    (5.6)

As the name suggests this estimator is unbiased, that is, we find that E[σ̃^2_m] = σ^2:

E[σ̃^2_m] = E[(1/(m − 1)) Σ_{i=1}^{m} (x^(i) − µ̂_m)^2]
         = (m/(m − 1)) E[σ̂^2_m]
         = (m/(m − 1)) ((m − 1)/m) σ^2
         = σ^2.

We have two estimators: one is biased and the other is not. While unbiased estimators are clearly desirable, they are not always the “best” estimators. As we will see we often use biased estimators that possess other important properties.
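A quick Monte Carlo check of these two results (our own sketch, not part of the original text): for Gaussian data, the average of σ̂^2_m over many resampled datasets approaches ((m − 1)/m)σ^2, while the average of σ̃^2_m approaches σ^2.

```python
import numpy as np

rng = np.random.RandomState(0)
mu, sigma2, m, trials = 0.0, 4.0, 5, 200000

x = rng.normal(mu, np.sqrt(sigma2), size=(trials, m))
mean_hat = x.mean(axis=1, keepdims=True)

var_biased = ((x - mean_hat) ** 2).sum(axis=1) / m          # sigma_hat^2, Eq. 5.5
var_unbiased = ((x - mean_hat) ** 2).sum(axis=1) / (m - 1)  # sigma_tilde^2, Eq. 5.6

print("E[sigma_hat^2]   ~", var_biased.mean(),   "(theory:", (m - 1) / m * sigma2, ")")
print("E[sigma_tilde^2] ~", var_unbiased.mean(), "(theory:", sigma2, ")")
```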
5.7.3
Variance
Another property of the estimator that we might want to consider is how much we expect it to vary as a function of the data sample. Just as we computed the expectation of the estimator to determine its bias, we can compute its variance:

Var(θ̂) = E[θ̂^2] − E[θ̂]^2    (5.7)

The variance of an estimator provides a measure of how we would expect the estimate we compute from data to vary as we independently resample the dataset from the underlying data generating process. Just as we might like an estimator to exhibit low bias we would also like it to have relatively low variance. We can also define the standard error (se) of the estimator as

se(θ̂) = √(Var[θ̂])    (5.8)

Example: Bernoulli Distribution Let’s once again consider a set of samples {x^(1), . . . , x^(m)} drawn independently and identically from a Bernoulli distribution (recall P(x^(i); θ) = θ^{x^(i)} (1 − θ)^{(1 − x^(i))}). This time we are interested in computing the variance of the estimator θ̂_m = (1/m) Σ_{i=1}^{m} x^(i).

Var(θ̂_m) = Var((1/m) Σ_{i=1}^{m} x^(i))
          = (1/m^2) Σ_{i=1}^{m} Var(x^(i))
          = (1/m^2) Σ_{i=1}^{m} θ(1 − θ)
          = (1/m^2) m θ(1 − θ)
          = (1/m) θ(1 − θ)

Note that the variance of the estimator decreases as a function of m, the number of examples in the dataset. This is a common property of popular estimators that we will return to when we discuss consistency (see Sec. 5.7.5).
Example: Gaussian Distribution Estimators of the Variance We again consider a set of samples {x^(1), . . . , x^(m)} independently and identically distributed according to a Gaussian distribution (x^(i) ∼ Gaussian(µ, σ^2), where i ∈ [1, m]). We now consider the variance of the two estimators of the variance: the sample variance,

σ̂^2_m = (1/m) Σ_{i=1}^{m} (x^(i) − µ̂_m)^2,    (5.9)

and the unbiased sample variance,

σ̃^2_m = (1/(m − 1)) Σ_{i=1}^{m} (x^(i) − µ̂_m)^2.    (5.10)

In order to determine the variance of these estimators we will take advantage of a known relationship between the sample variance and the chi-squared distribution, specifically, that (m − 1) σ̃^2 / σ^2 happens to be χ^2 distributed. We can then use this together with the fact that the variance of a χ^2 random variable with m − 1 degrees of freedom is 2(m − 1):

Var((m − 1) σ̃^2 / σ^2) = 2(m − 1)
((m − 1)^2 / σ^4) Var(σ̃^2) = 2(m − 1)
Var(σ̃^2) = 2σ^4 / (m − 1)

By noticing that σ̂^2 = ((m − 1)/m) σ̃^2, and using σ̃^2’s relationship to the χ^2 distribution, it is straightforward to show that Var(σ̂^2) = 2(m − 1)σ^4 / m^2. To derive this last relation, we used the fact that Var(σ̃^2) = (m/(m − 1))^2 Var(σ̂^2), that is Var(σ̃^2) > Var(σ̂^2). So while the bias of σ̃^2 is smaller than the bias of σ̂^2, the variance of σ̃^2 is greater.

5.7.4
Trading off Bias and Variance and the Mean Squared Error
Bias and variance measure two different sources of error in an estimator. Bias measures the expected deviation from the true value of the function or parameter. Variance, on the other hand, provides a measure of the deviation from the true value that any particular sampling of the data is likely to cause.
What happens when we are given a choice between two estimators, one with more bias and one with less variance? How do we choose between them? For example, let’s imagine that we are interested in approximating the function shown in Fig. 5.2 and we are only offered the choice between a model with large bias and one that suffers from large variance. How do we choose between them?
In machine learning, perhaps the most common and empirically successful way to negotiate this kind of trade-off is cross-validation, discussed in Section 5.6.1. Alternatively, we can also compare the mean squared error (MSE) of the estimates:

MSE = E[(θ̂_m − θ)^2] = Bias(θ̂_m)^2 + Var(θ̂_m)    (5.11)

The MSE measures the overall expected deviation—in a squared error sense—between the estimator and the true value of the parameter θ. As is clear from Eq. 5.11, evaluating the MSE incorporates both the bias and the variance. Desirable estimators are those with small MSE and these are estimators that manage to keep both their bias and variance somewhat in check.
The relationship between bias and variance is tightly linked to the machine learning concepts of capacity, underfitting and overfitting discussed in Section 5.3. In the case where generalization error is measured by the MSE (where bias and variance are meaningful components of generalization error), increasing capacity tends to increase variance and decrease bias. This is illustrated in Figure 5.6, where we see again the U-shaped curve of generalization error as a function of capacity, as in Section 5.3 and Figure 5.3.

Example: Gaussian Distribution Estimators of the Variance In the last section, when we compared the sample variance, σ̂^2, and the unbiased sample variance, σ̃^2, we saw that while σ̂^2 has higher bias, σ̃^2 has higher variance.
Figure 5.6: As capacity increases (x-axis), bias (dotted) decreases and variance (dashed) increases, yielding another U-shaped curve for generalization error (bold curve). If we vary capacity along one axis, there is an optimal capacity, with underfitting when the capacity is below this optimum and overfitting when it is above.
The mean squared error offers a way of balancing the tradeoff between bias and variance and suggests which estimator we might prefer. For σ̂^2, the mean squared error is given by:

MSE(σ̂^2_m) = Bias(σ̂^2_m)^2 + Var(σ̂^2_m)    (5.12)
           = (−σ^2/m)^2 + 2(m − 1)σ^4/m^2    (5.13)
           = ((1 + 2(m − 1))/m^2) σ^4    (5.14)
           = ((2m − 1)/m^2) σ^4    (5.15)

The mean squared error of the unbiased alternative is given by:

MSE(σ̃^2_m) = Bias(σ̃^2_m)^2 + Var(σ̃^2_m)    (5.16)
           = 0 + 2σ^4/(m − 1)    (5.17)
           = 2σ^4/(m − 1).    (5.18)

Comparing the two, we see that the MSE of the unbiased sample variance, σ̃^2_m, is actually higher than the MSE of the (biased) sample variance, σ̂^2_m. This implies that despite incurring bias in the estimator σ̂^2_m, the resulting reduction in variance more than makes up for the difference, at least in a mean squared sense.
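The same simulation idea can be used to estimate the mean squared error of each estimator directly and compare it against Eqs. 5.15 and 5.18 (our own sketch, not part of the original text):

```python
import numpy as np

rng = np.random.RandomState(1)
sigma2, m, trials = 1.0, 5, 500000

x = rng.normal(0.0, np.sqrt(sigma2), size=(trials, m))
centered = x - x.mean(axis=1, keepdims=True)

var_biased = (centered ** 2).sum(axis=1) / m          # sigma_hat^2
var_unbiased = (centered ** 2).sum(axis=1) / (m - 1)  # sigma_tilde^2

mse_biased = np.mean((var_biased - sigma2) ** 2)
mse_unbiased = np.mean((var_unbiased - sigma2) ** 2)

print("MSE(sigma_hat^2)   ~", mse_biased,   "(theory:", (2 * m - 1) / m**2 * sigma2**2, ")")
print("MSE(sigma_tilde^2) ~", mse_unbiased, "(theory:", 2 * sigma2**2 / (m - 1), ")")
```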
5.7.5
Consistency
As we have already discussed, sometimes we may wish to choose an estimator that is biased, for example in order to minimize the variance of the estimator. However, we might still wish that, as the number of data points in our dataset increases, our point estimates converge to the true value of the parameter. More formally, we would like that lim_{n→∞} θ̂_n →^p θ.² This condition is known as consistency³ and ensures that the bias induced by the estimator diminishes as the number of data examples grows.
Asymptotic unbiasedness is not equivalent to consistency. For example, consider estimating the mean parameter µ of a normal distribution N(µ, σ^2), with a dataset consisting of n samples: {x_1, . . . , x_n}. We could use the first sample x_1 of the dataset as an unbiased estimator: θ̂ = x_1. In that case, E(θ̂_n) = θ, so the estimator is unbiased no matter how many data points are seen. This, of course, implies that the estimate is asymptotically unbiased. However, this is not a consistent estimator, as it is not the case that θ̂_n → θ as n → ∞.

² The symbol →^p means that the convergence is in probability, i.e. for any ε > 0, P(|θ̂_n − θ| > ε) → 0 as n → ∞.
³ This is sometimes referred to as weak consistency, with strong consistency referring to the almost sure convergence of θ̂ to θ.
5.8
Maximum Likelihood Estimation
In the previous section we discussed a number of common properties of estimators but we never mentioned where these estimators come from. In this section, we discuss one of the most common approaches to deriving estimators: the maximum likelihood principle.
Consider a set of m independent examples X = (x^(1), . . . , x^(m)) with x^(i) ∼ P(x) independently, where P(x) is the true but unknown data generating distribution. More generally, the data may not need to be sampled independently, so we have a data generating process which produces the sequence X, i.e., X ∼ P(X). Consider a family of probability functions P, parameterized by θ, over the same space, i.e., P(x; θ) maps any configuration x to a real number estimating the true probability P(x), or more generally (in the non-independent case), we have a P(X; θ) that returns the probability of any whole sequence X. The maximum likelihood estimator for θ is then defined as

θ_ML = arg max_θ P(X; θ).    (5.19)
In the i.i.d. scenario, because of the i.i.d. assumptions, we can rewrite

P_θ(X) = ∏_{i=1}^{m} P(x^(i); θ).    (5.20)

Note that sometimes we make our model assume that the examples are i.i.d. even though we know they are not, because it simplifies the model (which sometimes means that better generalization can be achieved), so this is a modeling choice and not necessarily an assumption on the true data generating process. Combining the above two equations and noting that the logarithm of the arg max is the arg max of the logarithm, we obtain the ordinary maximum likelihood estimator under a model that assumes i.i.d. examples:

θ_ML = arg max_θ Σ_{i=1}^{m} log P(x^(i); θ).    (5.21)

This formulation is convenient because it corresponds to an objective function that is additive in the examples, something that is exploited in numerical optimization methods such as stochastic gradient descent (Section 8.3.2), which is heavily used for deep learning. In practice, we will often use numerical optimization to approximately maximize the likelihood, so we will not have the true maximum likelihood estimator, but something that approximates it.
There is an interesting connection between the objective function for maximum likelihood, on the right hand side of Eq. 5.21, and the notion of KL divergence introduced in Section 3.9 with Eq. 3.3. The KL divergence compares a candidate distribution Q with a target distribution P. If we replace Q by the empirical distribution, we obtain the average negative log-likelihood plus the entropy of Q, which is a constant as far as P is concerned:

D_KL(Q ‖ P) = E_{x∼Q}[ log (Q(x) / P(x)) ]    (5.22)
            = E_{x∼Q}[− log P(x)] + E_{x∼Q}[log Q(x)]    (5.23)
            = E_{x∼Q}[− log P(x)] − H[Q]    (5.24)
            = E_{x∼Q̂}[− log P(x)] − H[Q̂]    (5.25)
            = −(1/m) Σ_{i=1}^{m} log P(x^(i)) − log m    (5.26)

where Eq. 5.22 is the definition of KL divergence, Eq. 5.23 splits it into two terms, the first one being the cross entropy between Q and P (with Q as the reference) and the second one being the entropy of Q (see Eq. 3.2). Eq. 5.25 is then obtained by taking Q as the empirical distribution Q̂ (Eq. 3.4), and we obtain Eq. 5.26, which is
the average negative log-likelihood plus a constant. Hence maximizing likelihood is minimizing the cross entropy between the empirical distribution and the model as well as minimizing the KL divergence between these two distributions. Note that when we make Q the data generating distribution, we obtain the generalization or expected negative log-likelihood.
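As a tiny numerical illustration of Eq. 5.21 (our own sketch; the Bernoulli model and the grid search are illustrative choices, not the book’s example), maximizing the average log-likelihood over a grid of candidate θ values recovers, up to the grid resolution, the familiar closed-form estimate, the sample mean:

```python
import numpy as np

rng = np.random.RandomState(0)
x = (rng.uniform(size=30) < 0.7).astype(float)   # Bernoulli(0.7) samples

thetas = np.linspace(0.01, 0.99, 99)
# Average log-likelihood (1/m) sum_i log P(x_i; theta) for each candidate theta.
avg_ll = np.array([np.mean(x * np.log(t) + (1 - x) * np.log(1 - t)) for t in thetas])

theta_ml = thetas[np.argmax(avg_ll)]
print("grid maximum-likelihood estimate:", theta_ml)
print("closed-form MLE (sample mean)   :", x.mean())
```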
5.8.1
Conditional Log-Likelihood and Mean Squared Error
The maximum likelihood estimator can readily be generalized to the case where our goal is not to estimate a probability function but rather a conditional probability, e.g., P(y | x; θ), to predict y given x. This is actually the most common situation where we do supervised learning (Section 5.10), i.e., the examples are pairs (x, y). If X represents all our inputs and Y all our observed targets, then the conditional maximum likelihood estimator is

θ_ML = arg max_θ P(Y | X; θ).    (5.27)

If the examples are assumed to be i.i.d., then this can be decomposed into

θ_ML = arg max_θ Σ_{i=1}^{m} log P(y^(i) | x^(i); θ).    (5.28)
Example: Linear Regression Let us consider as an example the special case of linear regression, introduced earlier in Section 5.2. In that case, the conditional density of y, given x = x, is a Gaussian with mean µ(x) that is a learned function of x, with unconditional variance σ^2. Since the examples are assumed to be i.i.d., the conditional log-likelihood (Eq. 5.27) becomes

log P(Y | X; θ) = Σ_{i=1}^{m} log P(y^(i) | x^(i); θ)
                = −(1/(2σ^2)) Σ_{i=1}^{m} ||ŷ^(i) − y^(i)||^2 − m log σ − (m/2) log(2π)

where ŷ^(i) = µ(x^(i)) is the output of the linear regression on the i-th input x^(i) and m is the dimension of the y vectors. Comparing the above with the mean squared error (Section 5.2) we immediately see that if σ is fixed, maximizing the above is equivalent (up to an additive and a multiplicative constant that do not change the value of the optimal parameter) to minimizing the training set mean squared error, i.e.,

MSE_train = (1/m) Σ_{i=1}^{m} ||ŷ^(i) − y^(i)||^2.

Note that the MSE is an average rather than a sum, which is more practical from a numerical point of view (so you can compare MSEs of sets of different sizes
more easily). In practice, researchers reporting log-likelihoods and conditional log-likelihoods also tend to report the per-example average log-likelihood, for the very same reason. The exponential of the negative average log-likelihood is also called the perplexity and is used in language modeling applications. Whereas in the case of linear regression we have µ(x) = w·x, the above applies equally to other forms of regression, e.g., with a neural network whose output µ(x) predicts the expected value of y given x.
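This equivalence is easy to check numerically. In the sketch below (ours; the data, the single weight being searched over, and the fixed σ are arbitrary), the weight that minimizes the training MSE is the same one that maximizes the Gaussian conditional log-likelihood:

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(50)
y = 2.0 * x + 0.3 * rng.randn(50)
sigma = 0.3                                   # fixed conditional standard deviation
m = len(x)

ws = np.linspace(0.0, 4.0, 401)
mse = np.array([np.mean((w * x - y) ** 2) for w in ws])
loglik = np.array([-np.sum((w * x - y) ** 2) / (2 * sigma**2)
                   - m * np.log(sigma) - m / 2 * np.log(2 * np.pi) for w in ws])

print("w minimizing training MSE   :", ws[np.argmin(mse)])
print("w maximizing log-likelihood :", ws[np.argmax(loglik)])  # the same value of w
```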
5.8.2
Properties of Maximum Likelihood
The main appeal of the maximum likelihood estimator is that it can be shown to be the best estimator asymptotically, as the number of examples m → ∞, in terms of its rate of convergence as m increases. The maximum likelihood estimator has the property of consistency (see Sec. 5.7.5 above), i.e., as more training examples are considered, the estimator converges to the best one in some sense. There are other inductive principles besides the maximum likelihood estimator, many of which share the property of being consistent estimators. However, there is the question of how many training examples one needs to achieve a particular generalization error, or equivalently what estimation error one gets for a given number of training examples, also called efficiency. This is typically studied in the parametric case (like in linear regression) where our goal is to estimate the value of a parameter (and assuming it is possible to identify the true parameter), not the value of a function. A way to measure how close we are to the true parameter is by the expected mean squared error, computing the squared difference between the estimated and true parameter values, where the expectation is over m training samples from the data generating distribution. That parametric mean squared error decreases as m increases, and for m large, the Cramér-Rao lower bound (Rao, 1945; Cramér, 1946) shows that no consistent estimator has a lower mean squared error than the maximum likelihood estimator. For these reasons (consistency and efficiency), the maximum likelihood induction principle is often considered the preferred one in machine learning, modulo slight adjustments such as described in the next section, to better deal with the non-asymptotic case where the number of examples is small enough to yield overfitting behavior.
5.9 Bayesian Statistics and Prior Probability Distributions
So far we have discussed approaches based on estimating a single value of θ, then making all predictions thereafter based on that one estimate. Another approach is
to consider all possible values of θ when making a prediction. Bayesian statistics provides a natural and theoretically elegant way to carry out this approach. Historically, statistics has become divided between two communities. One of these communities is known as frequentist statistics or orthodox statistics. The other is known as Bayesian statistics. The difference is mainly one of world view but can have important practical implications. As discussed in Sec. 5.7.1, the frequentist perspective is that the true parameter value θ is fixed but unknown, while the point estimate θ̂ is a random variable on account of it being a function of the data (which are seen as random). The Bayesian perspective on statistics is quite different and, in some sense, more intuitive. The Bayesian uses probability to reflect degrees of certainty of states of knowledge. The data is directly observed and so is not random. On the other hand, the true parameter θ is unknown or uncertain and thus is represented as a random variable. Before observing the data, we represent our knowledge of θ using the prior probability distribution, p(θ) (sometimes referred to as simply “the prior”). Generally, the prior distribution is quite broad (i.e. with high entropy) to reflect a high degree of uncertainty in the value of θ before observing any data. For example, we might assume a priori that θ lies in some finite range or volume, with a uniform distribution. Many priors instead reflect a preference for “simpler” solutions (such as smaller magnitude coefficients, or a function that is closer to being constant). Now consider that we have a set of data samples {x^(1), . . . , x^(m)}. We can recover the effect of data on our belief about θ by combining the data likelihood p(x^(1), . . . , x^(m) | θ) with the prior via Bayes' rule:

p(\theta \mid x^{(1)}, \ldots, x^{(m)}) = \frac{p(x^{(1)}, \ldots, x^{(m)} \mid \theta) \, p(\theta)}{p(x^{(1)}, \ldots, x^{(m)})}    (5.29)
If the data is at all informative about the value of θ, the posterior distribution p(θ | x^(1), . . . , x^(m)) will have less entropy (will be more “peaky”) than the prior p(θ). Relative to maximum likelihood estimation, Bayesian estimation offers two important differences. First, unlike the maximum likelihood point estimate of θ, the Bayesian makes decisions with respect to a full distribution over θ. For example, after observing m examples, the predicted distribution over the next data sample, x^(m+1), is given by

p(x^{(m+1)} \mid x^{(1)}, \ldots, x^{(m)}) = \int p(x^{(m+1)} \mid \theta) \, p(\theta \mid x^{(1)}, \ldots, x^{(m)}) \, d\theta    (5.30)

Here each value of θ with positive probability density contributes to the prediction of the next example, with the contribution weighted by the posterior density itself.
After having observed {x(1) , . . . , x (m)}, if we are still quite uncertain about the value of θ, then this uncertainty is incorporated directly into any predictions we might make. In Sec. 5.7, we discussed how the frequentist statistics addresses the uncertainty in a given point estimator of θ by evaluating its variance. The variance of the estimator is an assessment of how the estimate might change will alternative samplings of the observed (or training) data. The Bayesian answer to the question of how to deal with the uncertainty in the estimator is to simply integrate over it, which tends to protect well against overfitting. The second important difference between the Bayesian approach to estimation and the Maximum Likelihood approach is due to the contribution of the Bayesian prior distribution. The prior has an influence by shifting probability mass density towards regions of the parameter space that are preferred a priori. In practice, the prior often expresses a preference for models that are simpler or more smooth. One important effect of the prior is to actually reduce the uncertainty (or entropy) in the posterior density over θ. We have already noted that combining the prior, p(θ), with the data likelihood p(x (1), . . . , x(m) | θ) results in a distribution that is less entropic (more peaky) than the prior. This is just the result of a basic property of probability distributions: Entropy(product of two densities) ≤ Entropy(either density). This implies that the posterior density on θ is also less entropic than the data likelihood alone (when viewed and normalized as a density over θ). The hypothesis space with the Bayesian approach is, to some extent, more constrained than that with an ML approach. Thus we expect a contribution of the prior to be a further reduction in overfitting as compared to ML estimation. Example: Linear Regression Here we consider the Bayesian estimation approach to learning the linear regression parameters. In linear regression, we learn a linear mapping from an input vector x ∈ R n to predict the value of a scalar y ∈ R. The prediction is parametrized by the vector w ∈ R n : yˆ = w> x.
Given a set of m training samples (X^(train), y^(train)), we can express the prediction of y over the entire training set as ŷ^(train) = X^(train) w. Expressed as a Gaussian conditional distribution on y^(train), we have

p(y^{(train)} \mid X^{(train)}, w) = \mathcal{N}(y^{(train)}; X^{(train)} w, I) \propto \exp\left( -\frac{1}{2} (y^{(train)} - X^{(train)} w)^\top (y^{(train)} - X^{(train)} w) \right),
where we will follow the standard MSE formulation in assuming that the Gaussian variance on y is one. In what follows, to reduce the notational burden, we refer to (X^(train), y^(train)) as simply (X, y). To determine the posterior distribution over the model parameter vector w, we first need to specify a prior distribution. The prior should reflect our naive belief about the value of these parameters. While it is sometimes difficult or unnatural to express our prior beliefs in terms of the parameters of the model, in practice we typically assume a fairly broad distribution expressing a high degree of uncertainty about θ in our prior belief. For real-valued parameters it is common to use a Gaussian as a prior distribution:

p(w) = \mathcal{N}(w; \mu_0, \Lambda_0) \propto \exp\left( -\frac{1}{2} (w - \mu_0)^\top \Lambda_0^{-1} (w - \mu_0) \right)

where µ_0 and Λ_0 are the prior distribution mean vector and covariance matrix, respectively (unless there is a reason to assume a particular covariance structure, we typically assume a diagonal covariance matrix Λ_0 = diag(λ_0)). With the prior thus specified, we can now proceed in determining the posterior distribution over the model parameters:

p(w \mid X, y) \propto p(y \mid X, w) \, p(w)
\propto \exp\left( -\frac{1}{2} (y - Xw)^\top (y - Xw) \right) \exp\left( -\frac{1}{2} (w - \mu_0)^\top \Lambda_0^{-1} (w - \mu_0) \right)
\propto \exp\left( -\frac{1}{2} \left( -2 y^\top X w + w^\top X^\top X w + w^\top \Lambda_0^{-1} w - 2 \mu_0^\top \Lambda_0^{-1} w \right) \right).

We now make the substitutions Λ_m = (X^⊤X + Λ_0^{-1})^{-1} and µ_m = Λ_m (X^⊤y + Λ_0^{-1} µ_0) into the derivation of the posterior (and complete the square) to get:

p(w \mid X, y) \propto \exp\left( -\frac{1}{2} (w - \mu_m)^\top \Lambda_m^{-1} (w - \mu_m) + \frac{1}{2} \mu_m^\top \Lambda_m^{-1} \mu_m \right)    (5.31)
\propto \exp\left( -\frac{1}{2} (w - \mu_m)^\top \Lambda_m^{-1} (w - \mu_m) \right).    (5.32)

In the above, we have dropped all terms that do not include the parameter vector w. In Eq. 5.32, we recognize that the posterior distribution has the form of a Gaussian distribution with mean vector µ_m and covariance matrix Λ_m. It is interesting to note that this justifies our dropping all terms unrelated to w, since we know that the posterior distribution must be normalized and, as a Gaussian,
we know what that normalization constant must be (where n is the dimension of the input):

p(w \mid X^{(train)}, y^{(train)}) = \frac{1}{\sqrt{(2\pi)^n |\Lambda_m|}} \exp\left( -\frac{1}{2} (w - \mu_m)^\top \Lambda_m^{-1} (w - \mu_m) \right).    (5.33)
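A minimal sketch of these posterior updates in NumPy (the function and variable names are hypothetical, and unit noise variance is assumed as above):

```python
import numpy as np

def bayesian_linear_regression_posterior(X, y, mu0, Lambda0):
    # Posterior N(w; mu_m, Lambda_m) for y = Xw + noise, with noise ~ N(0, I).
    Lambda0_inv = np.linalg.inv(Lambda0)
    Lambda_m = np.linalg.inv(X.T @ X + Lambda0_inv)     # posterior covariance
    mu_m = Lambda_m @ (X.T @ y + Lambda0_inv @ mu0)     # posterior mean
    return mu_m, Lambda_m

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 2))
y = X @ np.array([0.7, -1.2]) + rng.normal(size=50)
mu_m, Lambda_m = bayesian_linear_regression_posterior(
    X, y, mu0=np.zeros(2), Lambda0=np.eye(2))
```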
5.9.1 Maximum A Posteriori (MAP) Estimation
While, in principle, we can use the full Bayesian posterior distribution over the parameter θ as our estimate of this parameter, it is still often desirable to have a single point estimate (for example, most operations involving the Bayesian posterior for most interesting models are intractable and must be heavily approximated). Rather than simply returning to the maximum likelihood estimate, we can still gain some of the benefit of the Bayesian approach by allowing the prior to influence the choice of the point estimate. One rational way to do this is to choose the maximum a posteriori (MAP) point estimate. The MAP estimate chooses the point of maximal posterior probability (or maximal probability density in the more common case of continuous θ):

\theta_{MAP} = \arg\max_\theta \, p(\theta \mid x) = \arg\max_\theta \, \log p(x \mid \theta) + \log p(\theta)    (5.34)
We recognize, above on the right hand side, log p(x | θ), i.e. the standard log-likelihood term, and log p(θ), corresponding to the prior distribution. As discussed above, the advantage brought by introducing the influence of the prior on the MAP estimate is to leverage information other than that contained in the training data. This additional information helps to reduce the variance in the MAP point estimate (in comparison to the ML estimate). However, it does so at the price of increased bias.

Example: Regularized Linear Regression We discussed above the Bayesian approach to linear regression. Given a set of m training samples of input-output pairs (X^(train), y^(train)), we can express the prediction of y over the entire training set as ŷ^(train) = X^(train) w, where the prediction is parametrized by the vector w ∈ R^n. Recall from Sec. 5.8.1 that the maximum likelihood estimate for the model parameters is given by:

\hat{w}_{ML} = (X^{(train)\top} X^{(train)})^{-1} X^{(train)\top} y^{(train)}    (5.35)
For the sake of comparison to the maximum likelihood solution, we will make the simplifying assumption that the prior covariance matrix is scalar: Λ_0 = λ_0 I. As mentioned previously, in practice this is a very common form of prior distribution. We will also assume that µ_0 = 0. This is also a very common assumption in practice and corresponds to acknowledging that, a priori, we do not know whether the features of x have a positive or negative correlation with y. With these assumptions, the MAP estimate of the model parameters (corresponding to the mean of the Gaussian posterior density in Eq. 5.32) becomes

\hat{w}_{MAP} = \Lambda_m X^{(train)\top} y^{(train)}    (5.36)

where µ_0 and Λ_0 are the prior mean and covariance respectively, and Λ_m is the posterior covariance, given by

\Lambda_m = \left( X^{(train)\top} X^{(train)} + \lambda_0^{-1} I \right)^{-1}.    (5.37)

Comparing Eqs. 5.35 and 5.36, we see that the MAP estimate amounts to a weighted combination of the prior maximum probability value µ_0 and the ML estimate. As the variance of the prior distribution tends to infinity, the MAP estimate reduces to the ML estimate. As the variance of the prior tends to zero, the MAP estimate tends to zero (actually it tends to µ_0, which here is assumed to be zero). We can make the model capacity tradeoff between the ML estimate and the MAP estimate more explicit by analyzing the bias and variance of these estimates. It is relatively easy to show that the ML estimate is unbiased, i.e. that E[ŵ_ML] = w, and that it has variance

\mathrm{Var}(\hat{w}_{ML}) = \left( X^{(train)\top} X^{(train)} \right)^{-1}.    (5.38)
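To see Eqs. 5.35-5.37 side by side numerically, here is a small sketch (illustrative synthetic data and names): the MAP estimate approaches the ML estimate when the prior variance λ_0 is large, and shrinks toward the prior mean (zero here) when λ_0 is small.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(30, 4))
y = X @ rng.normal(size=4) + rng.normal(size=30)

w_ml = np.linalg.solve(X.T @ X, X.T @ y)                # Eq. 5.35

def w_map(lambda0):
    # Eqs. 5.36-5.37 with prior mean zero and prior covariance lambda0 * I.
    Lambda_m = np.linalg.inv(X.T @ X + (1.0 / lambda0) * np.eye(X.shape[1]))
    return Lambda_m @ X.T @ y

print(np.linalg.norm(w_map(1e6) - w_ml))   # large prior variance: close to ML
print(np.linalg.norm(w_map(1e-6)))         # tiny prior variance: close to zero
```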
In order to derive the bias of the MAP estimate, we need to calculate the expectation:

E[\hat{w}_{MAP}] = E[\Lambda_m X^{(train)\top} y^{(train)}]
= E\left[ \Lambda_m X^{(train)\top} \left( X^{(train)} w + \epsilon \right) \right]
= \Lambda_m X^{(train)\top} X^{(train)} w + \Lambda_m X^{(train)\top} E[\epsilon]
= \left( X^{(train)\top} X^{(train)} + \lambda_0^{-1} I \right)^{-1} X^{(train)\top} X^{(train)} w,    (5.39)

where ε = y^(train) − X^(train) w is the Gaussian noise term, with E[ε] = 0 and Var(ε) = I.
We see that while the expected value of the ML estimate is the true parameter value w (i.e. the parameters that we assume generated the data), the expected value of the MAP estimate is a weighted average of w and the prior mean µ_0. We compute the bias as:

\mathrm{Bias}(\hat{w}_{MAP}) = E[\hat{w}_{MAP}] - w = -\left( \lambda_0 X^{(train)\top} X^{(train)} + I \right)^{-1} w.
Since the bias is not zero, we can conclude that the MAP estimate is biased, and as expected we can see that as the variance of the prior λ_0 → ∞, the bias tends to zero. As the variance of the prior λ_0 → 0, the bias tends to −w (the estimate itself tends to µ_0 = 0). In order to compute the variance, we use the identity Var(θ̂) = E[θ̂ θ̂^⊤] − E[θ̂] E[θ̂]^⊤. So before computing the variance we need to compute E[ŵ_MAP ŵ_MAP^⊤]:

E[\hat{w}_{MAP} \hat{w}_{MAP}^\top] = E\left[ \Lambda_m X^{(train)\top} y^{(train)} y^{(train)\top} X^{(train)} \Lambda_m \right]
= E\left[ \Lambda_m X^{(train)\top} \left( X^{(train)} w + \epsilon \right) \left( X^{(train)} w + \epsilon \right)^\top X^{(train)} \Lambda_m \right]
= \Lambda_m X^{(train)\top} X^{(train)} w w^\top X^{(train)\top} X^{(train)} \Lambda_m + \Lambda_m X^{(train)\top} E[\epsilon \epsilon^\top] X^{(train)} \Lambda_m
= \Lambda_m X^{(train)\top} X^{(train)} w w^\top X^{(train)\top} X^{(train)} \Lambda_m + \Lambda_m X^{(train)\top} X^{(train)} \Lambda_m
= E[\hat{w}_{MAP}] E[\hat{w}_{MAP}]^\top + \Lambda_m X^{(train)\top} X^{(train)} \Lambda_m.

With E[ŵ_MAP ŵ_MAP^⊤] thus computed, the variance of the MAP estimate of our linear regression model is given by:

\mathrm{Var}(\hat{w}_{MAP}) = E[\hat{w}_{MAP} \hat{w}_{MAP}^\top] - E[\hat{w}_{MAP}] E[\hat{w}_{MAP}]^\top
= \Lambda_m X^{(train)\top} X^{(train)} \Lambda_m
= \left( X^{(train)\top} X^{(train)} + \lambda_0^{-1} I \right)^{-1} X^{(train)\top} X^{(train)} \left( X^{(train)\top} X^{(train)} + \lambda_0^{-1} I \right)^{-1}.    (5.40)
It is perhaps difficult to compare Eqs. 5.38 and 5.40. But if we assume that w is one-dimensional (along with x), it becomes a bit easier to see that, as long as λ_0 is bounded, then

\mathrm{Var}(\hat{w}_{ML}) = \frac{1}{\sum_{i=1}^m x_i^2} > \mathrm{Var}(\hat{w}_{MAP}) = \frac{\lambda_0^2 \sum_{i=1}^m x_i^2}{\left( 1 + \lambda_0 \sum_{i=1}^m x_i^2 \right)^2}.

From the above analysis we can see that the role of the prior in the MAP estimate is to trade increased bias for a reduction in variance. The goal, of
course, is to try to avoid overfitting. The incurred bias is a consequence of the reduction in model capacity caused by limiting the space of hypotheses to those with significant probability density under the prior. Many regularized estimation strategies, such as maximum likelihood learning regularized with weight decay, can be interpreted as making the MAP approximation to Bayesian inference. This view applies when the regularization consists of adding an extra term to the objective function that corresponds to log p(θ). Not all such regularizer terms correspond to MAP Bayesian inference. For example, some regularizer terms may not be the logarithm of a probability distribution. Other regularization terms depend on the data, which of course a prior probability distribution is not allowed to do.
5.10 Supervised Learning
Supervised learning algorithms are, roughly speaking, learning algorithms that learn to associate some input with some output, given a training set of examples of inputs x and outputs y.
5.10.1 Probabilistic Supervised Learning
Most supervised learning algorithms in this book are based on estimating a probability distribution p(y | x). We can do this simply by using maximum conditional likelihood estimation (Sec. 5.8.1) – or just maximum likelihood for short – to find the best parameter vector θ for a parametric family of distributions p(y | x; θ). We have already seen that linear regression corresponds to the family p(y | x; θ) = N(y; θ^⊤x, I). We can generalize linear regression to the classification scenario by defining a different family of probability distributions. If we have two classes, class 0 and class 1, then we need only specify the probability of one of these classes. The probability of class 1 determines the probability of class 0, because these two values must add up to 1. The normal distribution over real-valued numbers that we used for linear regression is parameterized in terms of a mean. Any value we supply for this mean is valid. A distribution over a binary variable is slightly more complicated, because its mean must always be between 0 and 1. One way to solve this problem is to use the logistic sigmoid function to squash the output of the linear function into the interval (0, 1) and interpret that value as a probability: p(y = 1 | x; θ) = σ(θ^⊤x). This approach is known as logistic regression (a somewhat strange name since we use the model for classification rather than regression).
In the case of linear regression, we were able to find the optimal weights by solving the normal equations. Logistic regression is somewhat more difficult. There is no closed-form solution for its optimal weights. Instead, we must search for them by maximizing the log-likelihood. We can do this by minimizing the negative log-likelihood (NLL) using gradient descent. This same strategy can be applied to essentially any supervised learning problem, by writing down a parametric family of conditional probability distributions over the right kind of input and output variables.
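A minimal sketch of this recipe for logistic regression (the synthetic data, learning rate and iteration count are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 2))
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(float)    # synthetic binary labels

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

theta = np.zeros(2)
learning_rate = 0.1
for _ in range(500):
    p = sigmoid(X @ theta)                  # p(y = 1 | x; theta)
    grad_nll = X.T @ (p - y) / len(y)       # gradient of the average NLL
    theta -= learning_rate * grad_nll

accuracy = np.mean((sigmoid(X @ theta) > 0.5) == y)
```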
5.10.2 Support Vector Machines
One of the most influential approaches to supervised learning is the support vector machine (Boser et al., 1992; Cortes and Vapnik, 1995). This model is similar to logistic regression in that it is driven by a linear function w^⊤x + b. Unlike logistic regression, the support vector machine does not provide probabilities, but only outputs a class identity. One key innovation associated with support vector machines is the kernel trick. The kernel trick consists of observing that many machine learning algorithms can be written exclusively in terms of dot products between examples. For example, it can be shown that the linear function used by the support vector machine can be re-written as

w^\top x + b = b + \sum_{i=1}^m \alpha_i x^\top x^{(i)}

where x^(i) is a training example and α is a vector of coefficients. Rewriting the learning algorithm this way allows us to replace x by the output of a given feature function φ(x) and the dot product with a function k(x, x^(i)) = φ(x)^⊤φ(x^(i)) called a kernel. We can then make predictions using the function

f(x) = b + \sum_i \alpha_i k(x, x^{(i)}).    (5.41)
This function is linear in the space that φ maps to, but non-linear as a function of x. The kernel trick is powerful for two reasons. First, it allows us to learn models that are non-linear as a function of x using convex optimization techniques that are guaranteed to converge efficiently. This is only possible because we consider φ fixed and only optimize α, i.e., the optimization algorithm can view the decision function as being linear in a different space. Second, the kernel function k need not be implemented in terms of explicitly applying the φ mapping and then applying
the dot product. The dot product in φ space might be equivalent to a nonlinear but computationally less expensive operation in x space. For example, we could design an infinite-dimensional feature mapping φ(x) over the non-negative integers. Suppose that this mapping returns a vector containing x ones followed by infinitely many zeros. Explicitly constructing this mapping, or taking the dot product between two such vectors, costs infinite time and memory. But we can write a kernel function k(x, x^(i)) = min(x, x^(i)) that is exactly equivalent to this infinite-dimensional dot product. The most commonly used kernel is the Gaussian kernel

k(u, v) = \mathcal{N}(u - v; 0, \sigma^2 I)    (5.42)

where N(x; µ, Σ) is the standard normal density. This kernel corresponds to a dot product k(u, v) = φ(u)^⊤φ(v) in an infinite-dimensional feature space φ and also has an interpretation as a similarity function, acting like a kind of template matching. Support vector machines are not the only algorithm that can be enhanced using the kernel trick. Many linear models can be enhanced in this way. This category of algorithms is known as kernel machines or kernel methods. A major drawback to kernel machines is that the cost of learning the α coefficients is quadratic in the number of training examples. A related problem is that the cost of evaluating the decision function is linear in the number of training examples, because the i-th example contributes a term α_i k(x, x^(i)) to the decision function. Support vector machines are able to mitigate this by learning an α vector that contains mostly zeros. Classifying a new example then requires evaluating the kernel function only for the training examples that have non-zero α_i. These training examples are known as support vectors. Another major drawback of common kernel machines (such as those using the Gaussian kernel) is more statistical and regards their difficulty in generalizing to complex variations far from the training examples, as discussed in Section 5.13. The analysis of the statistical limitations of support vector machines with general-purpose kernels like the Gaussian kernel actually motivated the rebirth of neural networks through deep learning. Support vector machines and other kernel machines have often been viewed as a competitor to deep learning (though some deep networks can in fact be interpreted as support vector machines with learned kernels). The current deep learning renaissance began when deep networks were shown to outperform support vector machines on the MNIST benchmark dataset (Hinton et al., 2006). One of the main reasons for the current popularity of deep learning relative to support vector machines is the fact that the cost of training kernel machines usually scales quadratically with the number of examples in the training set. For a deep network of fixed size, the memory cost of training is constant with respect to training set size (except for the memory
needed to store the examples themselves) and the runtime of a single pass through the training set is linear in training set size. These asymptotic results meant that kernelized SVMs dominated while datasets were small, but deep models currently dominate now that datasets are large.
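As an illustration, here is a minimal sketch of a decision function of the form of Eq. 5.41 with a Gaussian kernel. The coefficients α and the bias b are set by hand rather than learned, and the unnormalized exponential is used in place of the full Gaussian density (the missing constant factor can be absorbed into α).

```python
import numpy as np

def gaussian_kernel(u, v, sigma=1.0):
    # Proportional to N(u - v; 0, sigma^2 I); the normalization constant is omitted.
    return np.exp(-np.sum((u - v) ** 2) / (2 * sigma ** 2))

def decision_function(x, X_train, alpha, b, sigma=1.0):
    # f(x) = b + sum_i alpha_i k(x, x^(i))   (Eq. 5.41)
    # Only the support vectors (non-zero alpha_i) contribute.
    return b + sum(a * gaussian_kernel(x, xi, sigma)
                   for a, xi in zip(alpha, X_train) if a != 0.0)

rng = np.random.default_rng(5)
X_train = rng.normal(size=(10, 2))
alpha = np.zeros(10)
alpha[[1, 4, 7]] = [0.5, -1.0, 0.8]       # sparse coefficients: three support vectors
print(decision_function(np.array([0.0, 0.0]), X_train, alpha, b=0.1))
```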
5.11 Unsupervised Learning
Whereas supervised learning is geared at a very specific task such as predicting a variable y given a variable x from the observed data, unsupervised learning tries to extract more general-purpose statistical structure from the data. In fact, several more advanced forms of unsupervised learning can be thought of as the rather general task of extracting all the possible information from the observed data. Let us call the observed data z (which could correspond to a pair (x, y), or maybe to observing just x alone), corresponding to the joint observation of many individual variables z_1, z_2, z_3, . . .; some unsupervised learning procedures then basically amount to learning what it takes to be able to predict any subset of the variables given any other subset. Unsupervised learning can also be seen as the types of learning algorithms that extract potentially useful information from inputs x alone, without any prespecified label y. The objective is then to use the extracted information later, for some supervised task involving the prediction of y given x. This would be a form of semi-supervised learning, in which we combine unlabeled examples (with only examples of x) with labeled examples (with (x, y) pairs).

Learning a representation of data A classic unsupervised learning task is to find the “best” representation of the data. By “best” we can mean different things, but generally speaking we are looking for a representation that preserves as much information about x as possible while obeying some penalty or constraint aimed at keeping the representation simpler or more accessible than x itself. There are multiple ways of defining a simpler representation; some of the most common include lower-dimensional representations, sparse representations and independent representations. Low-dimensional representations attempt to compress as much information about x as possible in a smaller representation. Sparse representations generally embed the dataset into a high-dimensional representation (often an over-complete one, whose dimension is greater than the original dimensionality of the data) where the number of non-zero entries is small. This results in an overall structure of the representation that tends to distribute data along the axes of the representation space. Independent representations attempt to disentangle the sources of variation underlying the data distribution such that the
dimensions of the representation are statistically independent. Of course these three criteria are certainly not mutually exclusive. Low-dimensional representations often yield elements that are more-or-less mutually independent. This happens because the pressure to encode as much information about the data x as possible into a low-dimensional representation drives the elements of this representation to be more independent. Any dependency between the variables in f(x) is evidence of redundancy and implies that the representation f(x) could have captured more information about x. The notion of representation is one of the central themes of deep learning and therefore one of the central themes in this book. Chapter 16 discusses some of the qualities we would like in our learned representations, along with specific representation learning algorithms more powerful than the simple one presented next, Principal Components Analysis.
5.11.1 Principal Components Analysis
In the remainder of this section we will consider one of the most widely used unsupervised learning methods: Principal Components Analysis (PCA). PCA is an orthogonal, linear transformation of the data that projects it into a representation where the elements are uncorrelated (shown in Figure 5.7).
Figure 5.7: Illustration of the data representation learned via PCA. The left panel shows data in the original coordinates (x_1, x_2); the right panel shows the same data mapped by Z = XW into the decorrelated coordinates (z_1, z_2).
In section 2.12, we saw that we could learn a one-dimensional representation that best reconstructs the original data (in the sense of mean squared error) and that this representation actually corresponds to the first principal component of the data. Thus we can use PCA as a simple and effective dimensionality reduction method that preserves as much of the information in the data as possible (again, as measured by least-squares reconstruction error). In the following, we will take a look at other properties of the PCA representation. Specifically, we will study how the PCA representation can be said to decorrelate the original data
representation X. Let us consider the n × m-dimensional design matrix X. We will assume that the data has a mean of zero, E[x] = 0. If this is not the case, the data can easily be centered (mean removed). The unbiased sample covariance matrix associated with X is given by: 1 Var[x] = X >X (5.43) n−1 One important aspect of PCA is that it finds a representation (through linear transformation) z = W x where Var[z] is diagonal. To do this, we will make use of the singular value decomposition (SVD) of X: X = U ΣW >, where Σ is an n × m-dimensional rectangular diagonal matrix with the singular values of X on the main diagonal, U is an n× n matrix whose columns are orthonormal (i.e. unit length and orthogonal) and W is an m × m matrix also composed of orthonormal column vectors. Using the SVD of X, we can re-express the variance of X as: 1 X >X n−1 1 = (U ΣW >)> U ΣW > n−1 1 = W Σ>U >U ΣW > n−1 1 = W Σ2W >, n−1
Var[x] =
(5.44) (5.45) (5.46) (5.47)
where we use the orthonormality of U (U^⊤U = I) and define Σ² as an m × m-dimensional diagonal matrix with the squares of the singular values of X on the diagonal, i.e. the i-th diagonal element is given by Σ²_{i,i}. This shows that if we take z = W^⊤x (i.e. Z = XW), we can ensure that the covariance of z is diagonal as required:

\mathrm{Var}[z] = \frac{1}{n-1} Z^\top Z    (5.48)
= \frac{1}{n-1} W^\top X^\top X W    (5.49)
= \frac{1}{n-1} W^\top W \Sigma^2 W^\top W    (5.50)
= \frac{1}{n-1} \Sigma^2    (5.51)
Similar to our analysis of the variance of X above, we exploit the orthonormality of W (i.e., W^⊤W = I). Our use of the SVD to solve for the PCA components of X (i.e. the elements of z) reveals an interesting connection to the eigen-decomposition
of a matrix related to X. Specifically, the columns of W are the eigenvectors of the m × m-dimensional matrix X^⊤X. The above analysis shows that when we project the data x to z, via the linear transformation W, the resulting representation has a diagonal covariance matrix (as given by Σ²) which immediately implies that the individual elements of z are mutually uncorrelated. This ability of PCA to transform data into a representation where the elements are mutually uncorrelated is a very important property of PCA. It is a simple example of a representation that attempts to disentangle the unknown factors of variation underlying the data. In the case of PCA, this disentangling takes the form of finding a rotation of the input space (mediated via the transformation W) that aligns the principal axes of variance with the basis of the new representation space associated with z, as illustrated in Fig. 5.7. While correlation is an important category of dependency between elements of the data, we are also interested in learning representations that disentangle more complicated forms of feature dependencies. For this, we will need more than what can be done with a simple linear transformation. These issues are discussed below in Sec. 5.13 and later in detail in Chapter 16.
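A small NumPy sketch of this procedure (synthetic data; tolerances arbitrary), checking that the covariance of the transformed data Z = XW is diagonal, with the squared singular values on the diagonal:

```python
import numpy as np

rng = np.random.default_rng(6)
# Correlated, zero-mean data: n examples and m features, as in the convention above.
n, m = 500, 3
X = rng.normal(size=(n, m)) @ rng.normal(size=(m, m))
X = X - X.mean(axis=0)                      # center the data

U, s, Wt = np.linalg.svd(X, full_matrices=False)
W = Wt.T                                    # columns are the principal directions
Z = X @ W                                   # decorrelated representation

cov_Z = Z.T @ Z / (n - 1)
off_diag = cov_Z - np.diag(np.diag(cov_Z))
assert np.allclose(off_diag, 0.0, atol=1e-6)           # elements of z are uncorrelated
assert np.allclose(np.diag(cov_Z), s ** 2 / (n - 1))   # variances = squared singular values / (n - 1)
```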
5.12 Weakly Supervised Learning
Weakly supervised learning is another class of learning methods that stands between supervised and unsupervised learning. It refers to a setting where the dataset consists of (x, y) pairs, as in supervised learning, but where the labels y are either unreliably present (i.e. with missing values) or noisy (i.e. where the label given is not the true label). Methods for working with weakly labeled data have recently grown in importance due to the largely untapped potential for using large quantities of readily available weakly labeled data in a transfer learning paradigm to help solve problems where large, clean datasets are hard to come by. The Internet has become a major source of this kind of noisy data. For example, although we would like to train a computer vision system with labels indicating the presence and location of every object (and which pixels correspond to which object) in every image, such labeling is very human-labor intensive. Instead, we want to take advantage of images for which only the main object is identified, like the ImageNet dataset (Deng et al., 2009), or worse, of video for which some general and high-level semantic spoken caption is approximately temporally aligned with the corresponding frames of the video, like the DVS (Descriptive Video Service) data which has recently been released (Torabi et al., 2015).
5.13 The Curse of Dimensionality and Statistical Limitations of Local Generalization
The number of variable configurations grows exponentially with the number of variables, i.e., with dimension, which brings up a statistical form of the curse of dimensionality, introduced in the next section. Many non-parametric learning algorithms, such as kernel machines with a Gaussian kernel, rely on a simple preference over functions which corresponds to an assumption of smoothness or local constancy. As argued in Section 5.13.2 that follows, this allows these algorithms to generalize near the training examples, but does not allow them to generalize in a non-trivial way far from them: the number of ups and downs that can be captured is limited by the number of training examples. This is particularly problematic with high-dimensional data, because of the curse of dimensionality. In order to reduce that difficulty, researchers have introduced the ideas of dimensionality reduction and manifold learning, introduced in Section 5.13.3. This motivates the introduction of additional a priori assumptions about the task to be learned, as well as the idea of learning to better represent the data, the topic which constitutes the bulk of the rest of this book.
5.13.1 The Curse of Dimensionality
Many machine learning problems become exceedingly difficult when the number of dimensions in the data is high. This phenomenon is known as the curse of dimensionality. Of particular concern is that the number of possible distinct configurations of the variables of interest increases exponentially as the dimensionality increases.
Figure 5.8: As the number of relevant dimensions of the data increases (from left to right), the number of configurations of interest may grow exponentially. In the figure we first consider one-dimensional data (left), i.e., one variable for which we only care to distinguish 10 regions of interest. With enough examples falling within each of these regions (cells, in the figure), learning algorithms can easily generalize correctly, i.e., estimate the value of the target function within each region (and possibly interpolate between neighboring regions). With 2 dimensions (center), but still caring to distinguish 10 different values of each variable, we need to keep track of up to 10×10 = 100 regions, and we need at least that many examples to cover all those regions. With 3 dimensions (right) this grows to 10³ = 1000 regions and at least that many examples. For d dimensions and V values to be distinguished along each axis, it looks like we need O(V^d) regions and examples. This is an instance of the curse of dimensionality. However, note that if the data distribution is concentrated on a smaller set of regions, we may actually not need to cover all the possible regions, only those where probability is non-negligible. Figure graciously provided by, and with authorization from, Nicolas Chapados.
The curse of dimensionality rears its ugly head in many places in computer science, and especially so in machine learning. One challenge posed by the curse of dimensionality is a statistical challenge. As illustrated in Figure 5.8, a statistical challenge arises because the number of possible configurations of the variables of interest is much larger than the number of training examples. To understand the issue, let us consider that the input space is organized into a grid, like in the figure. In low dimensions we can describe this space with a low number of grid cells that are mostly occupied by the data. The least we can assume about the data generating distribution is that our learner should provide the same answer to two examples falling in the same grid cell. It is a form of local constancy assumption, a notion that we develop further in the next section. When generalizing to a new data point, we can usually tell what to do simply by inspecting the training examples that lie in the same cell as the new input. For example, if estimating the probability density at some point x, we can just return the number of training examples in the same unit volume cell as x, divided by the total number of training examples. If we wish to
classify an example, we can return the most common class of training examples in the same cell. If we are doing regression we can average the target values observed over the examples in that cell. But what about the cells for which we have seen no example? Because in high-dimensional spaces the number of configurations is going to be huge, much larger than our number of examples, most configurations will have no training example associated with them. How could we possibly say something meaningful about these new configurations? A simple answer is to extend the local constancy assumption into a smoothness assumption, as explained next.
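A quick back-of-the-envelope sketch of this counting argument (the numbers are purely illustrative): even a training set of a million examples leaves most cells empty once the dimension is moderately large.

```python
n_examples = 1_000_000      # a generously large training set
V = 10                      # distinguishable values per input dimension
for d in (1, 2, 3, 6, 10):
    n_cells = V ** d        # number of grid cells to cover
    print(d, n_cells, n_examples / n_cells)   # expected examples per cell
```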
5.13.2 Smoothness and Local Constancy A Priori Preference
As argued previously, and especially in high-dimensional spaces (because of the curse of dimensionality introduced above), machine learning algorithms need priors, i.e., a preference over the space of solutions, in order to generalize to new configurations not seen in the training set. The specification of these preferences includes the choice of model family, as well as any regularizer or other aspects of the algorithm that influence the final outcome of training. We consider here a particular family of preferences which underlie many classical machine learning algorithms, and which we call the smoothness prior or the local constancy prior. We find that when the function to be learned has many ups and downs, and this is typically the case in high-dimensional spaces because of the curse of dimensionality (see above), then the smoothness prior is insufficient to achieve good generalization. We argue that more assumptions are needed in order to generalize better, in this setting. Deep learning algorithms typically introduce such additional assumptions. This starts with the classical multi-layer neural networks studied in the next chapter (Chapter 6), and in Chapter 16 we return to the advantages that representation learning, distributed representations and depth can bring towards generalization, even in high-dimensional spaces. Different smoothness or local constancy priors can be expressed, but what they basically say is that the target function or distribution of interest f^* is such that

f^*(x) \approx f^*(x + \epsilon)    (5.52)

for most configurations x and small change ε. In other words, if we know a good answer (e.g., for an example x) then that answer is probably good in the neighborhood of x, and if we have several good answers in some neighborhood we would combine them (e.g., by some form of averaging or interpolation) to produce an answer that agrees with them as much as possible. An example of a locally constant family of functions is the histogram, which breaks the input space into a number of distinguishable regions or “bins” in which any x can fall, and produces a constant output within each region. Another
example of a piecewise-constant learned function is what we obtain with k-nearest-neighbor predictors, where f(x) is constant in some region R containing all the points x that have the same set of k nearest neighbors from the training set. If we are doing classification and k = 1, f(x) is just the output class associated with the nearest neighbor of x in the training set. If we are doing regression, f(x) is the average of the outputs associated with the k nearest neighbors of x. Note that in both cases, for k = 1, the number of distinguishable regions cannot be more than the number of training examples. See Murphy (2012); Bishop (2006) or other machine learning textbooks for more material on histograms and nearest-neighbor classifiers.
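A minimal sketch of such a k-nearest-neighbor predictor (hypothetical helper name, synthetic data), which is piecewise constant in exactly the sense described above:

```python
import numpy as np

def knn_predict(x, X_train, y_train, k=1, classify=True):
    # Find the k training points closest to x.
    dists = np.sum((X_train - x) ** 2, axis=1)
    nearest = np.argsort(dists)[:k]
    if classify:
        # Majority vote among the k nearest neighbors.
        values, counts = np.unique(y_train[nearest], return_counts=True)
        return values[np.argmax(counts)]
    # Regression: average the neighbors' targets.
    return np.mean(y_train[nearest])

rng = np.random.default_rng(7)
X_train = rng.normal(size=(20, 2))
y_train = (X_train[:, 0] > 0).astype(int)
print(knn_predict(np.array([0.3, -0.1]), X_train, y_train, k=3))
```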
Figure 5.9: Illustration of interpolation and kernel-based methods, which construct a smooth function by interpolating in various ways between the training examples (circles), which act like knot points controlling the shape of the implicit regions that separate them as well as the values to output within each region. Depending on the type of kernel, one obtains a piecewise-constant function (histogram-like, in dotted red), a piecewise-linear function (dashed black) or a smoother kernel (bold blue). The underlying assumption is that the target function is as smooth or locally as constant as possible. This assumption allows the learner to generalize locally, i.e., to extend the answer known at some point x to nearby points, and this works very well so long as, like in the figure, there are enough examples to cover most of the ups and downs of the target function.
To obtain even more smoothness, we can interpolate between neighboring training examples, as illustrated in Figure 5.9. For example, non-parametric kernel density estimation methods and kernel regression methods construct a learned function f of the form of Eq. 5.41 for classification or regression, or alternatively,
e.g., in the Parzen regression estimator, of the form

f(x) = b + \sum_{i=1}^n \alpha_i \frac{k(x, x^{(i)})}{\sum_{j=1}^n k(x, x^{(j)})}.
If the kernel function k is discrete (e.g. 0 or 1), then this can include the above cases where f is piecewise constant and a discrete set of regions (no more than one per training example) can be distinguished. However, better results can often be obtained if k is smooth, e.g., the Gaussian kernel from Eq. 5.42. With k a local kernel (Bengio et al., 2006a; Bengio and LeCun, 2007b; Bengio, 2009), i.e., with k(u, v) large when u = v and decreasing as u and v get farther apart, we can think of each x^(i) as a template and the kernel function as a similarity function that matches a template and a test example. With the Gaussian kernel, we do not have a piecewise-constant function but instead a continuous and smooth function. In fact, the choice of k can be shown to correspond to a particular form of smoothness. Equivalently, we can think of many of these estimators as the result of smoothing the empirical distribution by convolving it with a function associated with the kernel, e.g., the Gaussian kernel density estimator is the empirical distribution convolved with the Gaussian density. Although in classical non-parametric estimators the α_i of Eq. 5.41 are fixed (e.g. to 1/n for density estimation and to y^(i) for supervised learning from examples (x^(i), y^(i))), they can be optimized, and this is the basis of more modern non-parametric kernel methods (Schölkopf and Smola, 2002) such as the Support Vector Machine (Boser et al., 1992; Cortes and Vapnik, 1995) (see also Section 5.10.2). However, as illustrated in Figure 5.9, even though these smooth kernel methods generalize better, the main thing that has changed is that one can basically interpolate between the neighboring examples, in some space associated with the kernel. One can then think of the training examples as control knots which locally specify the shape of each region and the associated output.
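A minimal sketch of this kernel-weighted estimator with a Gaussian kernel, taking α_i = y^(i) and b = 0 as in the classical (non-optimized) setting; the bandwidth σ is an arbitrary illustrative choice:

```python
import numpy as np

def parzen_regression(x, X_train, y_train, sigma=0.5):
    # f(x) = sum_i y_i k(x, x^(i)) / sum_j k(x, x^(j)), with a Gaussian kernel.
    weights = np.exp(-np.sum((X_train - x) ** 2, axis=1) / (2 * sigma ** 2))
    return np.sum(weights * y_train) / np.sum(weights)

X_train = np.linspace(-3, 3, 30).reshape(-1, 1)
y_train = np.sin(X_train[:, 0])
print(parzen_regression(np.array([0.5]), X_train, y_train))   # close to sin(0.5)
```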
Figure 5.10: Decision tree (right) and how it cuts the input space into regions, with a constant output in each region (left). Each node of the tree (circle or square) is associated with a region (the entire space for the root node, with the empty string identifier). Internal nodes (circles) split their region in two, in this case (the most common) via an axis aligned cut. Leaf nodes (squares) are associated with an “answer”, such as the average target output for the training examples that fall in the corresponding region. Each node is displayed with a binary string identifier corresponding to its position in the tree, obtained by adding a bit to its parent (0=choose left or top, 1=choose right or bottom). Note that the result is a piecewise-constant function, and note how the number of regions (pieces) cannot be greater than the number of examples, hence it is not possible to learn a function that has more ups and downs than the number of training examples.
Another type of non-parametric learning algorithm that also breaks the input space into regions and has separate parameters for each region is the decision tree (Breiman et al., 1984) and its many variants. We give a brief account here and illustrate decision trees in Figure 5.10, but please refer as needed to Murphy (2012); Bishop (2006) or other machine learning textbooks for more material on decision trees. Each node of the decision tree is associated with a region in the input space, and internal nodes break that region into one sub-region for each child of the node (typically using an axis-aligned cut). Typically, a constant output f(n(x)) is returned by the decision tree predictor for any x falling in the
region associated with a particular leaf node n(x). Because each example only informs the region in which it falls about which output to produce, one cannot have more regions than training examples. If the target function can be well approximated by cutting the input space into N regions (with a different answer in each region), then at least N examples are needed (and a multiple of N is needed to achieve some level of statistical confidence in the predicted output). All this is also true if the tree is used for density estimation (the output is simply an estimate of the density within the region, which can be obtained by the ratio of the number of training examples in the region to the region volume) or if a non-constant (e.g. linear) predictor is associated with each leaf (then more examples are needed within each leaf node, but the relationship between the number of regions and the number of examples remains linear). We examine below how this may hurt the generalization ability of decision trees and other learning algorithms that are based only on the smoothness or local constancy priors, when the input is high-dimensional, i.e., because of the curse of dimensionality. In all cases, the smoothness assumption (Eq. 5.52) allows the learner to generalize locally. Since we assume that the target function obeys f^*(x) ≈ f^*(x + ε) most of the time for small ε, we can generalize the empirical distribution (or the (x, y) training pairs) to the neighborhood of the training examples. If (x^(i), y^(i)) is a supervised (input, target) training example, then we expect f^*(x^(i)) ≈ y^(i), and therefore if x is a near neighbor of x^(i), we expect that f^*(x) ≈ y^(i). By considering more neighbors, we can obtain better generalization, by better exploiting the smoothness assumption. In general, to distinguish O(N) regions in input space, all of these methods require O(N) examples (and typically there are O(N) parameters associated with the O(N) regions). This is illustrated in Figure 5.11 in the case of a nearest-neighbor or clustering scenario, where each training example can be used to define one region. Is there a way to represent a complex function that has many more regions to be distinguished than the number of training examples? Clearly, assuming only smoothness of the underlying function will not allow a learner to do that. For example, imagine that the target function is a kind of checkerboard, i.e., with a lot of variations, but a simple structure to them, and imagine that the number of training examples is substantially less than the number of black and white regions. Based on local generalization and the smoothness or local constancy prior, we could get the correct answer within a constant-colour region, but we could not correctly predict the checkerboard pattern. The only thing that an example tells us, with this prior, is that nearby points should have the same colour, and the only way to get the checkerboard right is to cover all of its cells with at least one example. The smoothness assumption and the associated non-parametric learning algorithms
Figure 5.11: Illustration of how non-parametric learning algorithms that exploit only the smoothness or local constancy priors typically break up the input space into regions, with examples in those regions being used both to define the region boundaries and what the output should be within each region. The figure shows the case of clustering or 1-nearest-neighbor classifiers, for which each training example (cross of a different color) defines a region or a template (here, the different regions form a Voronoi tessellation). The number of these contiguous regions cannot grow faster than the number of training examples. In the case of a decision tree, the regions are recursively obtained by axis-aligned cuts within existing regions, but for these and for kernel machines with a local kernel (such as the Gaussian kernel), the same property holds, and generalization can only be local: each training example only informs the learner about how to generalize in some neighborhood around it.
work extremely well so long as there are enough examples to cover most of the ups and downs of the target function. This is generally true when the function to be learned is smooth enough, which is typically the case for low-dimensional data. And if it is not very smooth (we want to distinguish a huge number of regions compared to the number of examples), is there any hope to generalize well? Both of these questions are answered positively in Chapter 16. The key insight is that a very large number of regions, e.g., O(2^N), can be defined with O(N) examples, so long as we introduce some dependencies between the regions via additional priors about the underlying data generating distribution. In this way, we can actually generalize non-locally (Bengio and Monperrus, 2005; Bengio et al., 2006b). A neural network can actually learn a checkerboard pattern. Similarly, some recurrent neural networks can learn the n-bit parity (at least for some not too large values of n). Of course we could also solve the checkerboard task by making a much stronger assumption, e.g., that the target function is periodic. However, neural networks can generalize to a much wider variety of structures, and indeed our AI tasks have structure that is much too complex to be limited to periodicity, so we want learning algorithms that embody more general-purpose assumptions. The core idea in deep learning is that we assume that the data was generated by the composition of factors or features, potentially at multiple levels in a hierarchy. These apparently mild assumptions allow an exponential gain in the relationship between the number of examples and the number of regions that can be distinguished, as discussed in Chapter 16. Priors that are based on compositionality, such as arising from learning distributed representations and from a deep composition of representations, can give an exponential advantage, which can hopefully counter the exponential curse of dimensionality. Chapter 16 discusses these questions from the angle of representation learning and the objective of disentangling the underlying factors of variation.
5.13.3 Manifold Learning and the Curse of Dimensionality
We consider here a particular type of machine learning task called manifold learning. Although manifold learning algorithms were introduced to reduce the curse of dimensionality, we will argue here that they allow one to visualize and highlight how the smoothness prior is not sufficient to generalize in high-dimensional spaces. Chapter 17 is devoted to the manifold perspective on representation learning and goes into much greater detail on this topic as well as on actual manifold learning algorithms based on neural networks. A manifold is a connected region, i.e., a set of points, associated with a neighborhood around each point, which makes it locally look like a Euclidean space. The notion of neighbor implies the existence of transformations that can be
applied to move on the manifold from one position to a neighboring one. Although there is a formal mathematical meaning to this term, in machine learning it tends to be used more loosely to talk about a connected set of points that can be well approximated by considering only a small number of degrees of freedom, or dimensions, embedded in a higher-dimensional space. Each dimension corresponds to a local direction of variation, i.e., moving along the manifold in some direction. The manifolds we talk about in machine learning are subsets of points, also called a submanifold, of the embedding space (which is also a manifold). Manifold learning algorithms assume that the data distribution is concentrated in a small number of dimensions, i.e., that the set of high-probability configurations can be approximated by a low-dimensional manifold. Figure 5.7 (left) illustrates a distribution that is concentrated near a linear manifold (the manifold is along a 1-dimensional straight line). Manifold learning was introduced in the case of continuous-valued data and the unsupervised learning setting, although this probability concentration idea can be generalized to both discrete data and the supervised learning setting: the key assumption remains that probability mass is highly concentrated. Is this assumption reasonable? It seems to be true for almost all of the AI tasks such as those involving images, sounds, and text. To be convinced of this we will invoke (a) the observation that probability mass is concentrated and (b) the observation that the objects we observe can generally be transformed into other plausible configurations via some small changes (which indicates a notion of direction of variation while staying on the “manifold”). For (a), consider that if the assumption of probability concentration was false, then sampling uniformly at random from the set of all configurations (e.g., uniformly in R^n) should produce probable (data-like) configurations reasonably often. But this is not what we observe in practice. For example, generate pixel configurations for an image by independently picking the grey level (or a binary 0 vs 1) for each pixel. What kind of images do you get? You get “white noise” images, that look like the old television sets when no signal is coming in, as illustrated in Figure 5.12 (left). What is the probability that you would obtain something that looks like a natural image, with this procedure? Almost zero, because the set of probable configurations (near the manifold of natural images) occupies a very small volume out of the total set of pixel configurations. Similarly, if you generate a document by picking letters randomly, what is the probability that you will get a meaningful English-language text? Almost zero, again, because most of the long sequences of letters do not correspond to a natural language sequence: the distribution of natural language sequences occupies a very small volume in the total space of sequences of letters.
Figure 5.12: Sampling images uniformly at random, e.g., by randomly picking each pixel according to a uniform distribution, gives rise to white noise images such as illustrated on the left. Although there is a non-zero probability to generate something that looks like a natural image (like those on the right), that probability is exponentially tiny (exponential in the number of pixels!). This suggests that natural images are very “special”, and that they occupy a tiny volume of the space of images.
The above thought experiments, which are in agreement with the many experimental results of the manifold learning literature, e.g. (Cayton, 2005; Narayanan and Mitter, 2010; Schölkopf et al., 1998; Roweis and Saul, 2000; Tenenbaum et al., 2000; Brand, 2003; Belkin and Niyogi, 2003; Donoho and Grimes, 2003; Weinberger and Saul, 2004), clearly establish that for a large class of datasets of interest in AI, the manifold hypothesis is true: the data generating distribution concentrates in a small number of dimensions, as in the cartoon of Figure 17.4, from Chapter 17. That chapter explores the relationships between representation learning and manifold learning: if the data distribution concentrates on a smaller number of dimensions, then we can think of these dimensions as natural coordinates for the data, and we can think of representation learning algorithms as ways to map the input space to a new and often lower-dimensional space which captures the leading dimensions of variation present in the data as axes or dimensions of the representation. An initial hope of early work on manifold learning (Roweis and Saul, 2000; Tenenbaum et al., 2000) was to reduce the effect of the curse of dimensionality, by first reducing the data to a lower dimensional representation (e.g. mapping
(x_1, x_2) to z_1 in Figure 5.7 (right)), and then applying ordinary machine learning in that transformed space. This dimensionality reduction can be achieved by learning a transformation (generally non-linear, unlike with PCA introduced in Section 5.11.1) of the data that is invertible for most training examples, i.e., that keeps the information in the input example. It is only possible to reconstruct input examples from their low-dimensional representation because they lie on a lower-dimensional manifold, of course. This is basically how auto-encoders (Chapter 15) are trained. The hope was that by non-linearly projecting the data into a new space of lower dimension, we would reduce the curse of dimensionality by only looking at relevant dimensions, i.e., a smaller set of regions of interest (cells, in Figure 5.8). This can indeed be the case; however, as discussed in Chapter 17, the manifolds can be highly curved and have a very large number of twists, requiring still a very large number of regions to be distinguished (every up and down of each corner of the manifold). And even if we were to reduce the dimensionality of an input from 10000 (e.g. 100×100 binary pixels) to 100, 2^100 is still too large to hope to cover with a training set. This still rules out the use of purely local generalization (i.e., the smoothness prior only) to model such manifolds, as discussed in Chapter 17 around Figures 17.4 and 17.5. It may also be that although the effective dimensionality of the data could be small, some examples could fall outside of the main manifold and we do not want to systematically lose that information. A sparse representation then becomes a possible way to represent data that is mostly low-dimensional, although occasionally occupying more dimensions. This can be achieved with a high-dimensional representation whose elements are 0 most of the time. We can see that the effective dimension (the number of non-zeros) then can change depending on where we are in input space, which can be useful. Sparse representations are discussed in Section 15.8. The next part of the book introduces specific deep learning algorithms that aim at discovering representations that are useful for some task, i.e., trying to extract the directions of variation that matter for the task of interest, often in a supervised setting. The last part of the book concentrates more on unsupervised representation learning algorithms, which attempt to capture all of the directions of variation that are salient in the data distribution.
Part II
Modern Practical Deep Networks
This part of the book summarizes the state of modern deep learning as it is used to solve practical applications. Deep learning has a long history and many aspirations. Several approaches have been proposed that have yet to entirely bear fruit. Several ambitious goals have yet to be realized. These less-developed branches of deep learning appear in the final part of the book. This part focuses only on those approaches that are essentially working technologies that are already used heavily in industry. Modern deep learning provides a very powerful framework for supervised learning. By adding more layers and more units within a layer, a deep network can represent functions of increasing complexity. Most tasks that consist of mapping an input vector to an output vector, and that are easy for a person to do rapidly, can be accomplished via deep learning, given a sufficiently large model and a sufficiently large dataset of labeled training examples. Other tasks, which cannot be described as associating one vector to another, or which are difficult enough that a person would need time to think and reflect in order to do them, remain beyond the scope of deep learning for now. This part of the book describes the core parametric function approximation technology that is behind nearly all modern practical applications of deep learning. Our description includes details such as how to efficiently model specific kinds of inputs: how to process image inputs with convolutional networks and how to process sequence inputs with recurrent and recursive networks. Moreover, we provide guidance on how to preprocess the data for various tasks and how to choose the values of the various settings that govern the behavior of these algorithms. These chapters are the most important for a practitioner – someone who wants to begin implementing and using deep learning algorithms to solve real-world problems today.
Chapter 6
Feedforward Deep Networks
6.1 From Fixed Features to Learned Features
In Chapter 5 we considered linear regression, linear classifiers and logistic regression, and introduced kernel machines, all of which involve a fixed set of features on which a linear predictor is trained. These models perform non-linear transformations of data, but the non-linear part is pre-defined. How can we learn non-linear transformations of the data that create a new feature space? How can we automate feature learning? This is what neural networks with hidden layers allow us to do. Feedforward supervised neural networks were among the first and most successful learning algorithms (Rumelhart et al., 1986e,c). They are also called deep networks, multi-layer perceptrons (MLPs), or simply neural networks; the vanilla architecture with a single hidden layer is illustrated in Figure 6.1. A deeper version is obtained by simply having more hidden layers. Each hidden layer corresponds to a new learned representation of the input vector, trained towards some objective, such as making it easier to produce the desired answers at the output. MLPs can learn powerful non-linear transformations: in fact, with enough hidden units they can represent arbitrarily complex but smooth functions (see Section 6.5). This is achieved by composing simpler but still non-linear learned transformations. By transforming the data non-linearly into a new space, a classification problem that was not linearly separable (not solvable by a linear classifier) can become separable, as illustrated in Figure 6.2.
6.1.1 Estimating Conditional Statistics
To gently move from linear predictors to non-linear ones, let us consider the squared error loss function studied in the previous chapter, where the learning
Figure 6.1: Vanilla (shallow) MLP, with one sigmoid hidden layer, computing a vector-valued hidden unit vector h = sigmoid(c + W x) with weight matrix W and offset vector c. The output vector is obtained via another learned affine transformation ŷ = b + V h, with weight matrix V and output offset vector b. The vector of hidden unit values h provides a new set of features, i.e., a new representation, derived from the raw input x.
task is the estimation of the expected value of y given x. In the context of linear regression, the conditional expectation of y is used as the mean of a Gaussian distribution that we fit with maximum likelihood. We can generalize linear regression to regression via any function f by defining the mean squared error of f: E[||y − f(x)||²], where the expectation is over the training set during training, and over the data generating distribution to obtain generalization error. We can generalize its interpretation beyond the case where f is linear or affine, uncovering an interesting property: minimizing it yields an estimator of the conditional expectation of the output variable y given the input variable x, i.e.,

arg min_{f∈H} E_{p(x,y)}[||y − f(x)||²] = E_{p(x,y)}[y | x],   (6.1)
provided that our set of functions H contains E_{p(x,y)}[y | x]. (If you would like to work out the proof yourself, it is easy to do using calculus of variations, which we describe in Section 19.4.2.) Similarly, we can generalize conditional maximum likelihood (introduced in Section 5.8.1) to other distributions than the Gaussian, as discussed below when defining the objective function for MLPs.
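As a quick numerical illustration of this property (a made-up sketch, not part of the original text), the following short numpy check confirms that, among constant predictors, the empirical mean minimizes the average squared error — the finite-sample analogue of Eq. 6.1 when x carries no information:

import numpy as np

rng = np.random.RandomState(0)
y = rng.normal(loc=2.5, scale=1.0, size=10000)    # samples of y (no input x in this toy case)

candidates = np.linspace(0.0, 5.0, 501)           # candidate constant predictions c
mse = [np.mean((y - c) ** 2) for c in candidates]
best_c = candidates[int(np.argmin(mse))]

print(best_c, y.mean())   # the minimizer of the empirical squared error is (close to) the empirical mean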
Figure 6.2: Each layer of a trained neural network non-linearly transforms its input, distorting the space so that the task becomes easier to perform, e.g., linear classification in the new feature space, in the above figures. Top: a vanilla neural network with 2 hidden units in its single hidden layer can transform the 2-D input space (shown in the blue and pink square figure) so that the examples from the two classes (shown as the points on the red and blue curves) become linearly separable (the red and blue curves correspond to the “manifold” near which examples concentrate, while the red and pink areas correspond to the regions where the neural network classifies an input as either blue or red). Bottom: with a larger hidden layer (100 here), the MNIST digit images (with 28 × 28 = 784 pixels) can be transformed so that the classes (each shown with a different color) can be much more easily classified by the output layer (over 10 digit categories). Both figures are reproduced with permission by Chris Olah from http://colah.github.io/, where many more insightful visualizations can be found.
6.2 Formalizing and Generalizing Neural Networks
In addition to covering the basics of such networks, this chapter introduces a general formalism for gradient-based optimization of parametrized families of functions, often in the context of conditional maximum likelihood criteria (Section 6.3). MLPs bring together a number of important machine learning concepts already introduced in the previous chapters:

• Define a parametrized family of functions fθ describing how the learner will behave on new examples, i.e., what output the learned function fθ(x) will produce given some input x. Training consists in choosing the parameter θ (usually represented by a vector) given some training examples (x, y) sampled from an unknown data generating distribution P(X, Y).

• Define a loss function L describing what scalar loss L(ŷ, y) is associated with each supervised example (x, y), as a function of the learner's output ŷ = fθ(x) and the target output y.

• Define a training criterion and a regularizer. The objective of training is ideally to minimize the expected loss E_{X,Y}[L(fθ(X), Y)] over X, Y sampled from the unknown data generating distribution P(X, Y). However this is not possible because the expectation makes use of the true underlying P(X, Y) but we only have access to a finite number of training examples, i.e. of pairs (X, Y). Instead, one defines a training criterion which usually includes an empirical average of the loss over the training set, plus some additional terms (called regularizers) which enforce some preferences over the choices of θ.

• Define an optimization procedure to approximately minimize the training criterion1. The simplest such optimization procedure is a variant of gradient descent (gradient descent was introduced in Section 4.3) called stochastic gradient descent, described in Section 8.3.2.

Example 6.2.1 illustrates these concepts for the case of a vanilla neural network for regression. In chapter 16, we consider generalizations of the above framework to the unsupervised and semi-supervised cases, where Y may not exist or may not always be present. An unsupervised loss can then be defined solely as a function of the input x and some function fθ(x) that one wishes to learn.
1 It is generally not possible to analytically obtain a global minimum of the training criterion, so iterative numerical optimization methods are used instead.
Example 6.2.1. Vanilla (Shallow) Multi-Layer Neural Network for Regression
Based on the above definitions, we could pick the family of input-output functions to be fθ(x) = b + V sigmoid(c + W x), illustrated in Figure 6.1, where sigmoid(a) = 1/(1 + e^{−a}) is applied element-wise, the input is the vector x ∈ R^{n_i}, the hidden layer outputs are the elements of the vector h = sigmoid(c + W x) with n_h entries, and the parameters are θ = (b, c, V, W) (with θ also viewed as the flattened vectorized version of the tuple), with b ∈ R^{n_o} a vector of the same dimension as the output (n_o), c ∈ R^{n_h} of the same dimension as h (number of hidden units), and V ∈ R^{n_o × n_h} and W ∈ R^{n_h × n_i} being weight matrices. The loss function for this classical example could be the squared error L(ŷ, y) = ||ŷ − y||² (see Section 6.1.1 discussing how it makes ŷ an estimator of E[Y | x]). The regularizer could be the ordinary L2 weight decay ||ω||² = (Σ_{ij} W_{ij}² + Σ_{ki} V_{ki}²), where we define the set of weights ω as the concatenation of the elements of matrices W and V. The L2 weight decay thus penalizes the squared norm of the weights, with λ a scalar that is made larger in order to penalize the weights more strongly, thus yielding smaller weights. Combining the loss function and the regularizer gives the training criterion, which is the objective function during training:

J(θ) = λ||ω||² + (1/n) Σ_{t=1}^{n} ||y^{(t)} − (b + V sigmoid(c + W x^{(t)}))||²,

where (x^{(t)}, y^{(t)}) is the t-th training example, an (input, target) pair. Finally, the classical training procedure in this example is stochastic gradient descent, which iteratively updates θ according to

ω ← ω − ε (2λω + ∇_ω L(fθ(x^{(t)}), y^{(t)}))
β ← β − ε ∇_β L(fθ(x^{(t)}), y^{(t)}),

where β = (b, c) contains the offset parameters, ω = (W, V) the weight matrices, ε is a learning rate, and t is incremented after each training example, modulo n. Section 6.4 shows how gradients can be computed efficiently thanks to back-propagation.
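As an illustration only (not part of the original example), here is a minimal numpy sketch of this vanilla network trained with stochastic gradient descent on the squared error plus L2 weight decay; the data, layer sizes and learning rate below are made up for the sake of the example:

import numpy as np

rng = np.random.RandomState(0)
n, n_i, n_h, n_o = 200, 3, 10, 1           # examples, input, hidden and output sizes (arbitrary)
X = rng.randn(n, n_i)
Y = np.sin(X.sum(axis=1, keepdims=True))   # toy regression targets

W = 0.1 * rng.randn(n_h, n_i); c = np.zeros(n_h)   # first-layer parameters
V = 0.1 * rng.randn(n_o, n_h); b = np.zeros(n_o)   # output-layer parameters
lam, eps = 1e-4, 0.05                              # weight decay coefficient and learning rate

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

for epoch in range(100):
    for t in range(n):
        x, y = X[t], Y[t]
        h = sigmoid(c + W.dot(x))          # hidden layer
        y_hat = b + V.dot(h)               # network output
        # gradients of the squared error L = ||y_hat - y||^2
        d_yhat = 2.0 * (y_hat - y)
        dV = np.outer(d_yhat, h); db = d_yhat
        d_h = V.T.dot(d_yhat)
        d_a = d_h * h * (1 - h)            # back through the sigmoid
        dW = np.outer(d_a, x); dc = d_a
        # SGD updates, with L2 weight decay applied to the weight matrices only
        W -= eps * (2 * lam * W + dW); V -= eps * (2 * lam * V + dV)
        c -= eps * dc; b -= eps * db

print(np.mean((b + sigmoid(c + X.dot(W.T)).dot(V.T) - Y) ** 2))   # final training MSE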
6.3 Parametrizing a Learned Predictor
There are many ways to define the family of input-output functions, loss function, regularizer and optimization procedure, and the most common ones are described below, while more advanced ones are left to later chapters, in particular Chapters 10 and 15.
6.3.1 Family of Functions
A motivation for the family of functions defined by multi-layer neural networks is to compose simple transformations in order to obtain highly non-linear ones. In particular, MLPs compose affine transformations and element-wise non-linearities. As discussed in Section 6.5 below, with the appropriate choice of parameters, multi-layer neural networks can in principle approximate any smooth function, with more hidden units allowing one to achieve better approximations. A multi-layer neural network with more than one hidden layer can be defined by generalizing the above structure, e.g., as follows, where we chose to use hyperbolic tangent3 activation functions instead of sigmoid activation functions:

h_k = tanh(b_k + W_k h_{k−1})

where h_0 = x is the input of the neural net, h_k (for k > 0) is the output of the k-th hidden layer, which has weight matrix W_k and offset (or bias) vector b_k. If we want the output fθ(x) to lie in some desired range, then we typically define an output non-linearity (which we did not have in the above Example 6.2.1). The non-linearity for the output layer is generally different from the tanh, depending on the type of output to be predicted and the associated loss function (see below). There are several other non-linearities besides the sigmoid and the hyperbolic tangent which have been successfully used with neural networks. In particular, we introduce some piece-wise linear units below such as the rectified linear unit (max(0, b + w · x)) and the maxout unit (max_i(b_i + W_{:,i} · x)) which have been particularly successful in the case of deep feedforward or convolutional networks. A longer discussion of these can be found in Section 6.7. These and other non-linear neural network activation functions commonly found in the literature are summarized below. Most of them are typically combined with an affine transformation a = b + W x and applied element-wise:

h = φ(a) ⇔ h_i = φ(a_i) = φ(b_i + W_{i,:} x).   (6.2)

3 which is linearly related to the sigmoid via tanh(x) = 2 × sigmoid(2x) − 1 and typically yields easier optimization with stochastic gradient descent (Glorot and Bengio, 2010).
• Rectifier or rectified linear unit (ReLU) or positive part: φ(a) = max(0, a), also written φ(a) = (a)_+.

• Hyperbolic tangent: φ(a) = tanh(a).

• Sigmoid: φ(a) = 1/(1 + e^{−a}).

• Softmax: This is a vector-to-vector transformation φ(a) = softmax(a), with softmax(a)_i = e^{a_i} / Σ_j e^{a_j}, such that Σ_i φ_i(a) = 1 and φ_i(a) > 0, i.e., the softmax output can be considered as a probability distribution over a finite set of outcomes. Note that it is not applied element-wise but on a whole vector of “scores”. It is mostly used as output non-linearity for predicting discrete probabilities over output categories. See definition and discussion below, around Eq. 6.4.

• Radial basis function or RBF unit: this one is not applied after a general affine transformation but acts on x using a different form that corresponds to a template matching, i.e., h_i = exp(−||w_i − x||²/σ_i²) (or typically with all the σ_i set to the same value). This is heavily used in kernel SVMs (Boser et al., 1992; Schölkopf et al., 1999) and has the advantage that such units can be easily initialized (Powell, 1987; Niranjan and Fallside, 1990) as a random (or selected) subset of the input examples, i.e., w_i = x^{(t)} for some assignment of examples t to hidden unit templates i.

• Softplus: φ(a) = log(1 + e^a). This is a smooth version of the rectifier, introduced in Dugas et al. (2001) for function approximation and in Nair and Hinton (2010a) in RBMs. Glorot et al. (2011a) compared the softplus and rectifier and found better results with the latter, in spite of the very similar shape and the differentiability and non-zero derivative of the softplus everywhere, contrary to the rectifier.

• Hard tanh: this is shaped similarly to the tanh and the rectifier but unlike the latter, it is bounded, φ(a) = max(−1, min(1, a)). It was introduced by Collobert (2004).

• Absolute value rectification: φ(a) = |a| (may be applied on the affine dot product or on the output of a tanh unit). It is also a rectifier and has been used for object recognition from images (Jarrett et al., 2009a), where it makes sense to seek features that are invariant under a polarity reversal of the input illumination.

• Maxout: this is discussed in more detail in Section 6.7. It generalizes the rectifier but introduces multiple weight vectors w_i (called filters) for each hidden unit: h = max_i(b_i + w_i · x).
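As a side illustration (these one-liners are a sketch, not from the original text), most of these non-linearities are straightforward to write with numpy; the softmax below subtracts the maximum score before exponentiating, which does not change the result because softmax is invariant to adding a constant to all scores, but avoids overflow:

import numpy as np

def rectifier(a):           # ReLU / positive part
    return np.maximum(0.0, a)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def softplus(a):            # smooth version of the rectifier, written in a numerically stable form
    return np.maximum(a, 0.0) + np.log1p(np.exp(-np.abs(a)))

def hard_tanh(a):
    return np.clip(a, -1.0, 1.0)

def absolute_value(a):
    return np.abs(a)

def softmax(a):             # vector of scores -> vector of probabilities
    e = np.exp(a - np.max(a))
    return e / e.sum()

def maxout(b, W, x):        # one maxout unit: maximum over the filters stored as columns of W
    return np.max(b + W.T.dot(x))

a = np.array([-2.0, 0.5, 3.0])
print(rectifier(a), np.tanh(a), softmax(a))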
This is not an exhaustive list but covers most of the non-linearities and unit computations seen in the deep learning and neural nets literature. Many variants are possible. As discussed in Section 6.4, the structure (also called architecture) of the family of input-output functions can be varied in many ways, which calls for a generic principle for efficiently computing gradients, described in that section. For example, a common variation is to connect layers that are not adjacent, with so-called skip connections, which are found in the visual cortex (where the word “layer” should be replaced by the word “area”). Other common variations depart from a full connectivity between adjacent layers. For example, each unit at layer k may be connected to only a subset of units at layer k−1. A particular case of such form of sparse connectivity is discussed in chapter 9 with convolutional networks. In general, the set of connections between units of the whole network only needs to form a directed acyclic graph in order to define a meaningful computation (see the flow graph formalism below, Section 6.4). When units of the network are connected to themselves through a cycle, one has to properly define what computation is to be done, and this is what is done in the case of recurrent networks, treated in Chapter 10. Another example of non-full connectivity is the deep recurrent network, Section 10.4.
6.3.2 Loss Function and Conditional Log-Likelihood
In the 80’s and 90’s the most commonly used loss function was the squared error L(fθ(x), y) = ||fθ(x) − y||². As discussed in Section 6.1.1, if f is unrestricted (non-parametric), minimizing the expected value of the loss function over some data-generating distribution P(x, y) yields f(x) = E[y | x = x], the true conditional expectation of y given x. This tells us what the neural network is trying to learn. Replacing the squared error by an absolute value makes the neural network try to estimate not the conditional expectation but the conditional median4. However, when y is a discrete label, i.e., for classification problems, other loss functions such as the Bernoulli negative log-likelihood5 have been found to be more appropriate than the squared error. In the case where y ∈ {0, 1} is binary this gives

L(fθ(x), y) = −y log fθ(x) − (1 − y) log(1 − fθ(x)),   (6.3)

also known as the cross entropy objective function. It can be shown that the optimal (non-parametric) f minimizing this criterion is f(x) = P(y = 1 | x). In other words, when training with the conditional log-likelihood objective function, we are
4 Showing this is another interesting exercise.
5 This is often called cross entropy in the literature, even though the term cross entropy, defined at Eq. 5.23, should also apply to many other losses that can be viewed as negative log-likelihood, discussed below in more detail.
training the neural net output to estimate conditional probabilities as well as possible in the sense of the KL divergence (see Section 3.9, Eq. 3.3). Note that in order for the above expression of the criterion to make sense, fθ(x) must be strictly between 0 and 1 (an undefined or infinite value would otherwise arise). To achieve this, it is common to use the sigmoid as non-linearity for the output layer, which matches well with the Binomial negative log-likelihood criterion6. As explained below (Softmax subsection), the cross entropy criterion allows gradients to pass through the output non-linearity even when the neural network produces a confidently wrong answer, unlike the squared error criterion coupled with a sigmoid or softmax non-linearity.

Learning a Conditional Probability Model

More generally, one can define a loss function as corresponding to a conditional log-likelihood, i.e., the negative log-likelihood (NLL) criterion

L_NLL(fθ(x), y) = − log P(y = y | x = x; θ).

See Section 5.8.1 (and the one before) which shows that this criterion corresponds to minimizing the KL divergence between the model P of the conditional probability of y given x and the data generating distribution Q, approximated here by the finite training set, i.e., the empirical distribution of pairs (x, y). Hence, minimizing this objective, as the amount of data increases, yields an estimator of the true conditional probability of y given x. For example, if y is a continuous random variable and we assume that, given x, it has a Gaussian distribution with mean fθ(x) and variance σ², then

− log P(y | x; θ) = (1/2)[(fθ(x) − y)²/σ² + log(2πσ²)].

Up to an additive and multiplicative constant (which would give the same choice of θ), minimizing this negative log-likelihood is therefore equivalent to minimizing the squared error loss. Once we understand this principle, we can readily generalize it to other distributions, as appropriate. For example, it is straightforward to generalize the univariate Gaussian to the multivariate case, and under appropriate parametrization consider the variance to be a parameter or even a parametrized function of x (for example with output units that are guaranteed to be positive, or forming a positive definite matrix, as outlined below, Section 6.3.2).
6 In reference to statistical models, this “match” between the loss function and the output non-linearity is similar to the choice of a link function in generalized linear models (McCullagh and Nelder, 1989).
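To make this correspondence concrete (a small sketch with made-up numbers, not from the original text), the cross entropy of Eq. 6.3 with a sigmoid output can be computed directly from the pre-sigmoid score in a numerically safer way, since log fθ(x) and log(1 − fθ(x)) overflow when the sigmoid saturates:

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def bernoulli_nll(a, y):
    # -y*log(sigmoid(a)) - (1-y)*log(1-sigmoid(a)), rewritten as softplus(a) - y*a,
    # which avoids taking the log of a saturated sigmoid
    return np.maximum(a, 0.0) + np.log1p(np.exp(-np.abs(a))) - y * a

a, y = 3.2, 1.0                       # made-up pre-sigmoid score and binary target
p = sigmoid(a)
print(bernoulli_nll(a, y), -y * np.log(p) - (1 - y) * np.log(1 - p))   # same value, two computations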
Similarly, for discrete variables, the Binomial negative log-likelihood criterion corresponds to the conditional log-likelihood associated with the Bernoulli distribution (also known as cross entropy) with probability p = fθ(x) of generating y = 1 given x = x (and probability 1 − p of generating y = 0):

L_NLL = − log P(y | x; θ) = −1_{y=1} log p − 1_{y=0} log(1 − p) = −y log fθ(x) − (1 − y) log(1 − fθ(x)),

where 1_{y=1} is the usual binary indicator.

Softmax

When y is discrete and has a finite domain (say {1, . . . , N}) but is not binary, the Bernoulli distribution is extended to the multinoulli distribution (defined in Section 3.10.2). This distribution is specified by a vector of N − 1 probabilities whose sum is less or equal to 1, each element of which provides the probability p_i = P(y = i | x). Equivalently, one can (more conveniently) specify a vector of N probabilities whose sum is exactly 1. The softmax non-linearity was designed for this purpose (Bridle, 1990):

p = softmax(a) ⟺ p_i = e^{a_i} / Σ_j e^{a_j},   (6.4)
where typically a = b + W h is the vector of scores whose elements a_i are associated with each category i, with larger relative scores yielding exponentially larger probabilities. The corresponding loss function is therefore L_NLL(p, y) = − log p_y. Note how minimizing this loss will push a_y up (increase the score a_y associated with the correct label y) while pushing down a_i for i ≠ y (decreasing the score of the other labels, in the context x). The first effect comes from the numerator of the softmax while the second effect comes from the normalizing denominator. These forces cancel on a specific example only if p_y = 1, and they cancel on average over examples (say sharing the same x) if p_i equals the fraction of times that y = i for this value of x. To see this, consider the gradient with respect to the scores a:

∂/∂a_k L_NLL(p, y) = ∂/∂a_k (− log p_y) = ∂/∂a_k (−a_y + log Σ_j e^{a_j})
                   = −1_{y=k} + e^{a_k} / Σ_j e^{a_j}
                   = p_k − 1_{y=k}

or

∂/∂a L_NLL(p, y) = (p − e_y)   (6.5)
where e_y = [0, . . . , 0, 1, 0, . . . , 0] is the one-hot vector with a 1 at position y. Examples that share the same x share the same a, so the average gradient on a over these examples is 0 when the average of the above expression cancels out, i.e., p = E_y[e_y | x] where the expectation is over these examples. Thus the optimal p_i for these examples is the average number of times that y = i among those examples. Over an infinite number of examples, we would obtain that the gradient is 0 when p_i perfectly estimates the true P(y = i | x). What the above gradient decomposition teaches us as well is the division of the total gradient into (1) a term due to the numerator (the e_y) and dependent on the actually observed target y and (2) a term independent of y but which corresponds to the gradient of the softmax denominator. The same principles and the role of the normalization constant (or “partition function”) can be seen at play in the training of Markov Random Fields, Boltzmann machines and RBMs, in Chapter 13. Note other interesting properties of the softmax. First of all, the gradient with respect to a does not saturate, i.e., the gradient does not vanish when the output probabilities approach 0 or 1 (a very confident model), except when the model is providing the correct answer. Specifically, let us consider the case where the correct label is i, i.e. y = i. The element of the gradient associated with an erroneous label, say j ≠ i, is

∂/∂a_j L_NLL(p, y) = p_j.   (6.6)
So if the model correctly predicts a low probability for class j, i.e. p_j ≈ 0, then the gradient is also close to zero. But if the model incorrectly and confidently predicts that j is the correct class, i.e., p_j ≈ 1, there will be a strong push to reduce a_j. Conversely, if the model incorrectly and confidently predicts that the correct class y should have a low probability, i.e., p_y ≈ 0, there will be a strong push (a gradient of about −1) to push a_y up. One way to see this is to imagine doing gradient descent on the a_j’s themselves (that is what backprop is really based on): the update on a_j would be proportional to minus one times the gradient on a_j, so a positive gradient on a_j (e.g., incorrectly confident that p_j ≈ 1) pushes a_j down, while a negative gradient on a_y (e.g., incorrectly confident that p_y ≈ 0) pushes a_y up. In fact note how a_y is always pushed up, because p_y − 1_{y=y} = p_y − 1 < 0, and the other scores a_j (for j ≠ y) are always pushed down, because their gradient is p_j > 0. There are other loss functions such as the squared error applied to softmax (or sigmoid) outputs (which was popular in the 80’s and 90’s) which have vanishing gradient when an output unit saturates (when the derivative of the non-linearity is near 0), even if the output is completely wrong (Solla et al., 1988). This may be a problem because it means that the parameters will basically not change, even though the output is wrong.
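The following small numpy check (an illustration, not part of the original text) confirms Eq. 6.5 numerically: the gradient of the negative log-likelihood with respect to the scores equals p − e_y, and for a confidently wrong model that gradient stays large rather than vanishing:

import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

def nll(a, y):
    return -np.log(softmax(a)[y])

a = np.array([5.0, -1.0, 0.5])   # the model confidently predicts class 0
y = 2                            # but the correct class is 2

p = softmax(a)
analytic = p - np.eye(3)[y]      # Eq. 6.5: p - one_hot(y)

numeric = np.zeros(3)            # finite-difference check of the same gradient
eps = 1e-6
for k in range(3):
    d = np.zeros(3); d[k] = eps
    numeric[k] = (nll(a + d, y) - nll(a - d, y)) / (2 * eps)

print(analytic)                  # approximately [0.99, 0.002, -0.99]: a strong push even though p_y is near 0
print(numeric)                   # matches the analytic gradient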
To see how the squared error interacts with the softmax output, we need to introduce a one-hot encoding of the label, y = e_i = [0, . . . , 0, 1, 0, . . . , 0], i.e., for the label y = i, we have y_i = 1 and y_j = 0, ∀j ≠ i. We will again consider the output of the network to be p = softmax(a), where, as before, a is the input to the softmax function (e.g. a = b + W h with h the output of the last hidden layer). For the squared error loss L2(p(a), y) = ||p(a) − y||², the gradient of the loss with respect to the input vector to the softmax, a, is given by:

∂/∂a_i L2(p(a), y) = (∂L(p(a), y)/∂p(a)) (∂p(a)/∂a_i)
                   = Σ_j 2(p_j(a) − y_j) p_j (1_{i=j} − p_i).   (6.7)
So if the model incorrectly predicts a low probability for the correct class y = i, i.e., if p_y = p_i ≈ 0, then the score for the correct class, a_y, does not get pushed up in spite of a large error, i.e., ∂/∂a_y L2(p(a), y) ≈ 0. For this reason, practitioners prefer to use the negative log-likelihood (cross entropy) criterion, with the softmax non-linearity (as well as with the sigmoid non-linearity), rather than applying the squared error criterion to these probabilities. Another property that could potentially be interesting is that the softmax output is invariant under additive changes of its input vector: softmax(a) = softmax(a + b) when b is a scalar added to all the elements of vector a. Finally, it is interesting to think of the softmax as a way to create a form of competition between the units (typically output units, but not necessarily) that participate in it: the softmax function reinforces the strongest filter output a_{i*} (because the exponential increases faster for larger values) while the other units get inhibited. This is analogous to the lateral inhibition that is believed to exist between nearby neurons in cortex, and at the extreme (when the a_i’s are large in magnitude) it becomes a form of winner-take-all (one of the outputs is nearly 1 and the others are nearly 0). A more computationally expensive form of competition is found with sparse coding, described in Section 19.3.

Neural Net Outputs as Parameters of a Conditional Distribution

In general, for any parametric probability distribution p(y | ω) with parameters ω, we can construct a conditional distribution p(y | x) by making ω a parametrized function of x, and learning that function: p(y | ω = fθ(x)), where fθ(x) is the output of a predictor, x is its input, and y can be thought of as a “target”. The use of the word “target” comes from the common cases of
classification and regression, where fθ(x) is really a prediction associated with random variable y, or with its expected value. However, in general ω = fθ(x) may contain parameters of the distribution of y other than its expected value. For example, it could contain its variance or covariance, in the case where y is conditionally Gaussian. In the above examples, with the squared error loss, ω is the mean of the Gaussian which captures the conditional distribution of y (which means that the variance is considered fixed, not a function of x). In the common classification case, ω contains the probabilities associated with the various events of interest. Once we view things in this way, if we apply the principle of maximum likelihood in the conditional case (Section 5.8.1), we automatically get as the natural cost function the negative log-likelihood L(x, y) = − log p(y | ω = fθ(x)). Besides the expected value of y, there could be other parameters of the conditional distribution of y that control the distribution of y, given x. For example, we may wish to learn the variance of a conditional Gaussian for y, given x, and that variance could be a function that varies with x or that is a constant with respect to x. If the variance σ² of y given x is not a function of x, its maximum likelihood value can be computed analytically because the maximum likelihood estimator of variance is simply the empirical mean of the squared difference between observations y and their expected value (here estimated by fθ(x)). In the scalar case, we could estimate σ² as follows:

σ² ← (1/n) Σ_{t=1}^{n} (y^{(t)} − fθ(x^{(t)}))²,   (6.8)

where (t) indicates the t-th training example (x^{(t)}, y^{(t)}). In other words, the conditional variance can simply be estimated from the mean squared error. If y is a d-vector and the conditional covariance is σ² times the identity, then the above formula should be modified as follows, again by setting the gradient of the log-likelihood with respect to σ to zero:

σ² ← (1/(nd)) Σ_{t=1}^{n} ||y^{(t)} − fθ(x^{(t)})||².   (6.9)

In the multivariate case with a diagonal covariance matrix with entries σ_i², we obtain

σ_i² ← (1/n) Σ_{t=1}^{n} (y_i^{(t)} − fθ,i(x^{(t)}))².   (6.10)

In the multivariate case with a full covariance matrix, we have

Σ ← (1/n) Σ_{t=1}^{n} (y^{(t)} − fθ(x^{(t)}))(y^{(t)} − fθ(x^{(t)}))^⊤.   (6.11)
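As a small numerical sketch of Eqs. 6.8–6.10 (an illustration only, with a fixed array standing in for the network outputs fθ(x)), the maximum likelihood variance of a conditional Gaussian with x-independent variance is just the mean of the squared residuals:

import numpy as np

rng = np.random.RandomState(0)
n, d = 5000, 3
f_x = rng.randn(n, d)                          # stand-in for the network outputs f_theta(x^(t))
true_sigma = np.array([0.5, 1.0, 2.0])
y = f_x + true_sigma * rng.randn(n, d)         # targets with known per-dimension noise

residuals = y - f_x
sigma2_scalar = np.mean(residuals ** 2)        # Eq. 6.9: a single sigma^2 shared by all d dimensions
sigma2_diag = np.mean(residuals ** 2, axis=0)  # Eq. 6.10: one sigma_i^2 per output dimension

print(np.sqrt(sigma2_diag))                    # approximately [0.5, 1.0, 2.0]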
If the variance Σ(x) is a function of x, there is in general no analytic solution for maximizing the likelihood, but we can compute the gradient of the objective function with respect to the parameters θ that contribute to defining the mapping from the input x to Σ(x). If Σ(x) is diagonal or scalar, only positivity must be enforced, e.g., using the softplus non-linearity: σ_i(x) = softplus(gθ(x)), where gθ(x) may be a neural network that takes x as input. A positive non-linearity may also be useful in the case where σ is a function of x, if we do not seek the maximum likelihood solution (for example we do not have immediate observed targets associated with that Gaussian distribution, because the samples from the Gaussian are used as input for further computation). Then we can make the free parameter ω defining the variance the argument of the positive non-linearity, e.g., σ_i(x) = softplus(ω_i). If the covariance is full and conditional, then a parametrization must be chosen that guarantees positive-definiteness of the predicted covariance matrix. This can be achieved by writing Σ(x) = B(x)B(x)^⊤, where B is an unconstrained square matrix. One practical issue if the matrix is full is that computing the likelihood is expensive, requiring O(d³) computation for the determinant and inverse of Σ(x) (or equivalently, and more commonly done, its eigendecomposition or that of B(x)). Besides the Gaussian, a simple and common example is the case where y is binary (i.e. Bernoulli distributed), where it is enough to specify ω = p(y = 1 | x). In the multinoulli case (multiple discrete values), ω is generally specified by a vector of probabilities (one per possible discrete value) summing to 1, e.g., via the softmax non-linearity discussed above. Another interesting and powerful example of output distribution for neural networks is the mixture model, and in particular the Gaussian mixture model, introduced in Section 3.10.5. Neural networks that compute the parameters of a mixture model were introduced in Jacobs et al. (1991); Bishop (1994). In the case of the Gaussian mixture model with N components,

p(y | x) = Σ_{i=1}^{N} p(c = i | x) N(y | µ_i(x), Σ_i(x)).
The neural network must have three kinds of outputs, p(c = i | x), µ_i(x), and Σ_i(x), which must satisfy different constraints:

1. Mixture components p(c = i | x): these form a multinoulli distribution over the N different components associated with latent7 variable c, and can
7 c is called latent because we do not observe it in the data: given input x and target y, it is not 100% clear which Gaussian component was responsible for y, but we can imagine that y was generated by picking one of them, and make that unobserved choice a random variable.
typically be obtained by a softmax over an N-vector, to guarantee that these outputs are positive and sum to 1.

2. Means µ_i(x): these indicate the center or mean associated with the i-th Gaussian component, and are unconstrained (typically with no non-linearity at all for these output units). If y is a d-vector, then the network must output an N × d matrix containing all these N d-vectors.

3. Covariances Σ_i(x): these specify the covariance matrix for each component i. For the general case of an unconditional (does not depend on x) but full covariance matrix, see Eq. 6.11 to set it by maximum likelihood. In many models the variance is both unconditional and diagonal (like assumed with Eq. 6.10) or even scalar (like assumed with Eq. 6.8 or 6.9).

It has been reported that gradient-based optimization of conditional Gaussian mixtures (on the output of neural networks) can be finicky, in part because one gets divisions (by the variance) which can be numerically unstable (when some variance gets to be small for a particular example, yielding very large gradients). One solution is to clip gradients (see Section 10.7.6 and Mikolov (2012); Pascanu and Bengio (2012); Graves (2013); Pascanu et al. (2013a)), while another is to scale the gradients heuristically (Murray and Larochelle, 2014).

Multiple Output Variables

When y is actually a tuple formed by multiple random variables y = (y_1, y_2, . . . , y_k), then one has to choose an appropriate form for their joint distribution, conditional on x = x. The simplest and most common choice is to assume that the y_i are conditionally independent, i.e.,

p(y_1, y_2, . . . , y_k | x) = Π_{i=1}^{k} p(y_i | x).
This brings us back to the single variable case, especially since the log-likelihood now decomposes into a sum of terms log p(yi | x). If each p(y i | x) is separately parametrized (e.g. a different neural network), then we can train these neural networks independently. However, a more common and powerful choice assumes that the different variables y i share some common factors, given x, that can be represented in some hidden layer of the network (such as the top hidden layer). See Sections 6.6 and 7.12 for a deeper treatment of the notion of underlying factors of variation and multi-task training: each (x, yi) pair of random variables can be associated with a different learning task, but it might be possible to exploit what these tasks have in common. See also Figure 7.6 illustrating these concepts. 163
If the conditional independence assumption is considered too strong, what can we do? At this point it is useful to step back and consider everything we know about learning a joint probability distribution. Since any probability distribution p(y; ω) parametrized by parameters ω can be turned into a conditional distribution p(y | x; θ) (by making ω a function ω = fθ(x) parametrized by θ), we can go beyond the simple parametric distributions we have seen above (Gaussian, Bernoulli, multinoulli), and use more complex joint distributions. If the set of values that y can take is small enough (e.g., we have 8 binary variables y_i, i.e., a joint distribution involving 2^8 = 256 possible values), then we can simply model all these joint occurrences as separate values, e.g., with a softmax and multinoulli over all these configurations. However, when the set of values that y_i can take cannot be easily enumerated and the joint distribution is not unimodal or factorized, we need other tools. The third part of this book is about the frontier of research in deep learning, and much of it is devoted to modeling such complex joint distributions, also called graphical models: see Chapters 13, 18, 19, 20. In particular, Section 12.5 discusses how sophisticated joint probability models with parameters ω can be coupled with neural networks that compute ω as a function of inputs x, yielding structured output models conditioned with deep learning.
6.3.3 Training Criterion and Regularizer
The loss function (often interpretable as a negative log-likelihood) tells us what we would like the learner to capture. Maximizing the conditional log-likelihood of model P over the true distribution Q, i.e., minimizing the expected loss E_{Q(x,y)}[− log p(y | x; θ)] = E_{Q(x,y)}[L(fθ(x), y)], makes p(y | x; θ) estimate the true Q(y | x) associated with the unknown data generating distribution, within the boundaries of what the chosen family of functions allows. See the end of Section 5.8 and Section 5.8.1 for a longer discussion. In practice we cannot minimize this expectation because we do not know the data generating distribution and because computing and minimizing this integral exactly would generally be intractable. Instead we are going to approximately minimize a training criterion J(θ) based on the empirical average of the loss (over the training set). The simplest such criterion is the average training loss (1/n) Σ_{t=1}^{n} L(fθ(x^{(t)}), y^{(t)}), where the training set is a set of n examples (x^{(t)}, y^{(t)}). However, better results can often be achieved by crippling the learner and preventing it from simply finding the best θ that minimizes the average training loss. This means that we combine the evidence coming from the data (the training set average loss) with some a priori preference on the different values that θ or fθ can take (the regularizer). If this concept (and
8 In principle, one could have different priors on different parameters, e.g., it is common to treat the output weights with a separate regularization coefficient, but the more hyperparameters, the more difficult is their optimization, discussed in Chapter 11.
9 See the Deep Learning Tutorials at http://deeplearning.net/tutorial/gettingstarted.html#l1-and-l2-regularization
as discussed in Chapter 5.9. Later in this book we discuss regularizers that are data-dependent (i.e., cannot be expressed easily as a pure prior on parameters), such as the contractive penalty (Chapter 15) as well as regularization methods that are difficult to interpret simply as added terms in the training criterion, such as dropout (Section 7.11).
6.3.4 Optimization Procedure
learning can be found in Chapter 8. Note that black-box optimization techniques are not the only tools to improve the optimization of deep networks. Many design choices in the construction of the family of functions, loss function and regularizer can have a major impact on the difficulty of optimizing the training criterion. Furthermore, instead of using generic optimization techniques, one can design optimization procedures that are specific to the learning problem and chosen architecture of the family of functions, for example by initializing the parameters of the final optimization routine from the result of a different optimization (or a series of optimizations, each initialized from the previous one). Because of the non-convexity of the training criterion, the initial conditions can make a very important difference, and this is what is exploited in the various pre-training strategies, Sections 8.6.4 and 16.1, as well as with curriculum learning (Bengio et al., 2009), Section 8.7.
6.4 Flow Graphs and Back-Propagation
The term back-propagation is often misunderstood as meaning the whole learning algorithm for multi-layer neural networks. Actually it just means the method for computing gradients in such networks. Furthermore, it is generally understood as something very specific to multi-layer neural networks, but once its derivation is understood, it can easily be generalized to arbitrary functions (for which computing a gradient is meaningful), and we describe this generalization here, focusing on the case of interest in machine learning where the output of the function to differentiate (e.g., the loss L or the training criterion J ) is a scalar and we are interested in its derivative with respect to a set of parameters (considered to be the elements of a vector θ), or equivalently, a set of inputs 10 . The partial derivative of J with respect to θ (called the gradient) tells us whether θ should be increased or decreased in order to decrease J , and is a crucial tool in optimizing the training objective. It can be readily proven that the back-propagation algorithm has optimal computational complexity in the sense that there is no algorithm that can compute the gradient faster (in the O(·) sense, i.e., up to an additive and multiplicative constant). The basic idea of the back-propagation algorithm is that the partial derivative of the cost J with respect to parameters θ can be decomposed recursively by taking into consideration the composition of functions that relate θ to J , via intermediate quantities that mediate that influence, e.g., the activations of hidden units in a deep neural network. 10
It is useful to know which inputs contributed most to the output or error made, and the sign of the derivative is also interesting in that context.
6.4.1 Chain Rule
The basic mathematical tool for considering derivatives through compositions of functions is the chain rule, illustrated in Figure 6.3. The partial derivative ∂y/∂x measures the locally linear influence of a variable x on another one y, while we denote ∇_θ J for the gradient vector of a scalar J with respect to some vector of variables θ. If x influences y which influences z, we are interested in how a tiny change in x propagates into a tiny change in z via a tiny change in y. In our case of interest, the “output” is the cost, or objective function z = J(g(θ)), we want the gradient with respect to some parameters x = θ, and there are intermediate quantities y = g(θ) such as neural net activations. The gradient of interest can then be decomposed, according to the chain rule, into

∇_θ J(g(θ)) = ∇_{g(θ)} J(g(θ)) (∂g(θ)/∂θ),   (6.12)

which works also when J, g or θ are vectors rather than scalars (in which case the corresponding partial derivatives are understood as Jacobian matrices of the appropriate dimensions). In the purely scalar case we can understand the chain rule as follows: a small change in θ will propagate into a small change in g(θ) by getting multiplied by ∂g(θ)/∂θ. Similarly, a small change in g(θ) will propagate into a small change in J(g(θ)) by getting multiplied by ∇_{g(θ)} J(g(θ)). Hence a small change in θ first gets multiplied by ∂g(θ)/∂θ to obtain the change in g(θ) and this then gets multiplied by ∇_{g(θ)} J(g(θ)) to obtain the change in J(g(θ)). Hence the ratio of the change in J(g(θ)) to the change in θ is the product of these partial derivatives.
Figure 6.3: The chain rule, illustrated in the simplest possible case, with z a scalar function of y, which is itself a scalar function of x. A small change ∆x in x gets turned into a small change ∆y in y through the partial derivative ∂y/∂x, from the first-order Taylor approximation of y(x), and similarly for z(y). Plugging the equation for ∆y into the equation for ∆z yields the chain rule.
Now, if g is a vector, we can rewrite the above as follows:

∇_θ J(g(θ)) = Σ_i (∂J(g(θ))/∂g_i(θ)) (∂g_i(θ)/∂θ),
Figure 6.4: Top: The chain rule, when there are two intermediate variables y1 and y 2 between x and z, creating two paths for changes in x to propagate and yield changes in z. Bottom: more general case, with n intermediate variables y 1 to y n.
which sums over the influences of θ on J (g(θ)) through all the intermediate variables gi (θ). This is illustrated in Figure 6.4 with x = θ, y i = gi (θ), and z = J (g(θ)).
6.4.2 Back-Propagation in an MLP
Whereas Example 6.2.1 illustrated the case of an MLP with a single hidden layer, let us consider in this section back-propagation for an ordinary but deep MLP, i.e., like the above vanilla MLP but with several hidden layers. For this purpose, we will recursively apply the chain rule illustrated in Figure 6.4. The algorithm proceeds by first computing the gradient of the cost J with respect to output units, and these are used to compute the gradient of J with respect to the top hidden layer activations, which directly influence the outputs. We can then
Algorithm 6.1 Forward computation associated with input x for a deep neural network with ordinary affine layers composed with an arbitrary elementwise differentiable (almost everywhere) non-linearity f. There are M such layers, each mapping their vector-valued input h^(k) to a pre-activation vector a^(k) via a weight matrix W^(k), which is then transformed via f into h^(k+1). The input vector x corresponds to h^(0) and the predicted outputs ŷ correspond to h^(M). The cost function L(ŷ, y) depends on the output ŷ and on a target y (see Section 6.3.2 for examples of loss functions). The loss may be added to a regularizer Ω (see Section 6.3.3 and Chapter 7) to obtain the example-wise cost J. Algorithm 6.2 shows how to compute gradients of J with respect to parameters W and b. For computational efficiency on modern computers (especially GPUs), it is important to implement these equations minibatch-wise, i.e., h^(k) (and similarly a^(k)) should really be a matrix whose second dimension is the example index in the minibatch. Accordingly, y and ŷ should have an additional dimension for the example index in the minibatch, while J remains a scalar, i.e., the average of the costs over all the minibatch examples.
h^(0) = x
for k = 1 . . . , M do
  a^(k) = b^(k) + W^(k) h^(k−1)
  h^(k) = f(a^(k))
end for
ŷ = h^(M)
J = L(ŷ, y) + λΩ
continue computing the gradients of lower level hidden units one at a time in the same way. The gradients on hidden and output units can be used to compute the gradient of J with respect to the parameters (e.g. weights and biases) of each layer (i.e., those that directly contribute to the output of that layer). Algorithm 6.1 describes in matrix-vector form the forward propagation computation for a classical multi-layer network with M layers, where each layer computes an affine transformation (defined by a bias vector b^(k) and a weight matrix W^(k)) followed by a non-linearity f. In general, the non-linearity may be different on different layers, and this is typically the case for the output layer (see Section 6.3.1). Hence each unit at layer k computes an output h_i^(k) as follows:

a_i^(k) = b_i^(k) + Σ_j W_ij^(k) h_j^(k−1)
h_i^(k) = f(a_i^(k)),   (6.13)

where we separate the affine transformation from the non-linear activation operation.
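A minimal numpy rendering of Algorithm 6.1 (a sketch only, with made-up layer sizes; the loss and regularizer here are the squared error and L2 penalty used earlier) might look as follows:

import numpy as np

def forward(x, weights, biases, f=np.tanh):
    # Algorithm 6.1: affine transformations composed with an element-wise non-linearity
    h = x
    activations = [h]
    for k, (W, b) in enumerate(zip(weights, biases)):
        a = b + W.dot(h)
        # identity output layer here; in general the output non-linearity differs (Section 6.3.1)
        h = a if k == len(weights) - 1 else f(a)
        activations.append(h)
    return h, activations

rng = np.random.RandomState(0)
sizes = [4, 8, 8, 1]                      # n_i, two hidden layers, n_o (arbitrary)
weights = [0.1 * rng.randn(m, n) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [np.zeros(m) for m in sizes[1:]]

x, y, lam = rng.randn(4), np.array([0.3]), 1e-4
y_hat, _ = forward(x, weights, biases)
J = np.sum((y_hat - y) ** 2) + lam * sum(np.sum(W ** 2) for W in weights)
print(y_hat, J)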
6.4.3 Back-Propagation in a General Flow Graph
In this section we call the intermediate quantities between inputs (parameters θ) and output (cost J) of the graph nodes u_j (indexed by j) and consider the general case in which they form a directed acyclic graph that has J as its final node u_N, which depends on all the other nodes u_j. The back-propagation algorithm exploits the chain rule for derivatives to compute ∂J/∂u_j when ∂J/∂u_i has already been computed for successors u_i of u_j in the graph, e.g., the hidden units in the next layer downstream. This recursion can be initialized by noting that ∂J/∂u_N = ∂J/∂J = 1 and at each step only requires using the partial derivatives associated with each arc of the graph, ∂u_i/∂u_j, when u_i is a successor of u_j.
Figure 6.5: Illustration of recursive forward computation, where at each node ui we compute a value u i = fi (ai ), with a i being the list of values from parents u j of node ui . Following Algorithm 6.3, the overall inputs to the graph are u1 . . . , uM (e.g., the parameters we may want to tune during training), and there is a single scalar output uN (e.g., the loss which we want to minimize).
Algorithm 6.3 Flow graph forward computation. Each node computes numerical value u_i by applying a function f_i to its argument list a_i that comprises the values of previous nodes u_j, j < i, with j ∈ parents(i). The input to the flow graph is the vector x, and is set into the first M nodes u_1 to u_M. The output of the flow graph is read off the last (output) node u_N.
for i = 1 . . . , M do
  u_i ← x_i
end for
for i = M + 1 . . . , N do
  a_i ← (u_j)_{j∈parents(i)}
  u_i ← f_i(a_i)
end for
return u_N
More generally than multi-layered networks, we can think about decomposing a function J (θ) into a more complicated graph of computations. This graph is called a flow graph. Each node u i of the graph denotes a numerical quantity that is obtained by performing a computation requiring the values uj of other nodes, with j < i. The nodes satisfy a partial order which dictates in what order the computation can proceed. In practical implementations of such functions (e.g. with the criterion J (θ) or its value estimated on a minibatch), the final computation is obtained as the composition of simple functions taken from a given set (such as the set of numerical operations that the numpy library can perform on arrays of numbers).
Figure 6.6: Illustration of indirect effect and direct effect of variable u_1 on variable u_3 in a flow graph, which means that the derivative of u_3 with respect to u_1 must include the sum of two terms, one for the direct effect (derivative of u_3 with respect to its first argument) and one for the indirect effect through u_2 (involving the product of the derivative of u_3 with respect to u_2 times the derivative of u_2 with respect to u_1). Forward computation of u_i's (as in Figure 6.5) is indicated with upward full arrows, while backward computation (of derivatives with respect to u_i's, as in Figure 6.7) is indicated with downward dashed arrows.
We will define the back-propagation in a general flow-graph, using the following generic notation: u_i = f_i(a_i), where a_i is a list of arguments for the application of f_i to the values u_j for the parents of i in the graph: a_i = (u_j)_{j∈parents(i)}. This is illustrated in Figure 6.5. The overall computation of the function represented by the flow graph can thus be summarized by the forward computation algorithm, Algorithm 6.3. In addition to having some code that tells us how to compute f_i(a_i) for some values in the vector a_i, we also need some code that tells us how to compute its partial derivatives, ∂f_i(a_i)/∂a_{ik}, with respect to any immediate argument a_{ik}. Let k = π(i, j) denote the index of u_j in the list a_i. Note that u_j could influence u_i through multiple paths. Whereas ∂u_i/∂u_j would denote the total gradient adding up all of these influences, ∂f_i(a_i)/∂a_{ik} only denotes the derivative of f_i with respect to its specific k-th argument, keeping the other arguments fixed, i.e., only considering the influence through the arc from u_j to u_i. In general, when manipulating
partial derivatives, one should keep clear in one's mind (and implementation) the notational and semantic distinction between a partial derivative that includes all paths and one that includes only the immediate effect of a function's argument on the function output, with the other arguments considered fixed. For example consider f_3(a_{3,1}, a_{3,2}) = e^{a_{3,1}+a_{3,2}} and f_2(a_{2,1}) = a_{2,1}², while u_3 = f_3(u_2, u_1) and u_2 = f_2(u_1), illustrated in Figure 6.6. The direct derivative of f_3 with respect to its argument a_{3,2} is ∂f_3/∂a_{3,2} = e^{a_{3,1}+a_{3,2}}, while if we consider the variables u_3 and u_1 to which these correspond, there are two paths from u_1 to u_3, and we obtain as derivative the sum of partial derivatives over these two paths, ∂u_3/∂u_1 = e^{u_1+u_2}(1 + 2u_1). The results are different because ∂u_3/∂u_1 involves not just the direct dependency of u_3 on u_1 but also the indirect dependency through u_2.
Figure 6.7: Illustration of recursive backward computation, where we associate to each node j not just the values u_j computed in the forward pass (Figure 6.5, bold upward arrows) but also the gradient ∂u_N/∂u_j with respect to the output scalar node u_N. These gradients are recursively computed in exactly the opposite order, as described in Algorithm 6.4 by using the already computed ∂u_N/∂u_i of the children i of j (dashed downward arrows).
Armed with this understanding, we can define the back-propagation algorithm as follows, in Algorithm 6.4, which would be computed after the forward propagation (Algorithm 6.3) has been performed. Note the recursive nature of the application of the chain rule, in Algorithm 6.4: we compute the gradient on node j by re-using the already computed gradient for children nodes i, starting the recurrence from the trivial ∂u_N/∂u_N = 1 that sets the gradient for the output node. This is illustrated in Figure 6.7. This recursion is a form of efficient factorization of the total gradient, i.e.,
Algorithm 6.4 Back-propagation computation of a flow graph (full, upward arrows, Figs. 6.7 and 6.5), which itself produces an additional flow graph (dashed, backward arrows). See the forward propagation in a flow-graph (Algorithm 6.3, to be performed first) and the required data structure. In addition, a quantity ∂u_N/∂u_i needs to be stored (and computed) at each node, for the purpose of gradient back-propagation. Below, the notation π(i, j) is the index of u_j as an argument to f_i. The back-propagation algorithm efficiently computes ∂u_N/∂u_i for all i's (traversing the graph backwards this time), and in particular we are interested in the derivatives of the output node u_N with respect to the “inputs” u_1 . . . , u_M (which could be the parameters, in a learning setup). The cost of the overall computation is proportional to the number of arcs in the graph, assuming that the partial derivative associated with each arc requires a constant time. This is of the same order as the number of computations for the forward propagation.
∂u_N/∂u_N ← 1
for j = N − 1 down to 1 do
  ∂u_N/∂u_j ← Σ_{i: j∈parents(i)} (∂u_N/∂u_i) (∂f_i(a_i)/∂a_{i,π(i,j)})
end for
return (∂u_N/∂u_i)_{i=1}^{M}
it is an application of the principles of dynamic programming11. Indeed, the derivative of the output node with respect to any node can also be written down in this intractable form:

∂u_N/∂u_i = Σ_{paths u_{k_1} . . . , u_{k_n}: k_1=i, k_n=N} Π_{j=2}^{n} ∂u_{k_j}/∂u_{k_{j−1}}

where the paths u_{k_1} . . . , u_{k_n} go from the node k_1 = i to the final node k_n = N in the flow graph and ∂u_{k_j}/∂u_{k_{j−1}} refers only to the immediate derivative considering u_{k_{j−1}} as the argument number π(k_j, k_{j−1}) of a_{k_j} into u_{k_j}, i.e.,

∂u_{k_j}/∂u_{k_{j−1}} = ∂f_{k_j}(a_{k_j}) / ∂a_{k_j,π(k_j,k_{j−1})}.

Computing the sum as above would be intractable because the number of possible paths can be exponential in the depth of the graph. The back-propagation algorithm is efficient because it employs a dynamic programming strategy to reuse
[11] Here we refer to “dynamic programming” in the sense of table-filling algorithms that avoid re-computing frequently used subexpressions. In the context of machine learning, “dynamic programming” can also refer to iterating Bellman’s equations. That is not the kind of dynamic programming we refer to here.
rather than re-compute partial sums associated with the gradients on intermediate nodes. Although the above was stated as if the u_i’s were scalars, exactly the same procedure can be run with the u_i’s being tuples of numbers (more easily represented by vectors). In that case the equations remain valid, and the multiplication of scalar partial derivatives becomes the multiplication of a row vector of gradients ∂u_N/∂u_i with a Jacobian of partial derivatives associated with the j → i arc of the graph, ∂f_i(a_i)/∂a_{i,π(i,j)}. In the case where minibatches are used during training, u_i would actually be a whole matrix (the extra dimension being for the examples in the minibatch). This would then turn the basic computation into matrix-matrix products rather than matrix-vector products, and the former can be computed much more efficiently than a sequence of matrix-vector products (e.g., with the BLAS library), especially on modern computers and GPUs, which rely more and more on parallelization through many cores (the processing for each example in the minibatch can essentially be done in parallel).
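As a small illustration of the minibatch case (names and shapes are illustrative), back-propagating through a single affine layer turns into a pair of matrix-matrix products:

```python
import numpy as np

def affine_fprop(X, W, b):
    """Forward pass of one affine layer for a minibatch X of shape (batch, d_in)."""
    return X @ W + b

def affine_bprop(X, W, dY):
    """Backward pass: given dJ/dY of shape (batch, d_out), return gradients with
    respect to the layer input and its parameters. Each line is a single
    matrix-matrix product (or a sum over the batch)."""
    dX = dY @ W.T        # (batch, d_in)
    dW = X.T @ dY        # (d_in, d_out)
    db = dY.sum(axis=0)  # (d_out,)
    return dX, dW, db

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 5))
W = rng.standard_normal((5, 3))
b = np.zeros(3)
Y = affine_fprop(X, W, b)
dX, dW, db = affine_bprop(X, W, np.ones_like(Y))  # gradient of J = sum(Y)
```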
6.4.4 Symbolic Back-propagation and Automatic Differentiation
The algorithm for generalized back-propagation (Algorithm 6.4) was presented with the interpretation that actual computations take place at each step of the algorithm. This generalized form of back-propagation is just a particular way to perform automatic differentiation (Rall, 1981) in computational flow graphs defined by Algorithm 6.3. Automatic differentiation automatically obtains derivatives of a given expression and has numerous uses in machine learning (Baydin et al., 2015). As an alternative (and often as a debugging tool), derivatives can be obtained by numerical methods based on measuring the effects of small changes, called numerical differentiation (Lyness and Moler, 1967). For example, a finite difference approximation of the gradient follows from the definition of the derivative as the ratio of the change in output resulting from a change in input, divided by the change in input. Methods based on random perturbations also exist, which randomly jiggle all the input variables (e.g., parameters) and associate these random input changes with the resulting overall change in the output variable in order to estimate the gradient (Spall, 1992). However, for obtaining a gradient (i.e., with respect to many variables, e.g., the parameters of a neural network), back-propagation has two advantages over numerical differentiation: (1) it performs exact computation (up to machine precision), and (2) it is computationally much more efficient, obtaining all the required derivatives in one go. Numerical differentiation methods, in contrast, either require redoing the forward propagation separately for each parameter (keeping the other ones fixed) or yield stochastic estimators (from a random perturbation of all parameters) whose variance grows linearly with the number of parameters.
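As an illustration of the numerical approach, here is a minimal sketch of a centered finite-difference gradient check, a common debugging tool; the test function and step size are arbitrary choices, not prescribed by the text.

```python
import numpy as np

def finite_difference_grad(f, x, eps=1e-5):
    """Centered finite-difference estimate of the gradient of a scalar function f at x.
    This needs two extra function evaluations per parameter, which is why it is used
    for debugging rather than for training."""
    grad = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        grad[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return grad

# Check the analytic gradient of f(x) = sum(x**2), which is 2*x.
x = np.array([0.5, -1.0, 2.0])
numeric = finite_difference_grad(lambda v: np.sum(v ** 2), x)
print(np.max(np.abs(numeric - 2 * x)))  # should be tiny
```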
Automatic differentiation of a function with d inputs and m outputs can be done either by carrying derivatives forward or by carrying them backwards. The former is more efficient when d < m and the latter is more efficient when d > m. In our use case, the output is a scalar (the cost), and the backwards approach, called reverse accumulation, i.e., back-propagation, is much more efficient than the approach of propagating derivatives forward in the graph. Although Algorithm 6.4 can be seen as a form of automatic differentiation, it has another interpretation: each step symbolically specifies how gradient computation could be done, given a symbolic specification of the function to differentiate, i.e., it can be used to perform symbolic differentiation. Whereas automatic differentiation manipulates and outputs numbers, given a symbolic expression (the program specifying the function to be computed and differentiated), symbolic differentiation manipulates and outputs symbolic expressions, i.e., pieces of program, producing a symbolic expression for computing the derivatives. The popular Torch library[12] for deep learning, as well as most other open source deep learning libraries, implements a limited form of automatic differentiation, restricted to the “programs” obtained by composing a predefined set of operations, each corresponding to a “module”. The set of these modules is designed such that many neural network architectures and computations can be performed by composing the building blocks represented by each of these modules. Each module is defined by two main functions: (1) one that computes the outputs y of the module given its inputs x, e.g., with an “fprop” function y = module.fprop(x), and (2) one that computes the gradient ∂J/∂x of a scalar (typically the minibatch cost J) with respect to the inputs x, given the gradient ∂J/∂y with respect to the outputs, e.g., with a “bprop” function ∂J/∂x = module.bprop(∂J/∂y). The bprop function thus implicitly knows the Jacobian of the x to y mapping, ∂y/∂x, at x, and knows how to multiply it with a given vector, which for the back-propagation computation will be the gradient on the output, ∂J/∂y, i.e., it computes

    ∂J/∂x = (∂y/∂x)^T ∂J/∂y

if we take the convention that gradient vectors are column vectors. In practice, implementations work in parallel over a whole minibatch (transforming matrix-vector operations into matrix-matrix operations) and may operate on objects which are not vectors (maybe higher-order tensors like those involved with images or sequences of vectors).
[12] See torch.ch.
Furthermore, the bprop function does not have to explicitly compute the Jacobian matrix ∂y/∂x and perform an actual matrix multiplication: it can do that matrix multiplication implicitly, which is often more efficient. For example, if the true Jacobian is diagonal, then the actual number of computations required is much less than the size of the Jacobian matrix. To keep computations efficient and avoid the overhead of the glue required to compose modules together, neural net packages such as Torch define modules that perform coarse-grained operations such as the cross-entropy loss, a convolution, the affine operation associated with a fully-connected neural network layer, or a softmax. This means that if one wants to write differentiable code for some computation that is not covered by the existing set of modules, one has to write one’s own code for a new module, providing both the code for fprop and the code for bprop. This is in contrast with standard automatic differentiation systems, which know how to compute derivatives through all the operations in a general-purpose programming language such as C. Instead of interpreting Algorithm 6.4 as a recipe for backwards automatic differentiation, it can be interpreted as a recipe for backwards symbolic differentiation, and this is what the Theano (Bergstra et al., 2010b; Bastien et al., 2012) library[13] does. Like Torch, it only covers a predefined set of operations (i.e., a language that is a subset of usual programming languages), but it is a much larger and more fine-grained set of operations, covering most of the operations on tensors and linear algebra defined in Python’s numpy library of numerical computation. It is thus very rare that a user would need to write a new module for Theano, except if one wants to provide an alternative implementation (say, more efficient or numerically stable in some cases). An immediate advantage of symbolic differentiation is that because it maps symbolic expressions to symbolic expressions, it can be applied multiple times and yield higher-order derivatives. Another immediate advantage is that it can take advantage of the other tools of symbolic computation (Buchberger et al., 1983), such as simplification (to make computation faster and more memory-efficient) and transformations that make the computation more numerically stable (Bergstra et al., 2010b). These simplification operations keep it very efficient in terms of computation and memory usage even with a set of fine-grained operations such as individual tensor additions and multiplications. Theano also provides a compiler of the resulting expressions into C for CPUs and GPUs, i.e., the same high-level expression can be implemented in different ways depending on the underlying hardware.
[13] See http://deeplearning.net/software/theano/.
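To illustrate the module interface described above, here is a minimal sketch of a module exposing fprop and bprop methods; the class and method names mirror the description in the text but are illustrative and do not reproduce the actual API of Torch or any other library.

```python
import numpy as np

class TanhModule:
    """Toy element-wise tanh module exposing the fprop/bprop interface."""

    def fprop(self, x):
        # Cache the output so that bprop can reuse it.
        self.y = np.tanh(x)
        return self.y

    def bprop(self, dJ_dy):
        # The Jacobian of an element-wise tanh is diagonal, so the
        # Jacobian-vector product is an element-wise multiplication and the
        # full Jacobian matrix is never formed explicitly.
        return dJ_dy * (1.0 - self.y ** 2)

module = TanhModule()
x = np.array([0.1, -0.5, 2.0])
y = module.fprop(x)
dJ_dx = module.bprop(np.ones_like(y))  # gradient of J = sum(y) with respect to x
```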
6.4.5 Back-propagation Through Random Operations and Graphical Models
Whereas traditional neural networks perform deterministic computation, they can be extended to perform stochastic computation. In this case, we can think of the network as defining a sampling process that deterministically transforms some random values. We can then apply back-propagation as usual, with the underlying random values as inputs to the network. As an example, let us consider the operation consisting of drawing samples z from a Gaussian distribution with mean µ and variance σ^2: z ∼ N(µ, σ^2). Because an individual sample of z is not produced by a function, but rather by a sampling process whose output changes every time we query it, it may seem counterintuitive to take the derivatives of z with respect to the parameters of its distribution, µ and σ^2. However, we can rewrite the sampling process as transforming an underlying random value η ∼ N(0, 1) to obtain a sample from the desired distribution:

    z = µ + ση    (6.14)

We are now able to back-propagate through the sampling operation, by regarding it as a deterministic operation with an extra input. Crucially, the extra input is a random variable whose distribution is not a function of any of the variables whose derivatives we want to calculate. The result tells us how an infinitesimal change in µ or σ would change the output if we could repeat the sampling operation again with the same value of η. Being able to back-propagate through this sampling operation allows us to incorporate it into a larger graph; e.g., we can compute the derivatives of some loss function J(z). Moreover, we can introduce functions that shape the distribution, e.g., µ = f(x; θ) and σ = g(x; θ), and use back-propagation through these functions to derive ∇_θ J(z). The principle used in this Gaussian sampling example is true in general, i.e., given a value z sampled from distribution p(z | ω) whose parameters ω may depend on other quantities of interest, we can rewrite z ∼ p(z | ω) as z = f(ω, η), where η is a source of randomness that is independent of any of the variables that influence ω. TODO– add discussion of discrete random variables, REINFORCE.
It is true that one can express the generation process this way for any variable, but for discrete variables it can be pointless, since the gradient of the thresholding operation is zero or undefined everywhere; in that case one can fall back to REINFORCE and the expected loss. In neural network applications, we typically choose η to be drawn from some simple distribution, such as a unit uniform or unit Gaussian distribution, and achieve more complex distributions by allowing the deterministic portion of the network to reshape its input. This is actually how the random generators for parametric distributions are implemented in software, i.e., by performing operations on approximately independent sources of noise (such as random bits). So long as the function f in the above equation is differentiable with respect to ω, we can back-propagate through the sampling operation. The idea of propagating gradients or optimizing through stochastic operations is old (Price, 1958; Bonnet, 1964); it was first used for machine learning in the context of reinforcement learning (Williams, 1992), variational approximations (Opper and Archambeau, 2009), and more recently, stochastic or generative neural networks (Bengio et al., 2013a; Kingma, 2013; Kingma and Welling, 2014b,a; Rezende et al., 2014; Goodfellow et al., 2014c). Many networks, such as denoising autoencoders or networks regularized with dropout, are also naturally designed to take noise as an input without requiring any special reparameterization to make the noise independent from the model.
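As a small illustration of Eq. 6.14 (with an arbitrary example loss and manually derived gradients), back-propagating through a reparameterized Gaussian sample looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.0, 2.0
eta = rng.standard_normal()   # underlying noise, independent of mu and sigma
z = mu + sigma * eta          # reparameterized sample, Eq. 6.14

# Example loss J(z) = (z - 3)**2; gradients follow by the chain rule through
# the deterministic mapping z = mu + sigma * eta:
dJ_dz = 2.0 * (z - 3.0)
dJ_dmu = dJ_dz * 1.0          # dz/dmu = 1
dJ_dsigma = dJ_dz * eta       # dz/dsigma = eta
```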
6.5 Universal Approximation Properties and Depth
A linear model, mapping from features to outputs via matrix multiplication, can by definition represent only linear functions. It has the advantage of being easy to train because many loss functions result in a convex optimization problem when applied to linear models. Unfortunately, we often want to learn non-linear functions. At first glance, we might presume that learning a non-linear function requires designing a specialized model family for the kind of non-linearity we want to learn. However, it turns out that feedforward networks with hidden layers provide a universal approximation framework. Specifically, the universal approximation theorem (Hornik et al., 1989; Cybenko, 1989) states that a feedforward network with a linear output layer and at least one hidden layer with any “squashing” activation function (such as the logistic sigmoid activation function) can approximate any Borel measurable function from one finite-dimensional space to another with any desired non-zero amount of error, provided that the network is given enough hidden units. The derivatives of the feedforward network can also approximate the derivatives of the function arbitrarily well (Hornik et al., 1990). The
concept of Borel measurability is beyond the scope of this book; for our purposes it suffices to say that any continuous function on a closed and bounded subset of R^n is Borel measurable and therefore may be approximated by a neural network. A neural network may also approximate any function mapping from any finite dimensional discrete space to another. The universal approximation theorem means that regardless of what function we are trying to learn, we know that a large MLP will be able to represent this function. However, we are not guaranteed that the training algorithm will be able to learn that function. Even if the MLP is able to represent the function, learning can fail for two different reasons. First, the optimization algorithm used for training may not be able to find the value of the parameters that corresponds to the desired function. Second, the training algorithm might choose the wrong function due to overfitting. Recall from Chapter 5.4 that the “no free lunch” theorem shows that there is no universal machine learning algorithm. Even though feedforward networks provide a universal system for representing functions, there is no universal procedure for examining a training set and choosing the right set of functions among the family of functions our approximator can represent: there could be many functions within our family that fit the data well, and we need to choose one (this is basically the overfitting scenario). Another, related problem facing our universal approximation scheme is the size of the model needed to represent a given function. The universal approximation theorem says that there exists a network large enough to achieve any degree of accuracy we desire, but it does not say how large this network will be. Barron (1993) provides some bounds on the size of a single-layer network needed to approximate a broad class of functions. Unfortunately, in the worst case, an exponential number of hidden units (to basically record every input configuration that needs to be distinguished) may be required. This is easiest to see in the binary case: the number of possible binary functions on vectors v ∈ {0, 1}^n is 2^{2^n}, and selecting one such function requires 2^n bits, which will in general require O(2^n) degrees of freedom. In summary, a feedforward network with a single layer is sufficient to represent any function, but it may be infeasibly large and may fail to learn and generalize correctly. Both of these failure modes suggest that we may want to use deeper models. First, we may want to choose a model with more than one hidden layer in order to avoid needing to make the model infeasibly large. There exist families of functions which can be approximated efficiently by an architecture with depth greater than some value d, but require a much larger model if depth is restricted to be less than or equal to d. In many cases, the number of hidden units required by the shallow model is exponential in n. Such results have been proven for logic
gates (Håstad, 1986), linear threshold units with non-negative weights (Håstad and Goldmann, 1991), polynomials (Delalleau and Bengio, 2011) organized as deep sum-product networks (Poon and Domingos, 2011), and more recently, for deep networks of rectifier units (Pascanu et al., 2013b). Of course, there is no guarantee that the kinds of functions we want to learn in applications of machine learning (and in particular for AI) share such a property. We may also want to choose a deep model for statistical reasons. Any time we choose a specific machine learning algorithm, we are implicitly stating some set of prior beliefs we have about what kind of function the algorithm should learn. Choosing a deep model encodes a very general belief that the function we want to learn should involve composition of several simpler functions. This can be interpreted from a representation learning point of view as saying that we believe the learning problem consists of discovering a set of underlying factors of variation that can in turn be described in terms of other, simpler underlying factors of variation. Alternately, we can interpret the use of a deep architecture as expressing a belief that the function we want to learn is a computer program consisting of multiple steps, where each step makes use of the previous step’s output. These intermediate outputs are not necessarily factors of variation, but can instead be analogous to counters or pointers that the network uses to organize its internal processing. Empirically, greater depth does seem to result in better generalization for a wide variety of tasks (Bengio et al., 2007b; Erhan et al., 2009; Bengio, 2009; Mesnil et al., 2011; Goodfellow et al., 2011; Ciresan et al., 2012; Krizhevsky et al., 2012b; Sermanet et al., 2013; Farabet et al., 2013a; Couprie et al., 2013; Kahou et al., 2013; Goodfellow et al., 2014d; Szegedy et al., 2014a). See Fig. 6.8 for an example of some of these empirical results. This suggests that using deep architectures does indeed express a useful prior over the space of functions the model can learn.
Figure 6.8: Empirical results showing that deeper networks generalize better when used to transcribe multi-digit numbers from photographs of addresses. Reproduced with permission from Goodfellow et al. (2014d). Left) The test set accuracy consistently increases with increasing depth. Right) This effect cannot be explained simply by the model being larger; one can also increase the model size by increasing the width of each layer. The test accuracy cannot be increased nearly as well by increasing the width, only by increasing the depth. This suggests that using a deep model expresses a useful preference over the space of functions the model can learn. Specifically, it expresses a belief that the function should consist of many simpler functions composed together. This could result either in learning a representation that is composed in turn of simpler representations (e.g., corners defined in terms of edges) or in learning a program with sequentially dependent steps (e.g., first locate a set of objects, then segment them from each other, then recognize them).
6.6 Feature / Representation Learning
Let us consider again the single layer networks such as the perceptron, linear regression and logistic regression: such linear models are appealing because training them involves a convex optimization problem,[14] i.e., an optimization problem with some convergence guarantees towards a global optimum, irrespective of initial conditions. Simple and well-understood optimization algorithms are available in this case. However, this limits the representational capacity too much: many tasks, for a given choice of input representation x (the raw input features), cannot be solved by using only a linear predictor. What are our options to avoid that limitation? 1. One option is to use a kernel machine (Williams and Rasmussen, 1996; Schölkopf et al., 1999), i.e., to consider a fixed mapping from x to φ(x), where φ(x) is of much higher dimension. In this case, f_θ(x) = b + w · φ(x) can be linear in the parameters (and in φ(x)) and optimization remains convex (or even analytic). By exploiting the kernel trick, we can computationally handle a high-dimensional φ(x) (or even an infinite-dimensional one) so long as the kernel k(u, v) = φ(u) · φ(v) (where · is the appropriate dot product for the space of φ(·)) can be computed efficiently. If φ(x) is of high enough dimension, we can always have enough capacity to fit the training set, but generalization is not at all guaranteed: it will depend on the appropriateness of the choice of φ as a feature space for our task. Kernel machines theory clearly identifies the choice of φ with the choice of a prior. This leads to kernel engineering, which is equivalent to feature engineering, discussed next. The other type of kernel (that is very commonly used) embodies a very broad prior, such as smoothness, e.g., the Gaussian (or RBF) kernel k(u, v) = exp(−||u − v||^2/σ^2). Unfortunately, this prior may be insufficient, i.e., too broad and sensitive to the curse of dimensionality, as introduced in Section 5.13.1 and developed in more detail in Chapter 16.
[14] Or even one for which an analytic solution can be computed, as with linear regression or the case of some Gaussian process regression models.
2. Another option is to manually engineer the representation or features φ(x). Most industrial applications of machine learning rely on hand-crafted features and most of the research and development effort (as well as a very large fraction of the scientific literature in machine learning and its applications) goes into designing new features that are most appropriate to the task at hand. Clearly, faced with a problem to solve and some prior knowledge in the form of representations that are believed to be relevant, the prior knowledge can be very useful. This approach is therefore common in practice, but is not completely satisfying because it involves a very task-specific engineering work and a laborious never-ending effort to improve systems by designing better features. If there were some more general feature learning approaches that could be applied to a large set of related tasks (such as those involved in AI), we would certainly like to take advantage of them. Since humans seem to be able to learn a lot of new tasks (for which they were not programmed by evolution), it seems that such broad priors do exist. This whole question is discussed in a lot more detail in Bengio and LeCun (2007a), and motivates the third option. 3. The third option is to learn the features, or learn the representation. In a sense, it allows one to interpolate between the almost agnostic approach of a kernel machine with a general-purpose smoothness kernel (such as RBF SVMs and other non-parametric statistical models) and full designerprovided knowledge in the form of a fixed representation that is perfectly tailored to the task. This is equivalent to the idea of learning the kernel, except that whereas most kernel learning methods only allow very few degrees of freedom in the learned kernel, representation learning methods such as those discussed in this book (including multi-layer neural networks) allow the feature function φ(·) to be very rich (with a number of parameters that can be in the millions or more, depending on the amount of data available). This is equivalent to learning the hidden layers, in the case of a multi-layer neural network. Besides smoothness (which comes for example from regularizers such as weight decay), other priors can be incorporated in this feature learning. The most celebrated of these priors is depth, discussed above (Section 6.5). Other priors are discussed in Chapter 16. This whole discussion is clearly not specific to neural networks and supervised learning, and is one of the central motivations for this book.
6.7 Piecewise Linear Hidden Units
Most of the recent improvement in the performance of deep neural networks can be attributed to increases in computational power and the size of datasets. The machine learning algorithms involved in recent state-of-the-art systems have mostly existed since the 1980s, with a few recent conceptual advances contributing significantly to increased performance. One of the main algorithmic improvements that has had a significant impact is the use of piecewise linear units, such as absolute value rectifiers and rectified linear units. Such units consist of two linear pieces and their behavior is driven by a single weight vector. Jarrett et al. (2009b) observed that “using a rectifying non-linearity is the single most important factor in improving the performance of a recognition system” among several different factors of neural network architecture design. For small datasets, Jarrett et al. (2009b) observed that using rectifying non-linearities is even more important than learning the weights of the hidden layers. Random weights are sufficient to propagate useful information through a rectified linear network, allowing the classifier layer at the top to learn how to map different feature vectors to class identities. When more data is available, learning becomes relevant because it can extract more knowledge from it, and learning typically beats fixed or random settings of parameters. Glorot et al. (2011b) showed that learning is far easier in deep rectified linear networks than in deep networks that have curvature or two-sided saturation in their activation functions. Because the behavior of the unit is linear over half of its domain, it is easy for an optimization algorithm to tell how to improve the behavior of a unit, even when the unit’s activations are far from optimal. Just as piecewise linear networks are good at propagating information forward, back-propagation in such a network is also piecewise linear and propagates information about the error derivatives to all of the gradients in the network. Each piecewise linear function can be decomposed into different regions corresponding to different linear pieces. When we change a parameter of the network, the resulting change in the network’s activity is linear until the point that it causes some unit to go from one linear piece to another. Traditional units such as sigmoids are more prone to discarding information due to saturation both in forward propagation and in back-propagation, and the response of such a network to a change in a single parameter may be highly nonlinear even in a small neighborhood. Glorot et al. (2011b) motivate rectified linear units from biological considerations. The half-rectifying non-linearity was intended to capture these properties of biological neurons: 1) For some inputs, biological neurons are completely inactive. 2) For some inputs, a biological neuron’s output is proportional to its input. 3) Most of the time, biological neurons operate in the regime where they
are inactive (e.g., they should have sparse activations). One drawback to rectified linear units is that they cannot learn via gradient-based methods on examples for which their activation is zero. This problem can be mitigated by initializing the biases to a small positive number, but it is still possible for a rectified linear unit to learn to de-activate and then never be activated again. Goodfellow et al. (2013a) introduced maxout units and showed that maxout units can successfully learn in conditions where rectified linear units become stuck. Maxout units are also piecewise linear, but unlike rectified linear units, each piece of the linear function has its own weight vector, so whichever piece is active can always learn. Due to the greater number of weight vectors, maxout units typically need extra regularization such as dropout, though they can work satisfactorily if the training set is large and the number of pieces per unit is kept low (Cai et al., 2013). Maxout units have a few other benefits. In some cases, one can gain some statistical and computational advantages by requiring fewer parameters. Specifically, if the features captured by n different linear filters can be summarized without losing information by taking the max over each group of k features, then the next layer can get by with k times fewer weights. Because each unit is driven by multiple filters, maxout units have some redundancy that helps them to resist forgetting how to perform tasks that they were trained on in the past. Neural networks trained with stochastic gradient descent are generally believed to suffer from a phenomenon called catastrophic forgetting, but maxout units tend to exhibit only mild forgetting (Goodfellow et al., 2014a). Maxout units can also be seen as learning the activation function itself rather than just the relationship between units. With large enough k, a maxout unit can learn to approximate any convex function with arbitrary fidelity. In particular, maxout with two pieces can learn to implement the rectified linear activation function or the absolute value rectification function. This same general principle of using linear behavior to obtain easier optimization also applies in other contexts besides deep linear networks. Recurrent networks can learn from sequences and produce a sequence of states and outputs. When training them, one needs to propagate information through several time steps, which is much easier when some linear computations (with some directional derivatives being of magnitude near 1) are involved. One of the best-performing recurrent network architectures, the LSTM, propagates information through time via summation, a particularly straightforward kind of such linear activation. This is discussed further in Section 10.7.4. In addition to helping to propagate information and making optimization easier, piecewise linear units also have some nice properties that can make them easier to regularize. This is discussed further in Section 7.11. Sigmoidal non-linearities still perform well in some contexts and are required
when a hidden unit must compute a number guaranteed to be in a bounded interval (like in the (0,1) interval), but piecewise linear units are now by far the most popular kind of hidden units.
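As a concrete sketch of the units discussed in this section (shapes and names are illustrative), a rectified linear unit and a maxout unit can be written as:

```python
import numpy as np

def relu_unit(w, b, x):
    """Rectified linear unit: two linear pieces driven by a single weight vector w."""
    return np.maximum(0.0, w @ x + b)

def maxout_unit(W, b, x):
    """Maxout unit: the max over k linear pieces, each row of W being its own
    weight vector. W has shape (k, d), b has shape (k,)."""
    return np.max(W @ x + b)

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w = rng.standard_normal(4)
print(relu_unit(w, 0.0, x))
# With k = 2 pieces and weight vectors w and -w, a maxout unit implements
# the absolute value rectifier:
print(maxout_unit(np.stack([w, -w]), np.zeros(2), x))  # equals abs(w @ x)
```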
6.8 Historical Notes
Section 1.2 already gave an overview of the history of neural networks and deep learning. Here we focus on historical notes regarding back-propagation and the connectionist ideas that are still at the heart of today’s research in deep learning. The chain rule was invented in the 17th century (Leibniz, 1676; L’Hôpital, 1696) and gradient descent in the 19th century (Cauchy, 1847b). Efficient applications of the chain rule which exploit the dynamic programming structure described in this chapter are found already in the 1960’s and 1970’s, mostly for control applications (Kelley, 1960; Bryson and Denham, 1961; Dreyfus, 1962; Bryson and Ho, 1969; Dreyfus, 1973) but also for sensitivity analysis (Linnainmaa, 1976). Bringing these ideas to the optimization of weights of artificial neural networks with continuous-valued outputs was introduced by Werbos (1981) and rediscovered independently in different ways, as well as actually simulated successfully, by LeCun (1985); Parker (1985); Rumelhart et al. (1986a). In particular, Rumelhart et al. (1986a) and a corresponding chapter (Rumelhart et al., 1986b) in the PDP book (Rumelhart et al., 1986d) greatly contributed to popularizing the idea of back-propagation and initiated a very active period of research in multi-layer neural networks. However, the ideas put forward by the authors of that book, and in particular by Rumelhart and Hinton, go much beyond back-propagation. They include crucial ideas about the possible computational implementation of several central aspects of cognition and learning, which came under the name of “connectionism” because of the importance given to the connections between neurons as the locus of learning and memory. In particular, these ideas include the notion of distributed representation, introduced in Chapter 1 and developed a lot more in Part III of this book, with Chapter 16, which is at the heart of the generalization ability of neural networks. As discussed in the historical survey in Section 1.2, the boom of AI and machine learning research which followed on the connectionist ideas reached a peak in the early 1990’s, as far as neural networks are concerned, while other machine learning techniques became more popular in the late 1990’s and remained so for the first decade of this century. Neural networks research in the AI and machine learning community almost vanished then, only to be reborn ten years later (starting in 2006) with a novel focus on the depth of representation and the current wave of research on deep learning. In addition to back-propagation and distributed representations, the connectionists brought the idea of iterative inference (they used different words), viewing neural computation
in the brain as a way to look for a configuration of neurons that best satisfies all the relevant pieces of knowledge implicitly captured in the weights of the neural network. This view turns out to be central to the topics covered in Part III of this book regarding probabilistic models and inference.
Chapter 7
Regularization

TODO - generalize our definition of Regularization

A central problem in machine learning is how to make an algorithm that will perform well not just on the training data, but also on new inputs. The main strategy for achieving good generalization is known as regularization. Regularization is any component of the model, training process, or prediction procedure which is included to account for limitations of the training data, including its finiteness. There are many regularization strategies. Some put extra constraints on a machine learning model, such as adding restrictions on the parameter values. Some add extra terms to the cost function that one might consider a soft constraint on the parameter values. If chosen carefully, these extra constraints and penalties can lead to improved performance on the test set, either by encoding prior knowledge into the model, or by forcing the optimization process into a simpler model class that promotes generalization. Other forms of regularization, known as ensemble methods, combine multiple hypotheses that explain the training data. Sometimes regularization also helps to make an underdetermined problem determined. This chapter builds on the concepts of generalization, overfitting, underfitting, bias and variance introduced in Chapter 5. If you are not already familiar with these notions, please refer to that chapter before continuing with the more advanced material presented here. Regularizers work by trading increased bias for reduced variance. An effective regularizer is one that makes a profitable trade, that is, it reduces variance significantly while not overly increasing the bias. When we discussed generalization and overfitting in Chapter 5, we focused on three situations, where the model family being trained either (1) excluded the true data generating process, corresponding to underfitting and inducing bias, or (2) matched the true data generating process, the “just right” model space, or (3) included the generating
process but also many other possible generating processes, the regime where variance dominates the estimation error (e.g., as measured by the MSE; see Section 5.7). Note that, in practice, an overly complex model family does not necessarily include (or even come close to) the target function or the true data generating process. We almost never have access to the true data generating process, so we can never know if the model family being estimated includes the generating process or not. But since, in deep learning, we are often trying to work with data such as images, audio sequences and text, we can probably safely assume that our model family does not include the data generating process. We can assume that, to some extent, we are always trying to fit a square peg (the data generating process) into a round hole (our model family) and using the data to do that as best we can. What this means is that controlling the complexity of the model is not going to be a simple question of finding the model of the right size, i.e., the right number of parameters. Instead, we might find (and indeed in practical deep learning scenarios, we almost always do find) that the best fitting model (in the sense of minimizing generalization error) is one that possesses a large number of parameters that are not entirely free to span their domain. As we will see, there are a great many forms of regularization available to the deep learning practitioner. In fact, developing more effective regularizers has been one of the major research efforts in the field. Most machine learning tasks can be viewed in terms of learning to represent a function f̂(x) parametrized by a vector of parameters θ. The data consists of inputs x^(i) and (for some tasks) targets y^(i) for i ∈ {1, . . . , n}. In the case of classification, each y^(i) is an integer class label in {1, . . . , k}. For regression tasks, each y^(i) is a real number. In the case of a density estimation task, there are no targets. We may group these examples into a design matrix X and a vector of targets y. In deep learning, we are mainly interested in the case where f̂(x) has a large number of parameters and as a result possesses a high capacity to fit relatively complicated functions. This means that deep learning algorithms either require very large datasets so that the data can fully specify such complicated models, or they require careful regularization. In practice, most models and tasks exist on a spectrum between these two extremes.
7.1 Regularization from a Bayesian Perspective
The Bayesian perspective on statistical inference offers a useful framework in which to consider many common methods of regularization. As we discussed in
Sec. 5.9, Bayesian estimation theory takes a fundamentally different approach to model estimation than the frequentist view by considering that the model parameters themselves are uncertain and therefore should be considered random variables. There are a number of immediate consequences of assuming a Bayesian world view. The first is that if we are using probability distributions to assess uncertainty in the model parameters, then we should be able to express our uncertainty about the model parameters before we see any data. This is the role of the prior distribution. The second consequence is that, when using the model to make predictions about outcomes, one should ideally integrate over the uncertainty in the parameter values. There is a deep connection between the Bayesian perspective on estimation and the process of regularization. This is not surprising, since at the root both are concerned with making predictions relative to the true data generating distribution while taking into account the finiteness of the data. What this means is that both are open to combining information sources; that is, both are interested in combining the information that can be extracted from the training data with other, or “prior”, sources of information. As we will see, many forms of regularization can be given a Bayesian interpretation. If we consider a dataset {x^(1), . . . , x^(m)}, we recover the posterior distribution on the model parameter θ by combining the data likelihood p(x^(1), . . . , x^(m) | θ) with the prior:

    log p(θ | x^(1), . . . , x^(m)) ∝ log p(θ) + Σ_i log p(x^(i) | θ)    (7.1)
In the context of maximum likelihood learning, the introduction of the prior distribution plays the same role as a regularizer, in that it can be seen as a term added to the objective function in hopes of achieving better generalization, despite its detrimental effect on the likelihood of the training data (the optimum of which would be achieved by considering only the last term above). In the following section, we will detail how the addition of a prior is equivalent to certain regularization strategies. However, we must be a bit careful in establishing the relationship between the prior and a regularizer. Regularizers are more general than priors. Priors are distributions and as such are subject to constraints, such as that they must always be positive and must sum to one over their domain. Regularizers have no such explicit constraints. Another problem in interpreting all regularizers as priors is that the equivalence implies the overly restrictive constraint that all unregularized objective functions be interpretable as log-likelihood functions. Nevertheless, it remains true that many of the most popular forms of regularization can be equated to a Bayesian prior.
7.2 Classical Regularization: Parameter Norm Penalty
Regularization has been used for decades prior to the advent of deep learning. Statistical and machine learning models traditionally represented simpler functions. Because the functions themselves had less capacity, the regularization did not need to be as sophisticated. We use the term classical regularization to refer to the techniques used in the general machine learning and statistics literature. Most classical regularization approaches are based on limiting the capacity of models, such as neural networks, linear regression, or logistic regression, by adding a parameter norm penalty Ω(θ) to the loss function J. We denote the regularized loss function by J̃:

    J̃(θ; X, y) = J(θ; X, y) + αΩ(θ)    (7.2)
where α is a hyperparameter that weighs the contribution of the norm penalty term Ω relative to the standard loss function J. The hyperparameter α should be a non-negative real number, with α = 0 corresponding to no regularization, and larger values of α corresponding to more regularization. When our training algorithm minimizes the regularized loss function J̃, it will decrease both the original loss J on the training data and some measure of the size of the parameters θ (or some subset of the parameters). Different choices for the parameter norm Ω can result in different solutions being preferred. In this section, we discuss the effects of the various norms when used as penalties on the model parameters. Before delving into the regularization behavior of different norms, we note that for neural networks we typically choose to use a parameter norm penalty Ω that penalizes only the interaction weights, i.e., we leave the offsets unregularized. The offsets typically require less data to fit accurately than the weights. Each weight specifies how two variables interact, and requires observing both variables in a variety of conditions to fit well. Each offset controls only a single variable. This means that we do not induce too much variance by leaving the offsets unregularized. Also, regularizing the offsets can introduce a significant amount of underfitting.
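As a minimal sketch of Eq. 7.2 (the squared-error loss and function names are illustrative choices), adding an L2 parameter norm penalty to a loss and to its gradient looks like:

```python
import numpy as np

def regularized_loss_and_grad(w, X, y, alpha):
    """Squared-error loss with an L2 parameter norm penalty on the weights w
    (any offset parameters would be left out of the penalty, as discussed above)."""
    residual = X @ w - y
    loss = 0.5 * np.sum(residual ** 2) + 0.5 * alpha * np.sum(w ** 2)
    grad = X.T @ residual + alpha * w
    return loss, grad

rng = np.random.default_rng(0)
X, y = rng.standard_normal((20, 3)), rng.standard_normal(20)
w = np.zeros(3)
loss, grad = regularized_loss_and_grad(w, X, y, alpha=0.1)
w = w - 0.01 * grad   # one gradient step on the regularized objective
```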
7.2.1 L2 Parameter Regularization
One of the simplest and most common kinds of classical regularization is the L2 parameter norm penalty,[1] Ω(θ) = (1/2)||θ||_2^2. This form of regularization is also
[1] More generally, we could consider regularizing the parameters to a parameter value θ^(o) that is perhaps not zero. In that case the L2 penalty term would be Ω(θ) = (1/2)||θ − θ^(o)||_2^2 = (1/2) Σ_i (θ_i − θ_i^(o))^2. Since it is far more common to consider regularizing the model parameters to zero, we will focus on this special case in our exposition.
known as ridge regression. It is readily applicable to neural networks, where it is known as weight decay. In the context of neural networks, the penalty is equal to the sum of the squared L2 norms of all of the weight vectors. Typically, we use a different coefficient α for the weights at each layer of the network. This coefficient should be tuned using a validation set. We can gain some insight into the behavior of weight decay regularization by considering the gradient of the regularized loss function. To simplify the presentation, we assume no offset term, so θ is just w. Such a model has the following gradient of the loss:

    ∇_w J̃(w; X, y) = αw + ∇_w J(w; X, y)    (7.3)
We will further simplify the analysis by considering a quadratic approximation to the loss function in the neighborhood of the empirically optimal value of the weights w*. (If the loss is truly quadratic, as in the case of fitting a linear regression model with mean squared error, then the approximation is perfect.)

    Ĵ(w) = J(w*) + (1/2)(w − w*)^T H (w − w*)    (7.4)
where H is the Hessian matrix of J with respect to w evaluated at w*. There is no first-order term in this quadratic approximation, because w* is defined to be a minimum, where the gradient vanishes. Likewise, because w* is a minimum, we can conclude that H is positive semi-definite. The gradient of this approximation is given by

    ∇_w Ĵ(w) = H(w − w*).    (7.5)
If we replace the exact gradient in equation 7.3 with the approximate gradient in equation 7.5, we can write an equation for the location of the minimum of the regularized loss function:

    αw + H(w − w*) = 0    (7.6)

    (H + αI)w = Hw*    (7.7)

    w̃ = (H + αI)^{-1} Hw*    (7.8)
The presence of the regularization term moves the optimum from w* to w̃. As α approaches 0, w̃ approaches w*. But what happens as α grows? Because H is real and symmetric, we can decompose it into a diagonal matrix Λ and an orthonormal basis of eigenvectors, Q, such that H = QΛQ^T. Applying the
Figure 7.1: An illustration of the effect of L2 (or weight decay) regularization on the value of the optimal w. The solid ellipses represent contours of equal value of the unregularized objective. The dotted circles represent contours of equal value of the L2 regularizer. At the point w̃, these competing objectives reach an equilibrium.
decomposition to equation 7.8, we obtain:

    w̃ = (QΛQ^T + αI)^{-1} QΛQ^T w*
       = [Q(Λ + αI)Q^T]^{-1} QΛQ^T w*
       = Q(Λ + αI)^{-1} ΛQ^T w*,

    Q^T w̃ = (Λ + αI)^{-1} ΛQ^T w*.    (7.9)
If we interpret Q^T w̃ as rotating our parameters w into the basis defined by the eigenvectors Q of H, then we see that the effect of weight decay is to rescale the coefficients along the eigenvectors. Specifically, the ith component is rescaled by a factor of λ_i/(λ_i + α). (You may wish to review how this kind of scaling works, first explained in Fig. 2.3.) Along the directions where the eigenvalues of H are relatively large, for example where λ_i ≫ α, the effect of regularization is relatively small. However, components with λ_i ≪ α will be shrunk to have nearly zero magnitude. This effect is illustrated in Fig. 7.1. Only directions along which the parameters contribute significantly to reducing the loss are preserved relatively intact. In directions that do not contribute to reducing the loss, a small eigenvalue of the Hessian tells us that movement in this direction will not significantly increase the gradient. Components of the weight vector corresponding to such unimportant directions are decayed away through the use of the regularization throughout training. This effect of suppressing contributions to the parameter vector along these principal directions of the Hessian H is captured in the concept of the effective number of parameters, defined to be
    γ = Σ_i  λ_i / (λ_i + α).    (7.10)
As α is increased, the effective number of parameters decreases. Another way to gain some intuition for the effect of L2 regularization is to consider its effect on linear regression. The unregularized objective function for linear regression is the sum of squared errors: (Xw − y)^T (Xw − y). When we add L2 regularization, the objective function changes to

    (Xw − y)^T (Xw − y) + (1/2) α w^T w.
This changes the normal equations for the solution from w = (X^T X)^{-1} X^T y to w = (X^T X + αI)^{-1} X^T y. We can see that L2 regularization causes the learning algorithm to “perceive” the input X as having higher variance, which makes it shrink the weights on features whose covariance with the output target is low compared to this added variance. TODO–make sure the chapter includes maybe a table showing relationships between early stopping, priors, constraints, penalties, and adding noise? e.g. look up L1 penalty and it tells you what prior it corresponds to. Scratchwork thinking about how to do it: L2 penalty / L2 constraint / add noise / early stopping / Gaussian prior; L1 penalty / L1 constraint / Laplace prior; Max-norm penalty.
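To make the normal equations above concrete, here is a small NumPy sketch (with synthetic data) comparing the ordinary least squares solution to the ridge solution:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
true_w = np.array([1.0, -2.0, 0.0, 0.5, 3.0])
y = X @ true_w + 0.1 * rng.standard_normal(100)

alpha = 10.0
w_ols = np.linalg.solve(X.T @ X, X.T @ y)                        # (X^T X)^{-1} X^T y
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)  # (X^T X + alpha I)^{-1} X^T y

# The ridge weights are shrunk toward zero relative to the OLS weights.
print(np.linalg.norm(w_ols), np.linalg.norm(w_ridge))
```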
7.2.2 L1 Regularization
While L2 weight decay is the most common form of weight decay, there are other ways to penalize the size of the model parameters. Another option is to use L1 regularization. Formally, L1 regularization on the model parameter w is defined as:

    Ω(θ) = ||w||_1 = Σ_i |w_i|.    (7.11)
That is, the penalty is the sum of the absolute values of the individual parameters.[2] We will now consider the effect of L1 regularization on the simple linear model, with no offset term, that we considered in our analysis of L2 regularization. In particular, we are interested in delineating the differences between the L1 and L2 forms of regularization. Thus, if we consider the gradient (actually the sub-gradient) of the regularized objective function J̃(w; X, y), we have:

    ∇_w J̃(w; X, y) = β sign(w) + ∇_w J(w; X, y)    (7.12)
where sign(w) is simply the sign of w applied element-wise. By inspecting Eqn. 7.12, we can see immediately that the effect of L1 regularization is quite different from that of L2 regularization. Specifically, we can see that the regularization contribution to the gradient no longer scales linearly with w; instead it is a constant factor with a sign equal to sign(w). One consequence of this form of the gradient is that we will not necessarily see clean solutions to quadratic forms of ∇_w J(w; X, y) as we did for L2 regularization. Instead, the solutions are going to be much more aligned to the basis space in which the problem is embedded. For the sake of comparison with L2 regularization, we will again consider a simplified setting of a quadratic approximation to the loss function in the neighborhood of the empirical optimum w*. (Once again, if the loss is truly quadratic, as in the case of fitting a linear regression model with mean squared error, then the approximation is perfect.) The gradient of this approximation is given by

    ∇_w Ĵ(w) = H(w − w*).    (7.13)
where, again, H is the Hessian matrix of J with respect to w evaluated at w*. We will also make the further simplifying assumption that the Hessian is diagonal, H = diag([γ_1, . . . , γ_N]), where each γ_i > 0. With this rather restrictive assumption, the minimum of the L1-regularized loss function decomposes into a set of independent one-dimensional problems of the form

    J̃(w; X, y) = (1/2) γ_i (w_i − w_i*)^2 + β|w_i|,

which admits an optimal solution (for each dimension i) in the following form:

    w_i = sign(w_i*) max(|w_i*| − β/γ_i, 0).
[2] As with L2 regularization, we could consider regularizing the parameters to a value that is not zero, but instead to some parameter value w^(o). In that case the L1 regularization would introduce the term β||w − w^(o)||_1 = β Σ_i |w_i − w_i^(o)|.
Figure 7.2: An illustration of the effect of L1 regularization (RIGHT) on the value of the optimal w, in comparison to the effect of L2 regularization (LEFT).
Let us consider the situation where w_i* > 0 for all i; there are two possible outcomes. Case 1: w_i* ≤ β/γ_i. Here the optimal value of w_i under the regularized objective is simply w_i = 0. This occurs because the contribution of J(w; X, y) to the regularized objective J̃(w; X, y) is overwhelmed, in direction i, by the L1 regularization, which pushes the value of w_i to zero. Case 2: w_i* > β/γ_i. Here the regularization does not move the optimal value of w_i to zero, but instead shifts it in that direction by a distance equal to β/γ_i. This is illustrated in Fig. 7.2. In comparison to L2 regularization, L1 regularization results in a solution that is more sparse. Sparsity in this context means that, through L1 regularization, some free parameters of the model have an optimal value (under the regularized objective) of zero. As we discussed, for each element i of the parameter vector, this happens when w_i* ≤ β/γ_i. Compare this to the situation for L2 regularization, where (under the same assumption of a diagonal Hessian H) we get w_i^{L2} = (γ_i/(γ_i + α)) w_i*, which is nonzero as long as w_i* is nonzero. In Fig. 7.2, we see that even when the optimal value of w is nonzero, L1 regularization punishes small parameter values just as harshly as larger values, leading to optimal solutions with more parameters having value zero and the remaining parameters being larger. The sparsity property induced by L1 regularization has been used extensively as a feature selection mechanism. In particular, the well-known LASSO (Tibshirani, 1995) (least absolute shrinkage and selection operator) model integrates an L1 penalty with a linear model and a least squares cost function. Finally, L1 is known as the only norm that is both sparsifying and convex for non-degenerate problems.[3]
[3] For degenerate problems, where more than one solution exists, L2 regularization can find the “sparse” solution in the sense that redundant parameters shrink to zero.
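The per-dimension solution above is a soft-thresholding operation; here is a minimal sketch of it under the same diagonal-Hessian assumption used in the text:

```python
import numpy as np

def l1_solution(w_star, gamma, beta):
    """Per-dimension minimizer of 0.5*gamma_i*(w_i - w_star_i)**2 + beta*|w_i|
    (soft thresholding), assuming a diagonal Hessian with entries gamma_i > 0."""
    return np.sign(w_star) * np.maximum(np.abs(w_star) - beta / gamma, 0.0)

w_star = np.array([0.05, -0.3, 2.0])
gamma = np.ones(3)
print(l1_solution(w_star, gamma, beta=0.5))
# -> [ 0. -0.  1.5]: small components are driven exactly to zero (sparsity)
```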
7.2.3 Bayesian Interpretation of the Parameter Norm Penalty
Parameter norm penalties are often amenable to being interpreted as a Bayesian prior. Recall that parameter norm penalties are effected by adding a term αΩ(w) to the unregularized loss function J:

    J̃(w; X, y) = J(w; X, y) + αΩ(w)    (7.14)
where α is a hyperparameter that weighs the relative contribution of the norm penalty term. We can view the minimization of the regularized loss function above as equivalent to finding the maximum a posteriori (MAP) estimate of the parameters: log p(w | X , y) ∝ log p(y | X , w) + log p(w), where the inregularized J (w; X, y) is taken as the log likelihood and the regularization term αΩ(w) plays the role of the parameter prior distribution. Difference choices of regularizers correspond to different priors. In the case of L 2 regularization, minimizing with αΩ(w) = α2 kwk22 , is functionally equivalent to maximizing the log of the posterior distribution (or minimizing the negative log posterior) where the prior is given by a Gaussian distribution. 1 1 d log p(w; µ, Σ) = − (w − µ)>Σ−1 (w − µ) − log |Σ| − log(2π) 2 2 2 where d is the dimension of w. Ignoring terms that are not function of w (and therefore do not effect the MAP value), we can see that the by choosing µ = 0 and Σ−1 = αI, we recover the functional form of L 2 regularization: log p(w; µ, Σ) ∝ α 2 2 2 kwk2 . Thus L regularization can be considered assuming independent Gaussian prior distibutions over all the model parameters, each with precision (i.e. the inverse of variance) α. α kwi k, is equivalent to For L1 regularization, minimizing with αΩ(w) = P i maximizing the log of the posterior distribution with independent Laplace distributions (also known as a double-sided exponential distribution) as priors over the individual elements of w. log p(w; µ, η) =
X
Laplace(µi , η i) =
i
X |wi − µi| − − log (2ηi ) ηi i
One again we can ignore the second term here because it does not depend on the elements of w, so L1 regularization is equivalent to MAP estimate with a prior P given by i Laplace(0, λ−1 ).
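As a quick numerical illustration of this correspondence (an added sketch, not part of the original text), the following NumPy snippet checks that the negative log density of an isotropic Gaussian prior with precision α differs from (α/2)‖w‖₂² only by a constant, and that the negative log density of a factorized Laplace(0, 1/α) prior differs from α Σ_i |w_i| only by a constant, so the corresponding MAP problems have the same minimizers as the penalized objectives.

```python
import numpy as np

alpha = 0.7
w1 = np.random.randn(5)
w2 = np.random.randn(5)

def neg_log_gaussian(w, alpha):
    # Isotropic Gaussian prior with precision alpha: N(0, (1/alpha) I)
    d = w.size
    return 0.5 * alpha * np.dot(w, w) - 0.5 * d * np.log(alpha) + 0.5 * d * np.log(2 * np.pi)

def neg_log_laplace(w, alpha):
    # Factorized Laplace prior with scale 1/alpha: density (alpha/2) * exp(-alpha*|w_i|)
    return alpha * np.abs(w).sum() + w.size * np.log(2.0 / alpha)

# The difference between each negative log prior and the corresponding penalty
# is the same constant for any w, so both objectives share their minimizer.
print(neg_log_gaussian(w1, alpha) - 0.5 * alpha * np.dot(w1, w1),
      neg_log_gaussian(w2, alpha) - 0.5 * alpha * np.dot(w2, w2))
print(neg_log_laplace(w1, alpha) - alpha * np.abs(w1).sum(),
      neg_log_laplace(w2, alpha) - alpha * np.abs(w2).sum())
```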
7.3
Classical Regularization as Constrained Optimization
Classical regularization adds a penalty term to the training objective: J̃(θ; X, y) = J(θ; X, y) + αΩ(θ). Recall from Sec. 4.4 that we can minimize a function subject to constraints by constructing a generalized Lagrange function, consisting of the original objective function plus a set of penalties. Each penalty is a product between a coefficient, called a Karush-Kuhn-Tucker (KKT) multiplier (KKT multipliers generalize Lagrange multipliers to allow for inequality constraints), and a function representing whether the constraint is satisfied. If we wanted to constrain Ω(θ) to be less than some constant k, we could construct the generalized Lagrange function

L(θ, α; X, y) = J(θ; X, y) + α(Ω(θ) − k).

The solution to the constrained problem is given by

θ* = argmin_θ max_{α ≥ 0} L(θ, α).
Solving this problem requires modifying both θ and α. Specifically, α must increase whenever ‖θ‖_p > k and decrease whenever ‖θ‖_p < k. However, after we have solved the problem, we can fix α* and view the problem as just a function of θ:

θ* = argmin_θ L(θ, α*) = argmin_θ J(θ; X, y) + α*Ω(θ).
This is exactly the same as the regularized training problem of minimizing J̃. Note that the value of α* does not directly tell us the value of k. In principle, one can solve for k, but the relationship between k and α* depends on the form of J. We can thus think of classical regularization as imposing a constraint on the weights, but with an unknown size of the constraint region. A larger α corresponds to a smaller constraint region, and a smaller α corresponds to a larger constraint region. Sometimes we may wish to use explicit constraints rather than penalties. As described in Sec. 4.4, we can modify algorithms such as stochastic gradient descent to take a step downhill on J(θ) and then project θ back to the nearest point that satisfies Ω(θ) < k. This can be useful if we have an idea of what value of k is appropriate and do not want to spend time searching for the value of α that corresponds to this k. Another reason to use explicit constraints and reprojection rather than enforcing constraints with penalties is that penalties can cause non-convex optimization
procedures to get stuck in local minima corresponding to small θ. When training neural networks, this usually manifests as networks that train with several "dead units". These are units that do not contribute much to the behavior of the function learned by the network because the weights going into or out of them are all very small. When training with a penalty on the norm of the weights, these configurations can be locally optimal, even if it is possible to significantly reduce J by making the weights larger. (This concern about local minima obviously does not apply when J̃ is convex.) Finally, explicit constraints with reprojection can be useful because they impose some stability on the optimization procedure. When using high learning rates, it is possible to encounter a positive feedback loop in which large weights induce large gradients which then induce a large update to the weights. If these updates consistently increase the size of the weights, then θ rapidly moves away from the origin until numerical overflow occurs. Explicit constraints with reprojection allow us to terminate this feedback loop after the weights have reached a certain magnitude. Hinton et al. (2012c) recommend using constraints combined with a high learning rate to allow rapid exploration of parameter space while maintaining some stability.
TODO: how the L2 penalty is equivalent to an L2 constraint (with unknown value), and the L1 penalty to an L1 constraint; maybe move the earlier L2 regularization figure to here, now that the sublevel sets will make more sense; show the shapes induced by the different norms; a separate L2 penalty on each hidden unit vector is different from an L2 penalty on all of θ, and is equivalent to a penalty on the max across columns of the column norms.
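As an illustration of the reprojection idea described above, here is a minimal sketch (added for illustration; the norm bound k, the learning rate, and the toy objective are arbitrary choices, not values from the text) of a gradient step followed by projection back onto the L2 ball ‖θ‖₂ ≤ k.

```python
import numpy as np

def projected_gradient_step(theta, grad, lr, k):
    """One SGD step followed by projection onto the L2 ball of radius k."""
    theta = theta - lr * grad          # unconstrained descent step
    norm = np.linalg.norm(theta)
    if norm > k:                       # reproject only if the constraint is violated
        theta = theta * (k / norm)
    return theta

theta = np.zeros(3)
for _ in range(100):
    grad = 2.0 * (theta - np.array([5.0, 0.0, 0.0]))  # gradient of a toy quadratic
    theta = projected_gradient_step(theta, grad, lr=0.1, k=1.0)
print(theta)  # stays on or inside the ball of radius 1
```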
7.4
Regularization and Under-Constrained Problems
In some cases, regularization is necessary for machine learning problems to be properly defined. Many linear models in machine learning, including linear regression and PCA, depend on inverting the matrix X^⊤X. This is not possible whenever X^⊤X is singular. This matrix can be singular whenever the data truly has no variance in some direction, or when there are fewer examples (rows of X) than input features (columns of X). In this case, many forms of regularization correspond to inverting X^⊤X + αI instead. This regularized matrix is guaranteed to be invertible. These linear problems have closed form solutions when the relevant matrix is invertible. It is also possible for a problem with no closed form solution to be underdetermined. For example, consider logistic regression applied to a problem where the classes are linearly separable. If a weight vector w is able to achieve perfect classification, then 2w will also achieve perfect classification and higher
likelihood. An iterative optimization procedure like stochastic gradient descent will continually increase the magnitude of w and, in theory, will never halt. In practice, a numerical implementation of gradient descent will eventually reach sufficiently large weights to cause numerical overflow, at which point its behavior will depend on how the programmer has decided to handle values that are not real numbers. Most forms of regularization are able to guarantee the convergence of iterative methods applied to underdetermined problems. For example, weight decay will cause gradient descent to quit increasing the magnitude of the weights when the slope of the likelihood is equal to the weight decay coefficient. Likewise, early stopping based on the validation set classification rate will cause the training algorithm to terminate soon after the validation set classification accuracy has stopped increasing. Even if the problem is linearly separable and there is no overfitting, the validation set classification accuracy will eventually saturate to 100%, resulting in termination of the early stopping procedure. The idea of using regularization to solve underdetermined problems extends beyond machine learning. The same idea is useful for several basic linear algebra problems. As we saw in Chapter 2.9, we can solve underdetermined linear equations using the Moore-Penrose pseudoinverse. One definition of the pseudoinverse X⁺ of a matrix X is to perform linear regression with an infinitesimal amount of L2 regularization:

X⁺ = lim_{α→0⁺} (X^⊤X + αI)⁻¹ X^⊤.
When a true inverse for X exists, then w = X⁺y returns the weights that exactly solve the regression problem. When X is not invertible because no exact solution exists, this returns the w corresponding to the least possible mean squared error. When X is not invertible because many solutions exactly solve the regression problem, this returns the w with the minimum possible L2 norm. Recall that the Moore-Penrose pseudoinverse can be computed easily using the singular value decomposition. Because the SVD is robust to underdetermined problems resulting from too few observations or too little underlying variance, it is useful for implementing stable variants of many closed-form linear machine learning algorithms. The stability of these algorithms can be viewed as a result of applying the minimum amount of regularization necessary to make the problem become determined.
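To illustrate this relationship numerically, the following is a small sketch (an added example, not from the original text; the data is synthetic) comparing the Moore-Penrose pseudoinverse solution with ridge regression using a very small α, on an underdetermined problem with more features than examples.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(3, 5)          # 3 examples, 5 features: underdetermined
y = rng.randn(3)

# Minimum-norm solution via the Moore-Penrose pseudoinverse (computed from the SVD)
w_pinv = np.linalg.pinv(X) @ y

# Ridge regression with a tiny amount of L2 regularization
alpha = 1e-6
w_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(5), X.T @ y)

print(np.allclose(w_pinv, w_ridge, atol=1e-4))  # True: ridge -> pseudoinverse as alpha -> 0
```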
7.5
Dataset Augmentation
such as noise to the hidden units, as well as the inputs. This can be viewed as augmenting the dataset as seen by the deeper layers. When reading machine learning research papers, it is important to take the effect of dataset augmentation into account. Often, hand-designed dataset augmentation schemes can dramatically reduce the generalization error of a machine learning technique. It is important to look for controlled experiments. When comparing machine learning algorithm A and machine learning algorithm B, it is necessary to make sure that both algorithms were evaluated using the same hand-designed dataset augmentation schemes. If algorithm A performs poorly with no dataset augmentation and algorithm B performs well when combined with numerous synthetic transformations of the input, then it is likely the synthetic transformations and not algorithm B itself that cause the improved performance. Sometimes the line is blurry, such as when a new machine learning algorithm involves injecting noise into the inputs. In these cases, it is best to consider how generally applicable the new algorithm is, and to make sure that pre-existing algorithms are re-run in as similar conditions as possible.
TODO: tangent propagation. NOTE: there is already some coverage of TangentProp in manifold.tex; it may not be necessary to exhaustively describe all known forms of regularization in this chapter.
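As a concrete illustration of input-side dataset augmentation (an added sketch, not part of the original text; the specific transformations and noise level are arbitrary choices), the following generates randomly transformed copies of a batch of images by horizontal flipping and additive Gaussian noise.

```python
import numpy as np

def augment_batch(images, rng, noise_std=0.05, flip_prob=0.5):
    """Return a randomly perturbed copy of a batch of images (batch, height, width)."""
    out = images.copy()
    for i in range(out.shape[0]):
        if rng.rand() < flip_prob:
            out[i] = out[i, :, ::-1]              # horizontal flip
    out += noise_std * rng.randn(*out.shape)      # small additive input noise
    return out

rng = np.random.RandomState(0)
batch = rng.rand(8, 28, 28)
augmented = augment_batch(batch, rng)
print(batch.shape, augmented.shape)   # each epoch can draw fresh augmented copies
```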
7.6
Classical Regularization as Noise Robustness
In the machine learning literature, noise has been used as part of a regularization strategy in two main ways. The first and most popular way is by adding noise to the input. While this can be interpreted simply as a form of dataset augmentation (as described above in Sec. 7.5), we can also interpret it as being equivalent to more traditional forms of regularization. The second way that noise has been used in the service of regularizing models is by adding it to the weights. This technique has been used primarily in the context of recurrent neural networks (Jim et al., 1996; Graves, 2011a). It can be interpreted as a stochastic implementation of Bayesian inference over the weights. The Bayesian treatment of learning considers the model weights to be uncertain and representable via a probability distribution that reflects this uncertainty; adding noise to the weights is a practical, stochastic way to reflect this uncertainty (Graves, 2011a). In this section, we review these two strategies and provide some insight into how noise can act to regularize the model.
7.6.1
Injecting Noise at the Input
Some classical regularization techniques can be derived in terms of training on noisy inputs (the analysis in this section is mainly based on that in Bishop, 1995a,b). Let us consider a regression setting, where we are interested in learning a model ŷ(x) that maps a set of features x to a scalar, given a dataset of m input/output pairs {(x⁽¹⁾, y⁽¹⁾), . . . , (x⁽ᵐ⁾, y⁽ᵐ⁾)}. The cost function we will use is the least-squares error between the model prediction ŷ(x) and the true value y:

J = E_{p(x,y)} [ (ŷ(x) − y)² ].    (7.15)

Now consider that with each presentation of an input to the model we also include a random perturbation ε ∼ N(0, νI), so that the error function becomes

J̃_x = E_{p(x,y,ε)} [ (ŷ(x + ε) − y)² ]
    = E_{p(x,y,ε)} [ ŷ²(x + ε) − 2y ŷ(x + ε) + y² ]
    = E_{p(x,y,ε)} [ ŷ²(x + ε) ] − 2 E_{p(x,y,ε)} [ y ŷ(x + ε) ] + E_{p(x,y,ε)} [ y² ].    (7.16)

Assuming small noise, we can consider the Taylor series expansion of ŷ(x + ε) around ŷ(x):

ŷ(x + ε) = ŷ(x) + ε^⊤ ∇_x ŷ(x) + (1/2) ε^⊤ ∇²_x ŷ(x) ε + O(ε³).    (7.17)

Substituting this approximation for ŷ(x + ε) into the objective function (Eq. 7.16), and using the facts that E_{p(ε)}[ε] = 0 and E_{p(ε)}[εε^⊤] = νI to simplify (the derivation also uses two properties of the trace operator discussed in Sec. 2.10: a scalar is equal to its trace, and Tr(AB) = Tr(BA) for a square matrix AB), we get:

J̃_x ≈ E_{p(x,y)} [ (ŷ(x) − y)² ] + ν E_{p(x,y)} [ (ŷ(x) − y) ∇²_x ŷ(x) ] + ν E_{p(x,y)} [ ‖∇_x ŷ(x)‖² ]
    = J + ν E_{p(x,y)} [ (ŷ(x) − y) ∇²_x ŷ(x) ] + ν E_{p(x,y)} [ ‖∇_x ŷ(x)‖² ].    (7.18)
If we consider minimizing this objective function by taking the functional gradient with respect to ŷ(x) and setting the result to zero, we can see that ŷ(x) = E_{p(y|x)}[y] + O(ν).
This implies that the expectation in the second-to-last term of Eq. 7.18, E_{p(x,y)} [ (ŷ(x) − y) ∇²_x ŷ(x) ], reduces to O(ν), because the expectation of the difference (ŷ(x) − y) is O(ν). This leaves us with an objective function of the form

J̃_x = E_{p(x,y)} [ (ŷ(x) − y)² ] + ν E_{p(x,y)} [ ‖∇_x ŷ(x)‖² ] + O(ν²).

For small ν, the minimization of J with added noise on the input (with covariance νI) is equivalent to minimization of J with an additional regularization term given by ν E_{p(x,y)} [ ‖∇_x ŷ(x)‖² ]. Considering the behavior of this regularization term, we note that it has the effect of penalizing large gradients of the function ŷ(x). That is, it has the effect of reducing the sensitivity of the output of the network with respect to small variations in its input x. We can interpret this as attempting to build some local robustness into the model and thereby promote generalization. We note also that for linear networks, this regularization term reduces to simple weight decay (as discussed in Sec. 7.2.1).
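For the linear case mentioned above, the equivalence is easy to check numerically. The following sketch (an added example; the data and noise level ν are arbitrary choices) fits a linear model on many noisy copies of the inputs and compares the result to the ridge-regression solution with coefficient ν; the two coincide as the number of noisy copies grows.

```python
import numpy as np

rng = np.random.RandomState(0)
m, d, nu = 50, 3, 0.1
X = rng.randn(m, d)
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.randn(m)

# Ordinary least squares on K noisy copies of the inputs
K = 2000
X_noisy = np.tile(X, (K, 1)) + np.sqrt(nu) * rng.randn(K * m, d)
y_rep = np.tile(y, K)
w_noise = np.linalg.lstsq(X_noisy, y_rep, rcond=None)[0]

# Ridge regression with coefficient nu (mean squared error + nu * ||w||^2)
w_ridge = np.linalg.solve(X.T @ X / m + nu * np.eye(d), X.T @ y / m)

print(w_noise, w_ridge)   # approximately equal for large K
```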
7.6.2
Injecting Noise at the Weights
Rather than injecting noise as part of the input, one could also consider adding noise directly to the model parameters. As we shall see, this can also be interpreted as equivalent (under some assumptions) to a more traditional form of regularization. Adding noise to the weights has been shown to be an effective regularization strategy in the context of recurrent neural networks (Jim et al., 1996; Graves, 2011b); recurrent neural networks will be discussed in detail in Chapter 10. In the following, we will present an analysis of the effect of weight noise on a standard feedforward neural network (as introduced in Chapter 6). As we did in the last section, we again consider the regression setting, where we wish to train a function ŷ(x) that maps a set of features x to a scalar using the least-squares cost function between the model predictions ŷ(x) and the true values y:

J = E_{p(x,y)} [ (ŷ(x) − y)² ].    (7.19)

We again assume we are given a dataset of m input/output pairs {(x⁽¹⁾, y⁽¹⁾), . . . , (x⁽ᵐ⁾, y⁽ᵐ⁾)}. We now assume that with each input presentation we also include a random perturbation ε_W ∼ N(0, ηI) of the network weights. Let us imagine that we have
a standard L-layer MLP; we denote the perturbed model as ŷ_{ε_W}(x). Despite the injection of noise, we are still interested in minimizing the squared error of the output of the network. The objective function thus becomes:

J̃_W = E_{p(x,y,ε_W)} [ (ŷ_{ε_W}(x) − y)² ]
    = E_{p(x,y,ε_W)} [ ŷ²_{ε_W}(x) − 2y ŷ_{ε_W}(x) + y² ].    (7.20)

Assuming small noise, we can consider the Taylor series expansion of ŷ_{ε_W}(x) around the unperturbed function ŷ(x):

ŷ_{ε_W}(x) = ŷ(x) + ε_W^⊤ ∇_W ŷ(x) + (1/2) ε_W^⊤ ∇²_W ŷ(x) ε_W + O(ε_W³).    (7.21)

From here, we follow the same basic strategy that was laid out in the previous section in analyzing the effect of adding noise to the input. That is, we substitute the Taylor series expansion of ŷ_{ε_W}(x) into the objective function in Eq. 7.20, use the fact that E_{p(ε_W)}[ε_W] = 0 to drop the terms that are linear in ε_W, and incorporate the assumption that E_{p(ε_W)}[ε_W ε_W^⊤] = ηI. This yields

J̃_W ≈ J + η E_{p(x,y)} [ (ŷ(x) − y) ∇²_W ŷ(x) ] + η E_{p(x,y)} [ ‖∇_W ŷ(x)‖² ].    (7.24)

Again, if we consider minimizing this objective function, we can see that the optimal value of ŷ(x) is ŷ(x) = E_{p(y|x)}[y] + O(η), implying that the expectation in the middle term of Eq. 7.24, E_{p(x,y)} [ (ŷ(x) − y) ∇²_W ŷ(x) ], reduces to O(η) because the expectation of the difference (ŷ(x) − y) is O(η). This leaves us with an objective function of the form

J̃_W = E_{p(x,y)} [ (ŷ(x) − y)² ] + η E_{p(x,y)} [ ‖∇_W ŷ(x)‖² ] + O(η²).
For small η, the minimization of J with added weight noise (with covariance ηI) is equivalent to minimization of J with an additional regularization term η E_{p(x,y)} [ ‖∇_W ŷ(x)‖² ]. This form of regularization encourages the parameters to go to regions of parameter space that have relatively small gradients. In other words, it pushes the model into regions where the model is relatively insensitive to small variations in the weights. Regularization strategies with this kind of behavior have been considered before (TODO: restore this broken citation when it is fixed). In the simplified case of linear regression (where, for instance, ŷ(x) = w^⊤x + b), this regularization term collapses into η E_{p(x)} [ ‖x‖² ], which is not a function of the parameters and therefore does not contribute to the gradient of J̃_W with respect to the model parameters.
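The following is a minimal sketch (an added example; the model size, noise level η, and learning rate are arbitrary) of how weight-noise injection is typically implemented in practice: at each training step, a fresh Gaussian perturbation of the weights is used when computing the forward pass and the gradient, while the unperturbed weights receive the update.

```python
import numpy as np

rng = np.random.RandomState(0)
eta = 1e-3                          # variance of the weight noise
W = 0.1 * rng.randn(4, 1)           # weights of a tiny linear "network"
X = rng.randn(64, 4)
y = X @ np.array([[1.0], [-1.0], [0.5], [0.0]])

lr = 0.05
for step in range(200):
    eps_W = np.sqrt(eta) * rng.randn(*W.shape)   # fresh perturbation each step
    W_noisy = W + eps_W
    pred = X @ W_noisy
    grad = 2.0 * X.T @ (pred - y) / X.shape[0]   # gradient evaluated at the perturbed weights
    W -= lr * grad                               # update the unperturbed weights
print(W.ravel())
```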
7.7
Early Stopping as a Form of Regularization
When training large models with high capacity, we often observe that training error decreases steadily over time, but validation set error begins to rise again. See Fig. 7.3 for an example of this behavior. This behavior occurs very reliably. This means we can obtain a model with better validation set error (and thus, hopefully, better test set error) by returning to the parameter setting at the point in time with the lowest validation set error. Instead of running our optimization algorithm until we reach a (local) minimum, we run it until the error on the validation set has not improved for some amount of time. Every time the error on the validation set improves, we store a copy of the model parameters. When the training algorithm terminates, we return these parameters, rather than the latest parameters. This procedure is specified more formally in Alg. 7.1. This strategy is known as early stopping. It is probably the most commonly used form of regularization in deep learning. Its popularity is due both to its effectiveness and its simplicity. One way to think of early stopping is as a very efficient hyperparameter selection algorithm. In this view, the number of training steps is just another hyperparameter. We can see in Fig. 7.3 that this hyperparameter has a U-shaped validation set performance curve, just like most other model capacity control parameters. In this case, we are controlling the effective capacity of the model by determining how many steps it can take to fit the training set precisely. Most of the time, setting hyperparameters requires an expensive guess and check process, where we must set a hyperparameter at the start of training, then run training for several steps to see its effect. The "training time" hyperparameter is unique in that by definition a single run of training tries out many values of the hyperparameter. The only significant cost to choosing this hyperparameter automatically via early stopping is running the validation set evaluation periodically during training.
Figure 7.3: Learning curves showing how the negative log likelihood loss changes over time. In this example, we train a maxout network on MNIST, regularized with dropout. Observe that the training loss decreases consistently over time, but the validation set loss eventually begins to increase again.
Algorithm 7.1 The early stopping meta-algorithm for determining the best amount of time to train. This meta-algorithm is a general strategy that works well with a variety of training algorithms and ways of quantifying error on the validation set.

  Let n be the number of steps between evaluations.
  Let p be the "patience," the number of times to observe worsening validation set error before giving up.
  Let θ_o be the initial parameters.
  θ ← θ_o
  i ← 0
  j ← 0
  v ← ∞
  θ* ← θ
  i* ← i
  while j < p do
      Update θ by running the training algorithm for n steps.
      i ← i + n
      v′ ← ValidationSetError(θ)
      if v′ < v then
          j ← 0
          θ* ← θ
          i* ← i
          v ← v′
      else
          j ← j + 1
      end if
  end while
  Best parameters are θ*, best number of training steps is i*.

An additional cost to early stopping is the need to maintain a copy of the best parameters. This cost is generally negligible, because it is acceptable to store these parameters in a slower and larger form of memory (for example, training in GPU memory, but storing the optimal parameters in host memory or on a disk drive). Since the best parameters are written to infrequently and never read during training, these occasional slow writes have little effect on the total training time. Early stopping is a very unobtrusive form of regularization, in that it requires no change to the underlying training procedure, the objective function, or the set of allowable parameter values. This means that it is easy to use early stopping
without damaging the learning dynamics. This is in contrast to weight decay, where one must be careful not to use too much weight decay and trap the network in a bad local minimum corresponding to a solution with pathologically small weights. Early stopping may be used either alone or in conjunction with other regularization strategies. Even when using regularization strategies that modify the objective function to encourage better generalization, it is rare for the best generalization to occur at a local minimum of the training objective. Early stopping requires a validation set, which means some training data is not fed to the model. To best exploit this extra data, one can perform extra training after the initial training with early stopping has completed. In the second, extra training step, all of the training data is included. There are two basic strategies one can use for this second training procedure. One strategy is to initialize the model again and retrain on all of the data. In this second training pass, we train for the same number of steps as the early stopping procedure determined was optimal in the first pass. There are some subtleties associated with this procedure. For example, there is not a good way of knowing whether to retrain for the same number of parameter updates or the same number of passes through the dataset. On the second round of training, each pass through the dataset will require more parameter updates because the training set is bigger. Usually, if overfitting is a serious concern, you will want to retrain for the same number of epochs, rather than the same number of parameter updates. If the primary difficulty is optimization rather than generalization, then retraining for the same number of parameter updates makes more sense (but it is also less likely that you need to use a regularization method like early stopping in the first place). This algorithm is described more formally in Alg. 7.2.

Algorithm 7.2 A meta-algorithm for using early stopping to determine how long to train, then retraining on all the data.

  Let X^(train) and y^(train) be the training set.
  Split X^(train) and y^(train) into X^(subtrain), y^(subtrain), X^(valid), y^(valid).
  Run early stopping (Alg. 7.1) starting from random θ, using X^(subtrain) and y^(subtrain) for training data and X^(valid) and y^(valid) for validation data. This returns i*, the optimal number of steps.
  Set θ to random values again.
  Train on X^(train) and y^(train) for i* steps.

Another strategy for using all of the data is to keep the parameters obtained from the first round of training and then continue training, but now using all of the data. At this stage, we no longer have a guide for when to stop in terms of a number of steps. Instead, we can monitor the loss function on the validation set,
Figure 7.4: An illustration of the effect of early stopping (Right) as a form of regularization on the value of the optimal w, as compared to L2 regularization (Left) discussed in Sec. 7.2.1.
and continue training until it falls below the value of the training set objective at which the early stopping procedure halted. This strategy avoids the high cost of retraining the model from scratch, but is not as well-behaved. For example, there is not any guarantee that the objective on the validation set will ever reach the target value, so this strategy is not even guaranteed to terminate. This procedure is presented more formally in Alg. 7.3.

Algorithm 7.3 A meta-algorithm for using early stopping to determine at what objective value we start to overfit, then continuing training.

  Let X^(train) and y^(train) be the training set.
  Split X^(train) and y^(train) into X^(subtrain), y^(subtrain), X^(valid), y^(valid).
  Run early stopping (Alg. 7.1) starting from random θ, using X^(subtrain) and y^(subtrain) for training data and X^(valid) and y^(valid) for validation data. This updates θ.
  ε ← J(θ, X^(subtrain), y^(subtrain))
  while J(θ, X^(valid), y^(valid)) > ε do
      Train on X^(train) and y^(train) for n steps.
  end while
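A compact Python rendering of the early stopping meta-algorithm of Alg. 7.1 is given below (an added sketch, not part of the original text). The `train_n_steps` and `validation_error` callables are hypothetical placeholders for whatever training procedure and validation metric are in use.

```python
import copy

def early_stopping(theta, train_n_steps, validation_error, n=100, patience=5):
    """Train until the validation error fails to improve `patience` times in a row.

    train_n_steps(theta, n) -> theta   : runs n training steps and returns updated parameters
    validation_error(theta) -> float   : error of the model on the validation set
    """
    best_theta, best_steps = copy.deepcopy(theta), 0
    best_error = float("inf")
    steps, strikes = 0, 0
    while strikes < patience:
        theta = train_n_steps(theta, n)
        steps += n
        err = validation_error(theta)
        if err < best_error:
            best_error, best_theta, best_steps = err, copy.deepcopy(theta), steps
            strikes = 0
        else:
            strikes += 1
    return best_theta, best_steps
```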
Early stopping and the use of surrogate loss functions: A useful property of early stopping is that it can help to mitigate the problems caused by a mismatch between the surrogate loss function whose gradient we follow downhill and the underlying performance measure that we actually care about. For example, 0-1 classification loss has a derivative that is zero or undefined everywhere, so it is not appropriate for gradient-based optimization. We therefore train with a
surrogate such as the log likelihood of the correct class label. However, 0-1 loss is inexpensive to compute, so it can easily be used as an early stopping criterion. Often the 0-1 loss continues to decrease for a long time after the log likelihood has begun to worsen on the validation set. (TODO: figures. In figures/regularization, I have extracted the 0-1 loss but only used the nll for the regularization chapter's figures.) Early stopping is also useful because it reduces the computational cost of the training procedure. It is a form of regularization that does not require adding additional terms to the surrogate loss function, so we get the benefit of regularization without the cost of any additional gradient computations. It also means that we do not spend time approaching the exact local minimum of the surrogate loss.

How early stopping acts as a regularizer: So far we have stated that early stopping is a regularization strategy, but we have only backed up this claim by showing learning curves where the validation set error has a U-shaped curve. What is the actual mechanism by which early stopping regularizes the model? (Material for this discussion is taken from Bishop (1995a) and Sjöberg and Ljung (1995); for further details regarding the interpretation of early stopping as a regularizer, please consult these works.) Early stopping has the effect of restricting the optimization procedure to a relatively small volume of parameter space in the neighborhood of the initial parameter value θ_o. More specifically, imagine taking τ optimization steps (corresponding to τ training iterations) with learning rate η. We can view the product ητ as the reciprocal of a regularization parameter. Assuming the gradient is bounded, restricting both the number of iterations and the learning rate limits the volume of parameter space reachable from θ_o. Indeed, we can show how, in the case of a simple linear model with a quadratic error function and simple gradient descent, early stopping is equivalent to the L2 regularization of Section 7.2.1. In order to compare with classical L2 regularization, we again consider the simple setting where the parameters to be optimized are θ = w and we take a quadratic approximation to the objective function J in the neighborhood of the empirically optimal value of the weights w*:

Ĵ(w) = J(w*) + (1/2)(w − w*)^⊤ H (w − w*),    (7.25)
where, as before, H is the Hessian matrix of J with respect to w evaluated at w*. Given the assumption that w* is a minimum of J(w), we know that H is positive semi-definite, and the gradient is given by:

∇_w Ĵ(w) = H(w − w*).    (7.26)
Let us consider an initial parameter vector chosen at the origin, i.e. w^(0) = 0, and updating the parameters via gradient descent:

w^(τ) = w^(τ−1) − η ∇_w Ĵ(w^(τ−1))    (7.27)
      = w^(τ−1) − ηH (w^(τ−1) − w*),    (7.28)

w^(τ) − w* = (I − ηH)(w^(τ−1) − w*).    (7.29)
Let us now consider this expression in the space of the eigenvectors of H, i.e. we will again use the eigendecomposition of H: H = QΛQ^⊤, where Λ is a diagonal matrix and Q is an orthonormal basis of eigenvectors.

w^(τ) − w* = (I − ηQΛQ^⊤)(w^(τ−1) − w*)
Q^⊤(w^(τ) − w*) = (I − ηΛ) Q^⊤(w^(τ−1) − w*)

Assuming w^(0) = 0 and that η is chosen to be small enough to guarantee |1 − ηλ_i| < 1, we have after τ training updates (TODO: derive the expression below):

Q^⊤ w^(τ) = [I − (I − ηΛ)^τ] Q^⊤ w*.    (7.30)

Now, the expression for Q^⊤ w̃ in Eqn. 7.9 for L2 regularization can be rearranged as:

Q^⊤ w̃ = (Λ + αI)⁻¹ Λ Q^⊤ w*
Q^⊤ w̃ = [I − (Λ + αI)⁻¹ α] Q^⊤ w*.    (7.31)

Comparing Eqns. 7.30 and 7.31, we see that if (I − ηΛ)^τ = (Λ + αI)⁻¹ α, then L2 regularization and early stopping can be seen to be equivalent (at least under the quadratic approximation of the objective function). Going even further, by taking logarithms and using the series expansion for log(1 + x), we can conclude that if all λ_i are small (i.e. ηλ_i ≪ 1 and λ_i/α ≪ 1), then

τ ≈ 1/(ηα).    (7.32)
That is, under these assumptions, the number of training iterations τ plays a role inversely proportional to the L2 regularization parameter. Parameter values corresponding to directions of significant curvature (of the loss) are regularized less than directions of less curvature. Of course, in the context of early stopping, this really means that parameters that correspond to directions of significant curvature tend to be learned early relative to parameters corresponding to directions of less curvature.
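The equivalence derived above is easy to verify numerically in the quadratic setting. The following sketch (an added example; the Hessian, learning rate, and step count are arbitrary choices) runs τ gradient descent steps from the origin on a quadratic objective and compares the result with the L2-regularized solution using α = 1/(τη).

```python
import numpy as np

rng = np.random.RandomState(0)
A = rng.randn(4, 4)
H = 0.002 * (A @ A.T) + 0.001 * np.eye(4)   # small eigenvalues so eta*lambda_i << 1
w_star = rng.randn(4)

eta, tau = 0.1, 50
alpha = 1.0 / (tau * eta)

# Early stopping: tau gradient descent steps on 0.5 (w - w*)^T H (w - w*), starting from w = 0
w = np.zeros(4)
for _ in range(tau):
    w = w - eta * H @ (w - w_star)

# L2-regularized solution of the same quadratic: (H + alpha I)^-1 H w*
w_l2 = np.linalg.solve(H + alpha * np.eye(4), H @ w_star)

print(w)
print(w_l2)   # approximately equal when eta*lambda_i << 1 and lambda_i/alpha << 1
```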
7.8
Parameter Tying and Parameter Sharing
TODO(Aaron): start with bayesian perspective (parameters should be close), add practical constraints to get parameter sharing.

Thus far in this chapter, when we have discussed adding constraints or penalties to the parameters, we have always done so with respect to a fixed region or point. For example, L2 regularization (or weight decay) penalizes model parameters for deviating from the fixed value of zero. Sometimes, rather than applying a penalty for deviation from a fixed point in parameter space, we wish to express our prior knowledge that certain parameters should be close to one another, or even exactly equal.

Convolutional Neural Networks. By far the most popular and extensive use of parameter sharing occurs in convolutional neural networks (CNNs). CNNs will be discussed in detail in Chapter 9; here we note only how they take advantage of parameter sharing. CNNs, as we know them today, were originally developed for application to computer vision (LeCun et al., 1989). Natural images have the particular statistical property that they are invariant under 2-dimensional translation (in the image plane). This property is a natural consequence of the image generation process: the same scene can be photographed twice with the center of one image being a translation of the center of the other image (i.e. there is no natural origin in the image plane). CNNs were designed to take this property into account by sharing parameters across the image plane. If the image shares its statistical structure across the image plane, then so too should the model: feature detectors found to be useful in one region should be generalized across all regions. Parameter sharing has allowed CNNs to dramatically lower the number of unique model parameters, and has allowed them to significantly increase network sizes without requiring a corresponding increase in training data. It remains one of the best examples of how to effectively incorporate domain knowledge into the network architecture.
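To make the parameter savings concrete, the following back-of-the-envelope sketch (an added example with arbitrary layer sizes) compares the number of unique parameters in a fully connected layer with those in a convolutional layer whose kernel is shared across all spatial positions.

```python
# Map a 32x32 single-channel image to a 32x32 single-channel feature map.
height, width = 32, 32

# Fully connected: every output pixel has its own weight for every input pixel.
fc_params = (height * width) * (height * width)          # 1,048,576 weights

# Convolutional: one 5x5 kernel shared across every output position.
conv_params = 5 * 5                                       # 25 weights

print(fc_params, conv_params, fc_params // conv_params)   # roughly 40,000x fewer parameters
```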
7.9
Sparse Representations
TODO(Aaron) Most deep learning models have some concept of representations.
7.10
Bagging and Other Ensemble Methods
Bagging (short for bootstrap aggregating) is a technique for reducing generalization error by combining several models (Breiman, 1994). The idea is to train several different models separately, then have all of the models vote on the output for
test examples. This is an example of a general strategy in machine learning called model averaging. Techniques employing this strategy are known as ensemble methods. The reason that model averaging works is that different models will usually make different errors on the test set to some extent. Consider for example a set of k regression models. Suppose that each model makes an error ε_i on each example, with the errors drawn from a zero-mean multivariate normal distribution with variances E[ε_i²] = v and covariances E[ε_i ε_j] = c. Then the error made by the average prediction of all the ensemble models is (1/k) Σ_i ε_i. The expected squared error is

E[ ( (1/k) Σ_i ε_i )² ] = (1/k²) E[ Σ_i ( ε_i² + Σ_{j≠i} ε_i ε_j ) ]
                        = (1/k) v + ((k − 1)/k) c.

In the case where the errors are perfectly correlated and c = v, this reduces to v, and the model averaging does not help at all. But in the case where the errors are perfectly uncorrelated and c = 0, the expected squared error of the ensemble is only (1/k) v. This means that the expected squared error of the ensemble decreases linearly with the ensemble size. In other words, on average, the ensemble will perform at least as well as any of its members, and if the members make independent errors, the ensemble will perform significantly better than any of its members. Different ensemble methods construct the ensemble of models in different ways. For example, each member of the ensemble could be formed by training a completely different kind of model using a different algorithm or cost function. Bagging is a method that allows the same kind of model, training algorithm and cost function to be reused several times. Specifically, bagging involves constructing k different datasets. Each dataset has the same number of examples as the original dataset, but each dataset is constructed by sampling with replacement from the original dataset. This means that, with high probability, each dataset is missing some of the examples from the original dataset and also contains several duplicate examples. Model i is then trained on dataset i. The differences between which examples are included in each dataset result in differences between the trained models. See Fig. 7.5 for an example.
Figure 7.5: A cartoon depiction of how bagging works. Suppose we train an '8' detector on the dataset depicted above, containing an '8', a '6', and a '9'. Suppose we make two different resampled datasets. The bagging training procedure is to construct each of these datasets by sampling with replacement. The first dataset omits the '9' and repeats the '8'. On this dataset, the detector learns that a loop on top of the digit corresponds to an '8'. On the second dataset, we repeat the '9' and omit the '6'. In this case, the detector learns that a loop on the bottom of the digit corresponds to an '8'. Each of these individual classification rules is brittle, but if we average their output then the detector is robust, achieving maximal confidence only when both loops of the '8' are present.
Neural networks reach a wide enough variety of solution points that they can often benefit from model averaging even if all of the models are trained on the same dataset. Differences in random initialization, random selection of minibatches, differences in hyperparameters, or different outcomes of non-deterministic implementations of neural networks are often enough to cause different members of the ensemble to make partially independent errors. Model averaging is an extremely powerful and reliable method for reducing generalization error. Its use is usually discouraged when benchmarking algorithms for scientific papers, because any machine learning algorithm can benefit substantially from model averaging at the price of increased computation and memory. For this reason, benchmark comparisons are usually made using a single model. Machine learning contests are usually won by methods using model averaging over dozens of models. A recent prominent example is the Netflix Grand Prize (Koren, 2009). Not all techniques for constructing ensembles are designed to make the ensemble more regularized than the individual models. For example, a technique called boosting constructs an ensemble with higher capacity than the individual models.
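Below is a minimal sketch of bagging for regression (an added example; the base learner here is ordinary least squares and the dataset is synthetic), illustrating the resample-with-replacement construction of each ensemble member and the averaging of their predictions.

```python
import numpy as np

rng = np.random.RandomState(0)
m, d, k = 100, 5, 10
X = rng.randn(m, d)
y = X @ rng.randn(d) + 0.5 * rng.randn(m)

models = []
for _ in range(k):
    idx = rng.randint(0, m, size=m)          # sample m examples with replacement
    Xb, yb = X[idx], y[idx]                  # bootstrap replica of the training set
    w = np.linalg.lstsq(Xb, yb, rcond=None)[0]
    models.append(w)

def ensemble_predict(X_new):
    # Average (the regression analogue of "voting") over all ensemble members
    return np.mean([X_new @ w for w in models], axis=0)

print(ensemble_predict(X[:3]))
```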
7.11
Dropout
Because deep models have a high degree of expressive power, they are capable of overfitting significantly. While this problem can be solved by using a very large dataset, large datasets are not always available. Dropout (Srivastava et al., 2014) provides a computationally inexpensive but powerful method of regularizing a broad family of models. Dropout can be thought of as a method of making bagging practical for neural networks. Bagging involves training multiple models, and evaluating multiple models on each test example. This seems impractical when each model is a neural network, since training and evaluating a neural network is costly in terms of runtime and storing a neural network is costly in terms of memory. Dropout provides an inexpensive approximation to training and evaluating a bagged ensemble of exponentially many neural networks. Specifically, dropout trains the ensemble consisting of all sub-networks that can be formed by removing units from an underlying base network. In most modern neural networks, based on a series of affine transformations and nonlinearities, we can effectively remove a unit from a network by multiplying its state by zero. This procedure requires some slight modification for models such as radial basis function networks, which take the difference between the unit's state and some reference value. Here, we will present the dropout algorithm in terms of
multiplication by zero for simplicity, but it can be trivially modified to work with other operations that remove a unit from the network.

TODO: describe training algorithm, with reference to bagging. TODO: include figures from IG's job talk. TODO: training doesn't rely on the model being probabilistic. TODO: describe inference algorithm, with reference to bagging. TODO: inference does rely on the model being probabilistic, and specifically, exponential family?

For many classes of models that do not have nonlinear hidden units, the weight scaling inference rule is exact. For a simple example, consider a softmax regression classifier with n input variables represented by the vector v:

P(y = y | v) = softmax(W^⊤ v + b)_y.
We can index into the family of sub-models by element-wise multiplication of the input with a binary vector d:

P(y = y | v; d) = softmax(W^⊤ (d ⊙ v) + b)_y.
The ensemble predictor is defined by re-normalizing the geometric mean over all ensemble members' predictions:

P_ensemble(y = y | v) = P̃_ensemble(y = y | v) / Σ_{y′} P̃_ensemble(y = y′ | v),

where

P̃_ensemble(y = y | v) = [ Π_{d ∈ {0,1}ⁿ} P(y = y | v; d) ]^(1/2ⁿ).
To see that the weight scaling rule is exact, we can simplify P̃_ensemble:

P̃_ensemble(y = y | v) = [ Π_{d ∈ {0,1}ⁿ} P(y = y | v; d) ]^(1/2ⁿ)
  = [ Π_{d ∈ {0,1}ⁿ} softmax(W^⊤ (d ⊙ v) + b)_y ]^(1/2ⁿ)
  = [ Π_{d ∈ {0,1}ⁿ} exp(W_{y,:}^⊤ (d ⊙ v) + b_y) / Σ_{y′} exp(W_{y′,:}^⊤ (d ⊙ v) + b_{y′}) ]^(1/2ⁿ).    (7.33)
Because P̃_ensemble will be re-normalized, we can safely ignore multiplication by factors that are constant with respect to y:

P̃_ensemble(y = y | v) ∝ [ Π_{d ∈ {0,1}ⁿ} exp(W_{y,:}^⊤ (d ⊙ v) + b_y) ]^(1/2ⁿ)
  = exp( (1/2ⁿ) Σ_{d ∈ {0,1}ⁿ} ( W_{y,:}^⊤ (d ⊙ v) + b_y ) )
  = exp( (1/2) W_{y,:}^⊤ v + b_y ).
Substituting this back into equation 7.33, we obtain a softmax classifier with weights (1/2)W. The weight scaling rule is also exact in other settings, including regression networks with conditionally normal outputs, and deep networks that have hidden layers without nonlinearities. However, the weight scaling rule is only an approximation for deep models that have nonlinearities, and this approximation has not been theoretically characterized. Fortunately, it works well empirically. Goodfellow et al. (2013a) found empirically that for deep networks with nonlinearities, the weight scaling rule can work better (in terms of classification accuracy) than Monte Carlo approximations to the ensemble predictor, even if the Monte Carlo approximation is allowed to sample up to 1,000 sub-networks. Srivastava et al. (2014) showed that dropout is more effective than other standard computationally inexpensive regularizers, such as weight decay, filter norm constraints, and sparse activity regularization. Dropout may also be combined with more expensive forms of regularization such as unsupervised pretraining to yield an improvement. As of this writing, the state of the art classification error rate on the permutation invariant MNIST dataset (not using any prior knowledge about images) is attained by a classifier that uses both dropout regularization and deep Boltzmann machine pretraining. However, combining dropout with unsupervised pretraining has not become a popular strategy for larger models and more challenging datasets. One advantage of dropout is that it is very computationally cheap. Using dropout during training requires only O(n) computation per example per update, to generate n random binary numbers and multiply them by the state. Depending on the implementation, it may also require O(n) memory to store these binary numbers until the backpropagation stage. Running inference in the trained model has the same cost per-example as if dropout were not used, though we must pay the cost of dividing the weights by 2 once before beginning to run inference on examples.
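The following sketch (an added example, not from the original text; the layer sizes and the 1/2 drop probability are illustrative choices) shows a dropout forward pass for a small two-layer network: during training, hidden unit states are multiplied by a fresh binary mask, and at inference time the weight scaling rule divides the outgoing weights by 2 instead.

```python
import numpy as np

rng = np.random.RandomState(0)
W1, b1 = 0.1 * rng.randn(8, 16), np.zeros(16)
W2, b2 = 0.1 * rng.randn(16, 3), np.zeros(3)

def forward(x, train=True):
    h = np.maximum(0.0, x @ W1 + b1)                   # ReLU hidden layer
    if train:
        mask = rng.binomial(1, 0.5, size=h.shape)      # drop each hidden unit with prob 1/2
        h = h * mask
        logits = h @ W2 + b2
    else:
        logits = h @ (0.5 * W2) + b2                   # weight scaling rule at inference time
    z = logits - logits.max(axis=-1, keepdims=True)
    return np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)

x = rng.randn(4, 8)
print(forward(x, train=True))    # stochastic: a different sub-network every call
print(forward(x, train=False))   # deterministic weight-scaled approximation to the ensemble
```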
One significant advantage of dropout is that it does not significantly limit the type of model or training procedure that can be used. It works well with nearly any model that uses a distributed representation and can be trained with stochastic gradient descent. This includes feedforward neural networks, probabilistic models such as restricted Boltzmann machines (Srivastava et al., 2014), and recurrent neural networks (Pascanu et al., 2014a). This is very different from many other neural network regularization strategies, such as those based on unsupervised pretraining or semi-supervised learning. Such regularization strategies often impose restrictions such as not being able to use rectified linear units or max pooling. Often these restrictions incur enough harm to outweigh the benefit provided by the regularization strategy. Though the cost per-step of applying dropout to a specific model is negligible, the cost of using dropout in a complete system can be significant. This is because the size of the optimal model (in terms of validation set error) is usually much larger, and because the number of steps required to reach convergence increases. This is of course to be expected from a regularization method, but it does mean that for very large datasets (as a rough rule of thumb, dropout is unlikely to be beneficial when more than 15 million training examples are available, though the exact boundary may be highly problem dependent) it is often preferable not to use dropout at all, just to speed training and reduce the computational cost of the final model. When extremely few labeled training examples are available, dropout is less effective. Bayesian neural networks (Neal, 1996) outperform dropout on the Alternative Splicing Dataset (Xiong et al., 2011), where fewer than 5,000 examples are available (Srivastava et al., 2014). When additional unlabeled data is available, unsupervised feature learning can gain an advantage over dropout.

TODO: "Dropout Training as Adaptive Regularization"? (Wager et al., 2013). TODO: perspective as L2 regularization. TODO: connection to AdaGrad? TODO: semi-supervised variant. TODO: Baldi paper (Baldi and Sadowski, 2013). TODO: DWF paper (Warde-Farley et al., 2014). TODO: using the geometric mean is not a problem. TODO: dropout boosting, it's not just noise robustness. TODO: what was the conclusion about mixability (DWF)?

The stochasticity used while training with dropout is not a necessary part of the model's success. It is just a means of approximating the sum over all sub-models. Wang and Manning (2013) derived analytical approximations to this marginalization. Their approximation, known as fast dropout, resulted in faster convergence time due to the reduced stochasticity in the computation of the gradient. This method can also be applied at test time, as a more principled (but also more computationally expensive) approximation to the average over all sub-networks than the weight scaling approximation. Fast dropout has been
used to match the performance of standard dropout on small neural network problems, but has not yet yielded a significant improvement or been applied to a large problem. Dropout has inspired other stochastic approaches to training exponentially large ensembles of models that share weights. DropConnect is a special case of dropout where each product between a single scalar weight and a single hidden unit state is considered a unit that can be dropped (Wan et al., 2013). Stochastic pooling is a form of randomized pooling (see chapter 9.3) for building ensembles of convolutional networks with each convolutional network attending to different spatial locations of each feature map. So far, dropout remains the most widely used implicit ensemble method. TODO–improved performance with maxout units and probably ReLUs
7.12
7.13
Adversarial Training
In many cases, neural networks have begun to reach human performance when evaluated on an i.i.d. test set. It is natural therefore to wonder whether these models have obtained a true human-level understanding of these tasks. In order to probe the level of understanding a network has of the underlying task, we can search for examples that the model misclassifies. Szegedy et al. (2014b) found that even neural networks that perform at human-level accuracy have a nearly 100% error rate on examples that are intentionally constructed by using an optimization procedure to search for an input x′ near a data point x such that the model output is very different at x′. In many cases, x′ can be so similar to x that a human observer cannot tell the difference between the original example and the adversarial example, but the network can make highly different predictions. See Fig. 7.7 for an example. Adversarial examples have many implications, for example in computer security, that are beyond the scope of this chapter. However, they are interesting in the context of regularization because one can reduce the error rate on the original i.i.d. test set by training on adversarially perturbed examples from the training set (Szegedy et al., 2014b). Goodfellow et al. (2014b) showed that one of the primary causes of these adversarial examples is excessive linearity. Neural networks are built out of primarily linear building blocks, and in some empirical experiments the overall function they implement proves to be highly linear as a result. These linear functions are easy to optimize. Unfortunately, the value of a linear function can change very rapidly if it has numerous inputs. If we change each input by ε, then a linear function with weights w can change by as much as ε‖w‖₁, which can be a very large amount if w is high-dimensional. Adversarial training discourages this highly sensitive locally linear behavior by encouraging the network to be locally constant in the neighborhood of the training data. This can be seen as a way of introducing the local smoothness prior into supervised neural nets. This phenomenon helps to illustrate the power of using a large function family in combination with aggressive regularization. Purely linear models, like logistic regression, are not able to resist adversarial examples because they are forced to be linear. Neural networks are able to represent functions that can range from nearly linear to nearly locally constant and thus have the flexibility to capture linear trends in the training data while still learning to resist local perturbation.
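A minimal sketch of generating such a perturbation with the fast gradient sign method is shown below (an added example, not part of the original text; it uses a plain logistic regression model in NumPy so that the gradient with respect to the input can be written in closed form, and the value of ε is an arbitrary choice).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """Fast gradient sign perturbation of input x for a logistic regression model.

    For cross-entropy loss J(theta, x, y), the gradient w.r.t. x is (sigmoid(w.x + b) - y) * w.
    """
    grad_x = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.RandomState(0)
w, b = rng.randn(100), 0.0
x, y = rng.randn(100), 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.1)
print(sigmoid(np.dot(w, x) + b), sigmoid(np.dot(w, x_adv) + b))  # confidence in class 1 drops
```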
Figure 7.6: Multi-task learning can be cast in several ways in deep learning frameworks and this figure illustrates the common situation where the tasks share a common input but involve different target random variables. The lower layers of a deep network (whether it is supervised and feedforward or includes a generative component with downward arrows) can be shared across such tasks, while task-specific parameters can be learned on top of a shared representation (associated respectively with h1 and h2 in the figure). The underlying assumption is that there exists a common pool of factors that explain the variations in the input X, while each task is associated with a subset of these factors. In the figure, it is additionally assumed that top-level hidden units are specialized to each task, while some intermediate-level representation is shared across all tasks. Note that in the unsupervised learning context, it makes sense for some of the top-level factors to be associated with none of the output tasks (h3): these are the factors that explain some of the input variations but are not relevant for these tasks.
[Figure 7.7 panels: x ("panda", 57.7% confidence)  +  .007 × sign(∇_x J(θ, x, y)) ("nematode", 8.2% confidence)  =  x + .007 sign(∇_x J(θ, x, y)) ("gibbon", 99.3% confidence)]
Figure 7.7: A demonstration of adversarial example generation applied to GoogLeNet (Szegedy et al., 2014a) on ImageNet. By adding an imperceptibly small vector whose elements are equal to the sign of the elements of the gradient of the cost function with respect to the input, we can change GoogLeNet’s classification of the image. Reproduced with permission from Goodfellow et al. (2014b).
Chapter 8
Optimization for Training Deep Models

Deep learning algorithms involve optimization in many contexts. For example, we often solve optimization problems analytically in order to prove that an algorithm has a certain property. Inference in a probabilistic model can be cast as an optimization problem. Of all of the many optimization problems involved in deep learning, the most difficult is neural network training. It is quite common to invest days to months of time on hundreds of machines in order to solve even a single instance of the neural network training problem. Because this problem is so important and so expensive, a specialized set of optimization techniques has been developed for solving it. This chapter presents these optimization techniques for neural network training. If you're unfamiliar with the basic principles of gradient-based optimization, we suggest reviewing Chapter 4. That chapter includes a brief overview of numerical optimization in general. This chapter focuses on one particular case of optimization: minimizing a cost function J(X^(train), θ) with respect to the model parameters θ.
8.1
Optimization for Model Training
Optimization algorithms used for training of deep models differ from traditional optimization algorithms in several ways. Machine learning usually acts indirectly: we care about some performance measure P that we do not know how to directly influence, so instead we reduce some cost function J(θ) in the hope that it will improve P. This is in contrast to pure optimization, where minimizing J is a goal in and of itself. Optimization algorithms for training deep models also typically include some specialization on the specific structure of machine learning cost functions.
8.1.1
Empirical Risk Minimization
Suppose that we have input features x, targets y, and some loss function L(x, y). Our ultimate goal is to minimize E_{x,y∼p(x,y)}[L(x, y)]. This quantity is known as the risk. If we knew the true distribution p(x, y), this would be an optimization task solvable by an optimization algorithm. However, when we do not know p(x, y) but only have a training set of samples from it, we have a machine learning problem. The simplest way to convert a machine learning problem back into an optimization problem is to minimize the expected loss on the training set. This means replacing the true distribution p(x, y) with the empirical distribution p̂(x, y) defined by the training set. We now minimize the empirical risk

E_{x,y∼p̂(x,y)}[L(x, y)] = (1/m) Σ_{i=1}^{m} L(x⁽ⁱ⁾, y⁽ⁱ⁾),
where m is the number of training examples. This process is known as empirical risk minimization. In this setting, machine learning is still very similar to straightforward optimization. Rather than optimizing the risk directly, we optimize the empirical risk, and hope that the risk decreases significantly as well. A variety of theoretical results establish conditions under which the true risk can be expected to decrease by various amounts. However, empirical risk minimization is prone to overfitting. Models with high capacity can simply memorize the training set. In many cases, empirical risk minimization is not really feasible. The most effective modern optimization algorithms are based on gradient descent, but many useful loss functions, such as 0-1 loss, have no useful derivatives (the derivative is either zero or undefined everywhere). These two problems mean that, in the context of deep learning, we rarely use empirical risk minimization. Instead, we must use a slightly different approach, in which the quantity that we actually optimize is even more different from the quantity that we truly want to optimize. TODO– make sure 0-1 loss is defined and in the index
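As a concrete (added) illustration, the sketch below computes the empirical risk for a toy linear model with squared-error loss; any loss function and model could be substituted.

```python
import numpy as np

def empirical_risk(loss, predict, X, y):
    """Average per-example loss over the training set: (1/m) sum_i L(f(x_i), y_i)."""
    return np.mean([loss(predict(x_i), y_i) for x_i, y_i in zip(X, y)])

squared_error = lambda y_hat, y: (y_hat - y) ** 2
w = np.array([0.5, -1.0])
predict = lambda x: x @ w

rng = np.random.RandomState(0)
X = rng.randn(100, 2)
y = X @ np.array([1.0, -1.0])
print(empirical_risk(squared_error, predict, X, y))
```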
8.1.2
Surrogate Loss Functions
TODO: coordinate with Yoshua / coordinate with MLP / ML chapters. Do we use the term loss function = map from a specific example to a real number, or do we use it interchangeably with objective function / cost function? It seems some literature uses "loss function" in a very general sense while others use it to mean specifically a single-example cost that you can take the expectation of, etc. This terminology seems a bit sub-optimal since it relies a lot on using English words with essentially the same meaning to represent different things with precise technical meanings.
Are "surrogate loss functions" specifically replacing the cost for an individual example, or does this also include things like minimizing the empirical risk rather than the true risk, adding a regularization term to the likelihood terms, etc.?
TODO: in some cases, a surrogate loss function actually results in being able to learn more. For example, test 0-1 loss continues to decrease for a long time after train 0-1 loss has reached zero when training using the log likelihood surrogate.
In some cases, using a surrogate loss function allows us to extract more information
8.1.3 Generalization
An important property of stochastic gradient descent (introduced in Sec. 8.3.2) is that when training examples are drawn from an effectively infinite stream of data, each gradient estimate is an unbiased estimate of the gradient of the generalization error, so the algorithm optimizes the generalization error directly. Another very important difference between optimization in general and optimization as we use it for training algorithms is that training algorithms do not usually halt at a local minimum. Instead, using a regularization method known as early stopping (see Sec. 7.7), they halt whenever overfitting begins to occur. This is often in the middle of a wide, flat region, but it can also occur on a steep part of the surrogate loss function. This is in contrast to general optimization, where convergence is usually defined by arriving at a point that is very near a (local) minimum.
8.1.4 Batches and Minibatches
One aspect of machine learning algorithms that separates them from general optimization algorithms is that the objective function usually decomposes as a sum over the training examples. Optimization algorithms for machine learning typically compute each update to the parameters based on a subset of the terms of the objective function, not based on the complete objective function itself. For example, maximum likelihood estimation problems decompose into a sum over each example:

J(θ) = ∑_{i=1}^{m} −log p_model(y^{(i)} | x^{(i)}; θ).

Stochastic gradient descent (Sec. 8.3.2) exploits this structure by estimating the gradient from a small batch, or minibatch, of examples at each update. Training examples are often redundant, so the best computational efficiency usually comes from minibatches that contain more than one example but far fewer than the whole training set; very small minibatches also make poor use of multicore architectures. Because quantities such as the gradient and the Hessian are estimated from a different minibatch at each step, they fluctuate from one update to the next. It is also important to sample minibatches from a shuffled ordering of the training set, either by shuffling once before training or by reshuffling before every pass; a complete pass through the training set is called an epoch.
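The following sketch (an illustration, not code from the text) shows how this sum-over-examples structure is typically exploited in practice: the training set is shuffled at the start of each epoch and traversed in minibatches, so that each parameter update touches only a small subset of the terms of the objective.

```python
import numpy as np

def minibatch_indices(m, batch_size, rng):
    """Yield index arrays covering one epoch, after shuffling the examples."""
    perm = rng.permutation(m)              # reshuffle so minibatches differ between epochs
    for start in range(0, m, batch_size):
        yield perm[start:start + batch_size]

rng = np.random.RandomState(0)
m, batch_size, n_epochs = 1000, 64, 2
X, y = rng.randn(m, 10), rng.randn(m)      # placeholder training data

for epoch in range(n_epochs):              # one epoch = one full pass through the data
    for idx in minibatch_indices(m, batch_size, rng):
        X_batch, y_batch = X[idx], y[idx]
        # ...compute the gradient on this minibatch and update the parameters...
```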
8.1.5 Data Parallelism
One way to accelerate training is data parallelism, in which the computation of gradient estimates is distributed over multiple cores or machines that each process a different subset of the data. Asynchronous implementations such as Hogwild! and DistBelief apply updates to a shared set of parameters without waiting for all workers to finish, trading some staleness in the gradients for much higher throughput.
8.2 Challenges in Optimization

8.2.1 Local Minima
TODO check whether this is already covered in numerical.tex
8.2.2 Ill-Conditioning
TODO this is definitely already covered in numerical.tex
8.2.3 Plateaus, Saddle Points, and Other Flat Regions
The long-held belief that neural networks are hopeless to train because they are fraught with local minima has been one of the reasons for the “neural networks winter” of the 1995-2005 decade. Indeed, one can show that there may be an exponentially large number of local minima, even in the simplest neural network optimization problems (Sontag and Sussman, 1989; Brady et al., 1989; Gori and Tesi, 1992). More recent theoretical work has shown, however, that saddle points (and the flat regions surrounding them) are important barriers to training neural networks, and may be more important obstacles than local minima (Dauphin et al., 2014; Choromanska et al., 2014).
8.2.4 Cliffs
Whereas the issues of ill-conditioning and saddle points discussed in the previous sections arise because of the second-order structure of the objective function (as a function of the parameters), neural networks involve stronger non-linearities which do not fit well with this picture. In particular, the second-order Taylor series approximation of the objective function yields a symmetric view of the landscape around the minimum, oriented according to the axes defined by the principal eigenvectors of the Hessian matrix. Second-order methods and momentum or gradient-averaging methods introduced in Section 8.4 are able to reduce the difficulty due to ill-conditioning by increasing the size of the steps in the low-curvature directions (the “valley”, in Figure 8.1) and decreasing the size of the steps in the high-curvature directions (the steep sides of the valley, in the figure).
Figure 8.1: The traditional view of the optimization difficulty in neural networks is inspired by the ill-conditioning problem in quadratic optimization: some directions have a high curvature (second derivative), corresponding to the rising sides of the valley, and other directions have a low curvature, corresponding to the smooth slope of the valley. Most second-order methods, as well as momentum or gradient averaging methods, are meant to address that problem by increasing the step size along the valley floor (where progress pays off most in the long run) and decreasing it in the directions of steep rise, which would otherwise lead to oscillations. The objective is to smoothly go down, staying at the bottom of the valley.
However, although classical second order methods can help, as shown in Figure 8.2, due to higher order derivatives, the objective function may have a lot more non-linearity, which often does not have the nice symmetrical shapes that the second-order “valley” picture builds in our mind. Instead, there are cliffs where the gradient rises sharply. When the parameters approach a cliff region, the gradient update step can move the learner towards a very bad configuration, ruining much of the progress made during recent training iterations.
Figure 8.2: Contrary to what is shown in Figure 8.1, the cost function for highly nonlinear deep neural networks or for recurrent neural networks is typically not made of symmetrical sides. As shown in the figure, there are sharp non-linearities that give rise to very high derivatives in some places. When the parameters get close to such a cliff region, a gradient descent update can catapult the parameters very far, possibly ruining a lot of the optimization work that had been done. Figure graciously provided by Razvan Pascanu (Pascanu, 2014).
As illustrated in Figure 8.3, the cliff can be dangerous whether we approach it from above or from below, but fortunately there are some fairly straightforward heuristics that allow one to avoid its most serious consequences. The basic idea is to limit the size of the jumps that one would make. Indeed, one should keep in mind that when we use the gradient to make an update of the parameters, we are relying on the assumption of infinitesimal moves. There is no guarantee that making a finite step of the parameters θ in the direction of the gradient will yield an improvement. The only thing that is guaranteed is that a small enough step in that direction will be helpful. As we can see from Figure 8.3, in the presence of a cliff (and in general in the presence of very large gradients), the decrease in the objective function expected from going in the direction of the gradient is only valid for a very small step. In fact, because the objective function is usually bounded in its actual value (within a finite domain), when the gradient is large at θ, it typically only remains like this (especially keeping its sign) in a small region around θ. Otherwise, the value of the objective function would have to change a lot: if the slope remained consistently large as we moved in that direction, we would be able to decrease the objective function value by a
very large amount by following it, simply because the total change is the integral over some path of the directional derivatives along that path.
Figure 8.3: To address the presence of cliffs such as shown in Figure 8.2, a useful heuristic is to clip the magnitude of the gradient, only keeping its direction if its magnitude is above a threshold (which is a hyperparameter, although not a very critical one). This helps to avoid the destructive big moves which would happen when approaching the cliff, either from above or from below. Figure graciously provided by Razvan Pascanu (Pascanu, 2014).
The gradient clipping heuristics are described in more detail in Section 10.7.6. The basic idea is to bound the magnitude of the update step, i.e., not trust the gradient too much when it is very large in magnitude. The context in which such cliffs have been shown to arise in particular is that of recurrent neural networks, when considering long sequences, as discussed in the next section.
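A minimal sketch of the norm-clipping heuristic just described, assuming the gradient is available as a NumPy array; the threshold value used here is an arbitrary illustrative choice.

```python
import numpy as np

def clip_gradient(grad, threshold):
    """If the gradient norm exceeds the threshold, rescale the gradient so its
    norm equals the threshold while keeping its direction unchanged."""
    norm = np.linalg.norm(grad)
    if norm > threshold:
        grad = grad * (threshold / norm)
    return grad

g = np.array([100.0, -250.0, 3.0])        # a very large gradient, e.g. near a cliff
print(clip_gradient(g, threshold=5.0))    # same direction, norm reduced to 5
```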
8.2.5 Vanishing and Exploding Gradients: An Introduction to the Issue of Learning Long-Term Dependencies
Parametrized dynamical systems such as recurrent neural networks (Chapter 10) face a particular optimization problem which is different but related to that of training very deep networks. We introduce this issue here and refer the reader to Section 10.7 for a deeper treatment, along with a discussion of approaches that have been proposed to reduce this difficulty.

Exploding or Vanishing Product of Jacobians

The simplest explanation of the problem, which is shared among very deep nets and recurrent nets, is that in both cases the final output is the composition of a large number of non-linear transformations. Even though each of these non-linear stages may be relatively smooth (e.g. the composition of an affine transformation with a hyperbolic tangent or sigmoid), their composition is going to be much
“more non-linear”, in the sense that derivatives through the whole composition will tend to be either very small or very large, varying considerably from one region of input or parameter space to another. This arises simply because the Jacobian (matrix of derivatives) of a composition is the product of the Jacobians of each stage, i.e., if

f = f_T ∘ f_{T−1} ∘ ⋯ ∘ f_2 ∘ f_1,

then the Jacobian matrix of derivatives of f(x) with respect to its input vector x is the product

f' = f'_T f'_{T−1} ⋯ f'_2 f'_1,    (8.1)

where

f' = ∂f(x)/∂x  and  f'_t = ∂f_t(a_t)/∂a_t,  with  a_t = f_{t−1}(f_{t−2}(⋯ f_2(f_1(x)))).

In other words, composition in the forward computation corresponds to multiplication of the per-stage Jacobian matrices when computing derivatives. This is illustrated in Figure 8.4.
Figure 8.4: When composing many non-linearities (like the activation non-linearity in a deep or recurrent neural network), the result is highly non-linear, typically with most of the values associated with a tiny derivative and some values associated with a very large derivative.
In the scalar case, we can see why multiplying many numbers together tends to produce a result that is either very large or very small. In the special case where all the numbers in the product
have the same value α, this is obvious, since α^T goes to 0 if α < 1 and goes to ∞ if α > 1 as T increases. The more general case of non-identical numbers can be understood by taking the logarithm of these numbers, considering them to be random, and computing the variance of the sum of these logarithms. Clearly, although some cancellation can happen, the variance grows with T, and in fact if those numbers are independent, the variance grows linearly with T, i.e., the size of the sum (as measured by its standard deviation) grows as √T, which means that the product grows roughly as e^T (consider the variance of a log-normal variate X if log X is normal with mean 0 and variance T).

It would be interesting to push this analysis to the case of multiplying square matrices instead of multiplying numbers, but one might expect qualitatively similar conclusions, i.e., that the size of the product somehow grows with the number of matrices, and that it grows exponentially. In the case of matrices, one can get a new form of cancellation due to leading eigenvectors being well aligned or not. The product of matrices will blow up only if, among their leading eigenvectors with eigenvalue greater than 1, there is enough “in common” (in the sense of the appropriate dot products of leading eigenvectors of one matrix and another).

However, this analysis was for the case where these numbers are independent. In the case of an ordinary recurrent neural network (developed in more detail in Chapter 10), these Jacobian matrices are highly related to each other. Each layer-wise Jacobian is actually the product of two matrices: (a) the recurrent matrix W and (b) the diagonal matrix whose entries are the derivatives of the non-linearities associated with the hidden units, which vary depending on the time step. This makes it likely that successive Jacobians have similar eigenvectors, making the product of these Jacobians explode or vanish even faster.

Consequence for Recurrent Networks: Difficulty of Learning Long-Term Dependencies

The consequence of the exponential convergence of these products of Jacobians towards either very small or very large values is that it makes the learning of long-term dependencies particularly difficult, as we explain below; this issue was independently introduced for the first time in Hochreiter (1991) and Bengio et al. (1993, 1994).

Consider a fairly general parametrized dynamical system (which includes classical recurrent networks as a special case, as well as all their known variants), processing a sequence of inputs x_1, . . . , x_t, . . ., by iterating over the transition operator:

s_t = F_θ(s_{t−1}, x_t)    (8.2)

where s_t is called the state of the system and F_θ is the recurrent function that
maps the previous state and current input to the next state. The state can be used to produce an output via an output function,

o_t = g_ω(s_t),    (8.3)

and a loss L_t is computed at each time step t as a function of o_t and possibly of some targets y_t. Let us consider the gradient of a loss L_T at time T with respect to the parameters θ of the recurrent function F_θ. One particular way to decompose the gradient ∂L_T/∂θ using the chain rule is the following:

∂L_T/∂θ = ∑_{t≤T} (∂L_T/∂s_t) (∂s_t/∂θ)
        = ∑_{t≤T} (∂L_T/∂s_T) (∂s_T/∂s_t) (∂F_θ(s_{t−1}, x_t)/∂θ)    (8.4)

where the last Jacobian matrix only accounts for the immediate effect of θ as a parameter of F_θ when computing s_t = F_θ(s_{t−1}, x_t), i.e., not taking into account the indirect effect of θ via s_{t−1} (otherwise there would be double counting and the result would be incorrect). To see that this decomposition is correct, please refer to the notions of gradient computation in a flow graph introduced in Section 6.4, and note that we can construct a graph in which θ influences each s_t, each of which influences L_T via s_T. Now let us note that each Jacobian matrix ∂s_T/∂s_t can be decomposed as follows:

∂s_T/∂s_t = (∂s_T/∂s_{T−1}) (∂s_{T−1}/∂s_{T−2}) ⋯ (∂s_{t+1}/∂s_t)    (8.5)

which is of the same form as Eq. 8.1 discussed above, i.e., a product that tends to either vanish or explode.

As a consequence, we see from Eq. 8.4 that ∂L_T/∂θ is a weighted sum of terms over spans T − t, with weights that are exponentially smaller (or larger) for longer-term dependencies relating the state at t to the state at T. As shown in Bengio et al. (1994), in order for a recurrent network to reliably store memories, the Jacobians ∂s_t/∂s_{t−1} relating each state to the next must have a determinant that is less than 1 (i.e., yielding the formation of attractors in the corresponding dynamical system). Hence, when the model is able to capture long-term dependencies it is also in a situation where gradients vanish and long-term dependencies have an exponentially smaller weight than short-term dependencies in the total gradient. It does not mean that it is impossible to learn, but that it might take a very long time to learn long-term dependencies, because the signal about these dependencies will tend to be hidden by the smallest fluctuations arising from
short-term dependencies. In practice, the experiments in Bengio et al. (1994) show that as we increase the span of the dependencies that need to be captured, gradient-based optimization becomes increasingly difficult, with the probability of successful learning rapidly reaching 0 after only 10 or 20 steps in the case of the vanilla recurrent net trained with stochastic gradient descent (Section 8.3.2). For a deeper treatment of the dynamical systems view of recurrent networks, consider Doya (1993); Bengio et al. (1994); Siegelmann and Sontag (1995), with a review in Pascanu et al. (2013a). Section 10.7 discusses various approaches that have been proposed to reduce the difficulty of learning long-term dependencies (in some cases allowing a model to capture dependencies spanning hundreds of steps), but it remains one of the main challenges in deep learning.
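Returning to the scalar argument made earlier in this section, the following numerical sketch (not from the text) illustrates how a product of many factors vanishes or explodes depending on whether the typical log-factor is negative or positive, both for identical factors and for random ones; all constants here are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.RandomState(0)

def typical_product_size(mean_log, T, n_samples=10000):
    """Typical (median) magnitude of a product of T random factors whose
    logarithms are i.i.d. normal with the given mean and unit variance."""
    log_products = rng.normal(mean_log, 1.0, size=(n_samples, T)).sum(axis=1)
    return np.exp(np.median(log_products))

for T in [1, 10, 50, 100]:
    print(T,
          0.9 ** T,                        # identical factors alpha < 1: vanishes
          1.1 ** T,                        # identical factors alpha > 1: explodes
          typical_product_size(-0.05, T),  # random factors, average log below 0: vanishes
          typical_product_size(+0.05, T))  # random factors, average log above 0: explodes
```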
8.3 Optimization Algorithms
In Sec. 6.4, we discussed the backpropagation algorithm (backprop): that is, how to efficiently compute the gradient of the loss with respect to the model parameters. The backpropagation algorithm does not specify how we use this gradient to update the weights of the model. In this section we introduce a number of gradient-based learning algorithms that have been proposed to optimize the parameters of deep learning models.
8.3.1 Gradient Descent
Gradient descent is the most basic gradient-based algorithm one might apply to train a deep model. The algorithm involves updating the model parameters θ (in the case of a deep neural network, these parameters would include the weights and biases associated with each layer) with a small step in the direction opposite to the gradient of the loss function (including any regularization terms). For the case of supervised learning with data pairs [x^{(t)}, y^{(t)}] we have:

θ ← θ − ε ∇_θ ∑_t L(f(x^{(t)}; θ), y^{(t)}; θ),    (8.6)

where ε is the learning rate, an optimization hyperparameter that controls the size of the step the parameters take. Of course, following the negative gradient in this way is only guaranteed to reduce the loss in the limit as ε → 0.
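A sketch of the batch gradient descent update of Eq. 8.6 for a linear model with squared error; the data, model, learning rate and number of steps are hypothetical placeholders used only to show the update rule in runnable form.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.randn(200)

def grad_total_loss(theta, X, y):
    """Gradient of the squared-error loss summed over the whole training set."""
    return 2 * X.T @ (X @ theta - y)

theta = np.zeros(3)
epsilon = 1e-3                                   # learning rate
for step in range(500):
    theta = theta - epsilon * grad_total_loss(theta, X, y)
print(theta)                                     # approaches [1.0, -2.0, 0.5]
```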
8.3.2 Stochastic Gradient Descent
As noted in Sec. 8.1.4, the objective function usually decomposes as a sum over the training examples, and optimization algorithms for machine learning typically compute each update from a subset of these terms. More generally, we are really interested in minimizing the expected loss with the expectation taken with respect to the data distribution, i.e. E_{X,Y}[L(f_θ(X), Y)], with X, Y ∼ P(X, Y). As discussed in Sec. 6.2, we replace this expectation with an average over the training data (e.g., for n examples):

expected loss = (1/n) ∑_{t=1}^{n} L(f(x^{(t)}), y^{(t)})    (8.7)

This form of the loss implies that the gradient also consists of an average of the gradient contributions for each data point:

(∂/∂θ) expected loss = (1/n) ∑_{t=1}^{n} (∂/∂θ) L(f(x^{(t)}), y^{(t)})    (8.8)

We can interpret the right hand side of Eq. 8.8 as an estimator of the gradient of the expected loss. Seen in this light, it is reasonable to think about the properties of this estimator, such as its mean and variance. Provided that there is a relatively large number of examples in the training set, computing the gradient over all examples in the training dataset (also known as batch gradient descent) would yield a relatively small variance on the estimate, but at a large computational cost per update. In application to training deep learning models, straightforward gradient descent, where each gradient step involves computing the gradient for all training examples, is well known to be inefficient. This is especially true when we are dealing with large datasets. Stochastic gradient descent (SGD) instead estimates the gradient from a single example, or from a small minibatch of examples, at each step, as described in Algorithm 8.1. The resulting gradient estimate is noisier than the batch gradient, but far cheaper to compute, and in practice the learning rate η must be chosen (and often decreased over time) to account for this noise.
Algorithm 8.1 Stochastic gradient descent (SGD) update at time t
Require: Learning rate η.
Require: Initial parameter θ
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Set g = 0
  for t = 1 to m do
    Compute gradient estimate: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Apply update: θ ← θ − ηg
end while
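A NumPy sketch of Algorithm 8.1 for a linear least-squares problem; the data, minibatch size and learning rate are illustrative placeholders, not values prescribed by the text.

```python
import numpy as np

rng = np.random.RandomState(0)
X = rng.randn(1000, 3)
y = X @ np.array([1.0, -2.0, 0.5]) + 0.01 * rng.randn(1000)

def minibatch_grad(theta, Xb, yb):
    """Average gradient of the squared-error loss over one minibatch."""
    return 2 * Xb.T @ (Xb @ theta - yb) / len(yb)

theta = np.zeros(3)
eta, batch_size = 0.05, 32
for epoch in range(10):
    perm = rng.permutation(len(y))            # reshuffle the training set each epoch
    for start in range(0, len(y), batch_size):
        idx = perm[start:start + batch_size]
        g = minibatch_grad(theta, X[idx], y[idx])
        theta = theta - eta * g               # the SGD update of Algorithm 8.1
print(theta)                                  # close to [1.0, -2.0, 0.5]
```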
8.3.3 Momentum
While stochastic gradient descent remains a very popular optimization strategy, learning with it can sometimes be slow. This is especially true in situations where the gradient is small but consistent across minibatches. From the consistency of the gradient, we know that we can afford to take larger steps in this direction, yet we have no way of really knowing when we are in such situations. The momentum method (Polyak, 1964) is designed to accelerate learning, especially in the face of small and consistent gradients. The intuition behind momentum, as the name suggests, is derived from a physical interpretation of the optimization process. Imagine a small ball (think of a marble) that represents the current position in parameter space (for our purposes here we can imagine a 2-D parameter space). Now consider that the ball is on a gentle slope: even though the instantaneous force pulling the ball downhill is relatively small, these contributions accumulate and the downhill velocity of the ball gradually increases over time. The momentum method is designed to inject this kind of downhill acceleration into gradient-based optimization. The decay factor α acts like a drag force proportional to the velocity: it makes the contribution of old gradients decay geometrically, which keeps the velocity bounded even when the gradient points in a consistent direction for many steps.

Formally, we introduce a variable v that plays the role of velocity (or momentum) and that accumulates the gradient. The update rule is given by:

v ← αv − η ∇_θ [ (1/n) ∑_{t=1}^{n} L(f(x^{(t)}; θ), y^{(t)}) ]
θ ← θ + v

where the hyperparameter α ∈ [0, 1) determines how quickly the contributions of previous gradients decay and η is the learning rate.
Algorithm 8.2 Stochastic gradient descent (SGD) with momentum
Require: Learning rate η, momentum parameter α.
Require: Initial parameter θ, initial velocity v.
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Set g = 0
  for t = 1 to m do
    Compute gradient estimate: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Compute velocity update: v ← αv − ηg
  Apply update: θ ← θ + v
end while

Nesterov momentum

Sutskever et al. (2013) introduced a variant of the momentum algorithm that was inspired by Nesterov's accelerated gradient method:

v ← αv − η ∇_θ [ (1/n) ∑_{t=1}^{n} L(f(x^{(t)}; θ + αv), y^{(t)}) ],
θ ← θ + v,

where the parameters α and η play a similar role as in the standard momentum method. The difference between Nesterov momentum and standard momentum is where the gradient is evaluated: with Nesterov momentum, the gradient is evaluated after the current velocity has been applied.

Algorithm 8.3 Stochastic gradient descent (SGD) with Nesterov momentum
Require: Learning rate η, momentum parameter α.
Require: Initial parameter θ, initial velocity v.
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Apply interim update: θ ← θ + αv
  Set g = 0
  for t = 1 to m do
    Compute gradient (at interim point): g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Compute velocity update: v ← αv − ηg
  Apply update: θ ← θ + v
end while
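A sketch of the velocity updates of Algorithms 8.2 and 8.3 on a simple quadratic objective; the objective, step counts and hyperparameter values are stand-ins chosen only to make the two variants runnable side by side.

```python
import numpy as np

def grad(theta):
    """Gradient of an ill-conditioned quadratic, standing in for a minibatch gradient."""
    A = np.diag([100.0, 1.0])
    return A @ theta

def momentum(theta, steps=200, eta=0.01, alpha=0.9, nesterov=False):
    v = np.zeros_like(theta)
    for _ in range(steps):
        lookahead = theta + alpha * v if nesterov else theta  # Nesterov evaluates ahead
        v = alpha * v - eta * grad(lookahead)                 # velocity update
        theta = theta + v
    return theta

theta0 = np.array([1.0, 1.0])
print(momentum(theta0))                  # standard momentum, approaches the minimum at 0
print(momentum(theta0, nesterov=True))   # Nesterov momentum
```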
8.3.4 AdaGrad
Algorithm 8.4 The Adagrad algorithm
Require: Global learning rate η
Require: Initial parameter θ
Initialize gradient accumulation variable r = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Set g = 0
  for t = 1 to m do
    Compute gradient: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Accumulate squared gradient: r ← r + g²
  Compute update: ∆θ ← −(η/√r) g   % (1/√r applied element-wise)
  Apply update: θ ← θ + ∆θ
end while
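A sketch of the per-parameter scaling in Algorithm 8.4; the small constant delta added to the denominator is a common numerical-stability tweak that is an addition here, not part of the pseudocode above, and the example gradients are hypothetical.

```python
import numpy as np

def adagrad_update(theta, g, r, eta=0.1, delta=1e-8):
    """One AdaGrad step: accumulate squared gradients, scale the step element-wise."""
    r = r + g ** 2                              # running sum of squared gradients
    theta = theta - eta * g / (np.sqrt(r) + delta)
    return theta, r

theta = np.array([1.0, 1.0])
r = np.zeros_like(theta)
for _ in range(100):
    g = np.array([10.0, 0.1]) * theta           # gradients whose scales differ 100-fold
    theta, r = adagrad_update(theta, g, r)
print(theta)   # both coordinates follow essentially the same trajectory,
               # despite the 100x difference in gradient magnitude
```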
8.3.5 RMSprop
Algorithm 8.5 The RMSprop algorithm
Require: Global learning rate η, decay rate ρ.
Require: Initial parameter θ
Initialize accumulation variable r = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Set g = 0
  for t = 1 to m do
    Compute gradient: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Accumulate squared gradient: r ← ρr + (1 − ρ)g²
  Compute parameter update: ∆θ = −(η/√r) g   % (1/√r applied element-wise)
  Apply update: θ ← θ + ∆θ
end while
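The same scaling idea as the AdaGrad sketch above, but with the exponentially decaying accumulator of Algorithm 8.5; only the accumulation line differs, and the constants are again illustrative.

```python
import numpy as np

def rmsprop_update(theta, g, r, eta=0.01, rho=0.9, delta=1e-8):
    """One RMSProp step: exponentially weighted average of squared gradients."""
    r = rho * r + (1.0 - rho) * g ** 2          # decaying accumulator, not a running sum
    theta = theta - eta * g / (np.sqrt(r) + delta)
    return theta, r

theta, r = np.array([1.0, 1.0]), np.zeros(2)
for _ in range(100):
    g = np.array([10.0, 0.1]) * theta
    theta, r = rmsprop_update(theta, g, r)
print(theta)
```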
8.3.6
8.3.7 No Pesky Learning Rates
Algorithm 8.6 RMSprop algorithm with Nesterov momentum
Require: Global learning rate η, decay rate ρ, momentum parameter α.
Require: Initial parameter θ, initial velocity v.
Initialize accumulation variable r = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Compute interim update: θ ← θ + αv
  Set g = 0
  for t = 1 to m do
    Compute gradient: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Accumulate squared gradient: r ← ρr + (1 − ρ)g²
  Compute velocity update: v ← αv − (η/√r) g   % (1/√r applied element-wise)
  Apply update: θ ← θ + v
end while
8.4 Approximate Natural Gradient and Second-Order Methods
8.5
8.6
BFGS
TONGA and “actual” NG, links with HF.
8.6.1 New
8.6.2 Optimization Strategies and Meta-Algorithms
8.6.3 Coordinate Descent
In some cases, it may be possible to solve an optimization problem quickly by breaking it into separate pieces. If we minimize f (x) with respect to a single variable xi, then minimize it with respect to another variable x j and so on, we are guaranteed to arrive at a (local) minimum. This practice is known as coordinate descent, because we optimize one coordinate at a time. More generally, block coordinate descent refers to minimizing with respect to a subset of the variables simultaneously. The term “coordinate descent” is often used to refer to block coordinate descent as well as the strictly individual coordinate descent.
Algorithm 8.7 The Adadelta algorithm
Require: Decay rate ρ, constant ε
Require: Initial parameter θ
Initialize accumulation variables r = 0, s = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Set g = 0
  for t = 1 to m do
    Compute gradient: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Accumulate gradient: r ← ρr + (1 − ρ)g²
  Compute update: ∆θ = −(√(s + ε)/√(r + ε)) g   % (operations applied element-wise)
  Accumulate update: s ← ρs + (1 − ρ)[∆θ]²
  Apply update: θ ← θ + ∆θ
end while

Coordinate descent makes the most sense when the different variables in the optimization problem can be clearly separated into groups that play relatively isolated roles, or when optimization with respect to one group of variables is significantly more efficient than optimization with respect to all of the variables. For example, the objective function most commonly used for sparse coding is not convex. However, we can divide the inputs to the training algorithm into two sets: the dictionary parameters and the code representations. Minimizing the objective function with respect to either one of these sets of variables is a convex problem. Block coordinate descent thus gives us an optimization strategy that allows us to use efficient convex optimization algorithms.

Coordinate descent is not a very good strategy when the value of one variable strongly influences the optimal value of another variable, as in the function f(x) = (x1 − x2)² + α(x1² + x2²), where α is a positive constant. As α approaches 0, coordinate descent ceases to make any progress at all, while Newton's method could solve the problem in a single step.
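A sketch of coordinate descent on the two-variable function just discussed. Each inner minimization has a closed form (obtained by setting the corresponding partial derivative to zero), and the progress made per sweep collapses as α approaches 0, exactly as described above.

```python
def coordinate_descent(alpha, sweeps=100, x1=1.0, x2=-1.0):
    """Alternately minimize f(x) = (x1 - x2)**2 + alpha*(x1**2 + x2**2)
    exactly with respect to x1 and then with respect to x2."""
    for _ in range(sweeps):
        x1 = x2 / (1.0 + alpha)   # argmin over x1 with x2 held fixed
        x2 = x1 / (1.0 + alpha)   # argmin over x2 with x1 held fixed
    return x1, x2

print(coordinate_descent(alpha=1.0))    # converges very quickly to the minimum (0, 0)
print(coordinate_descent(alpha=1e-4))   # barely moves toward (0, 0), even after 100 sweeps
```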
8.6.4 Greedy Supervised Pre-training
TODO
Algorithm 8.8 The vSGD-1 algorithm from Schaul et al. (2012)
Require: Initial parameter θ0
Initialize accumulation variables q = 0, r = 0, s = 0
while stopping criterion not met do
  Sample a minibatch of m examples from the training set {x^(1), . . . , x^(m)}.
  Initialize the gradient g = 0
  for t = 1 to m do
    Compute gradient: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for
  Accumulate gradient: q ← ρq + (1 − ρ)g
  Accumulate squared gradient: r ← ρr + (1 − ρ)g²
  Accumulate curvature estimate: s ← ρs + (1 − ρ) bbprop(θ)
  Estimate learning rate (element-wise): η* ← q² / (s r)
  Update memory size: ρ ← (q²/r)^{-1}(1 − ρ)
  Compute update: ∆θ = −η* g
  % All operations above should be interpreted as element-wise.
  Apply update: θ ← θ + ∆θ
end while
8.7 Hints, Global Optimization and Curriculum Learning
Most of the work on numerical optimization for machine learning, and deep learning in particular, is focused on local descent, i.e., on how to locally improve the objective function efficiently. What the experiments with different initialization strategies tell us is that local descent methods can get stuck, presumably near a local minimum or near a saddle point (i.e., where gradients are small), so that the initial value of the parameters can matter a lot. As an illustration of this issue, consider the experiments reported by Gülçehre and Bengio (2013), where a learning task is set up so that if the lower half of the deep supervised network is pre-trained with respect to an appropriate sub-task, the whole network can learn to solve the overall task, whereas random initialization almost always fails. In these experiments, we know that the overall task can be decomposed into two sub-tasks: (1) identifying the presence of different objects in an input image and (2) verifying whether the different objects detected are of the same class or not. Each of these two tasks (object recognition, exclusive-or) is known to be learnable on its own, but when we compose them, it is much more difficult
Algorithm 8.9 Conjugate gradient method
Require: Initial parameters θ0
Initialize ρ0 = 0
while stopping criterion not met do
  Initialize the gradient g = 0
  for t = 1 to n do   % loop over the training set
    Compute gradient: g ← g + ∇_θ L(f(x^(t); θ), y^(t))
  end for   % (gradient computed via batch backpropagation)
  Compute β_t = ((g_t − g_{t−1})^⊤ g_t) / (g_{t−1}^⊤ g_{t−1})   (Polak-Ribière)
  Compute search direction: ρ_t = −g_t + β_t ρ_{t−1}
  Perform line search to find: η* = argmin_η J(θ_t + ηρ_t)
  Apply update: θ_{t+1} = θ_t + η* ρ_t
end while
Algorithm 8.10 BFGS method
Require: Initial parameters θ0
Initialize inverse Hessian M0 = I
while stopping criterion not met do
  Compute gradient: g_t = ∇J(θ_t)   (via batch backpropagation)
  Compute φ = g_t − g_{t−1}, ∆ = θ_t − θ_{t−1}
  Approximate the inverse Hessian:
    M_t = M_{t−1} + (1 + (φ^⊤ M_{t−1} φ)/(∆^⊤ φ)) (∆∆^⊤)/(∆^⊤ φ) − (∆φ^⊤ M_{t−1} + M_{t−1} φ∆^⊤)/(∆^⊤ φ)
  Compute search direction: ρ_t = −M_t g_t
  Perform line search to find: η* = argmin_η J(θ_t + ηρ_t)
  Apply update: θ_{t+1} = θ_t + η* ρ_t
end while

to optimize the neural network (including a large variety of architectures), while other methods such as SVMs, boosting and decision trees also fail. This is an instance where the optimization difficulty was solved by introducing prior knowledge in the form of hints, specifically hints about what the intermediate layer in a deep net should be doing. We have already seen in Section 8.6.4 that a useful strategy is to ask the hidden units to extract features that are useful to the supervised task at hand, with greedy supervised pre-training. In Section 16.1 we will discuss an unsupervised version of this idea, where we ask the intermediate layers to extract features that are good at explaining the variations in the input, without reference to a specific supervised task. Another related line of work is the FitNets (Romero et al., 2015), where the middle layer of a 5-layer supervised teacher network is used as a hint to be predicted by the middle layer of a much deeper student network (11 to 19 layers). In that case, additional parameters are introduced
to regress the middle layer of the 5-layer teacher network from the middle layer of the deeper student network. The lower layers of the student network thus get two objectives: help the outputs of the student network accomplish their task, as well as predict the intermediate layer of the teacher network. Although a deeper network is usually more difficult to optimize, it can generalize better (it has to extract these more abstract and non-linear features). Romero et al. (2015) were motivated by the fact that a deep student network with a smaller number of hidden units per layer can have far fewer parameters (and faster computation) than a fatter shallower network and yet achieve the same or better generalization, thus allowing a trade-off between better generalization (with 3 times fewer parameters) and faster test-time computation (up to 10 fold, in the paper, using a very thin and deep network with 35 times fewer parameters). Without the hints on the hidden layer, the student network performed very poorly in the experiments, both on the training and test set.

These drastic effects of initialization and hints to middle layers bring forth the question of what is sometimes called global optimization (Horst et al., 2000), the main subject of this section. The objective of global optimization methods is to find better solutions than local descent minimizers, i.e., ideally find a global minimum of the objective function and not simply a local minimum. If one could restart a local optimization method from a very large number of initial conditions, one could imagine that the global minimum could be found, but there are more efficient approaches.

Two fairly general approaches to global optimization are continuation methods (Wu, 1997), a deterministic approach, and simulated annealing (Kirkpatrick et al., 1983), a stochastic approach. They both proceed from the intuition that if we sufficiently blur a non-convex objective function (e.g. convolve it with a Gaussian) whose global minima are not at infinite values, then it becomes convex, and finding the global optimum of that blurred objective function should be much easier. As illustrated in Figure 8.5, by gradually changing the objective function from a very blurred, easy to optimize version to the original crisp and difficult objective function, we are actually likely to find better local minima. In the case of simulated annealing, the blurring occurs because of injecting noise. With injected noise, the state of the system can sometimes go uphill, and thus does not necessarily get stuck in a local minimum. With a lot of noise, the effective objective function (averaged over the noise) is flatter and convex, and if the amount of noise is reduced sufficiently slowly, then one can show convergence to the global minimum. However, the annealing schedule (the rate at which the noise level is decreased, or equivalently the temperature is decreased when you think of the physical annealing analogy) might need to be extremely slow, so an NP-hard optimization problem remains NP-hard.
[Figure 8.5 annotations: "Easy to find minimum"; "Track local minima"; "Final solution".]
Figure 8.5: Optimization based on continuation methods: start by optimizing a smoothed out version of the target objective function (possibly convex), then gradually reduce the amount of smoothing while tracking the local optimum. This approach tends to find better local minima than a straight local descent approach on the target objective function. Curriculum learning (starting from easy examples and gradually introducing with higher probability more difficult examples) can be justified under that light (Bengio et al., 2009).
Continuation methods have been extremely successful in recent years: see Mobahi and Fisher III (2015) for an overview of the recent literature, especially for AI applications. Continuation methods define a family of objective functions, indexed by a single scalar index λ, with an easy to optimize objective function at one end (usually convex, say λ = 1) and the target objective at the other end (say λ = 0). The idea is to first find the solution for the easy problem (λ = 1) and then gradually decrease λ towards the more difficult objectives, while tracking the minimum. Curriculum learning (Bengio et al., 2009) was introduced as a general strategy for machine learning that is inspired by how humans learn, starting by learning to solve simple tasks, and then exploiting what has been learned to learn slightly more difficult and abstract tasks, etc. It was justified as a continuation method (Bengio et al., 2009) in the context of deep learning, where it was previously observed that the optimization problem can be challenging. Experiments showed that better results could be obtained by following a curriculum, in particular on a large-scale neural language modeling task. One view on curriculum learning introduced in that paper is that a particular intermediate objective function corresponds to a reweighting of the examples: initially the easy to learn examples are given more weight or a higher probability, and harder examples
see their weight or probability gradually increased as the learner gets sufficiently ready to learn them. The idea of curriculum learning to help train difficult to optimize models has been taken up successfully not only in natural language tasks (Spitkovsky et al., 2010; Collobert et al., 2011a; Mikolov et al., 2011b; Tu and Honavar, 2011) but also in computer vision (Kumar et al., 2010; Lee and Grauman, 2011; Supancic and Ramanan, 2013). It was also found to be consistent with the way in which humans teach (Khan et al., 2011): they start by showing easier and more prototypical examples and then help the learner refine the decision surface with the less obvious cases. In agreement with this, it was found that such strategies are more effective when teaching to humans (Basu and Christensen, 2013). Another important contribution to research on curriculum learning arose in the context of training recurrent neural networks to capture long-term dependencies (Zaremba and Sutskever, 2014): it was found that much better results were obtained with a stochastic curriculum, in which a random mix of easy and difficult examples is always presented to the learner, but where the average proportion of the more difficult examples (here, those with longer-term dependencies) is gradually increased. With a deterministic curriculum, in contrast, no improvement over the baseline (ordinary training from the full training set) was observed.
Chapter 9
Convolutional Networks

Convolutional networks (also known as convolutional neural networks or CNNs) are a specialized kind of neural network for processing data that has a known, grid-like topology. Examples include time-series data, which can be thought of as a 1D grid taking samples at regular time intervals, and image data, which can be thought of as a 2D grid of pixels. Convolutional networks have been tremendously successful in practical applications (the specifics of several of these applications will be explained in Section 12.2.2). The name “convolutional neural network” indicates that the network employs a mathematical operation called convolution. Convolution is a specialized kind of linear operation. Convolutional networks are simply neural networks that use convolution in place of general matrix multiplication in at least one of their layers.

In this chapter, we will first describe what convolution is. Next, we will explain the motivation behind using convolution in a neural network. We will then describe an operation called pooling, which almost all convolutional networks employ. Usually, the operation used in a convolutional neural network does not correspond precisely to the definition of convolution as used in other fields such as engineering or pure mathematics. We will describe several variants on the convolution function that are widely used in practice for neural networks. We will also show how convolution may be applied to many kinds of data, with different numbers of dimensions. We then discuss means of making convolution more efficient. We conclude with comments about the role convolutional networks have played in the history of deep learning.
9.1 The Convolution Operation
In its most general form, convolution is an operation on two functions of a real-valued argument. To motivate the definition of convolution, let's start with examples
of two functions we might use. Suppose we are tracking the location of a spaceship with a laser sensor. Our laser sensor provides a single output x(t), the position of the spaceship at time t. Both x and t are real-valued, i.e., we can get a different reading from the laser sensor at any instant in time.

Now suppose that our laser sensor is somewhat noisy. To obtain a less noisy estimate of the spaceship's position, we would like to average together several measurements. Of course, more recent measurements are more relevant, so we will want this to be a weighted average that gives more weight to recent measurements. We can do this with a weighting function w(a), where a is the age of a measurement. If we apply such a weighted average operation at every moment, we obtain a new function s providing a smoothed estimate of the position of the spaceship:

s(t) = ∫ x(a) w(t − a) da

This operation is called convolution. The convolution operation is typically denoted with an asterisk:

s(t) = (x ∗ w)(t)
In our example, w needs to be a valid probability density function, or the output is not a weighted average. Also, w needs to be 0 for all negative arguments, or it will look into the future, which is presumably beyond our capabilities. These limitations are particular to our example though. In general, convolution is defined for any functions for which the above integral is defined, and may be used for other purposes besides taking weighted averages.

In convolutional network terminology, the first argument (in this example, the function x) to the convolution is often referred to as the input and the second argument (in this example, the function w) as the kernel. The output is sometimes referred to as the feature map.

In our example, the idea of a laser sensor that can provide measurements at every instant in time is not realistic. Usually, when we work with data on a computer, time will be discretized, and our sensor will provide data at regular intervals. In our example, it might be more realistic to assume that our laser provides one measurement per second. The time index t can then take on only integer values. If we now assume that x and w are defined only on integer t, we can define the discrete convolution:

s[t] = (x ∗ w)[t] = ∑_{a=−∞}^{∞} x[a] w[t − a]
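A sketch of the discrete convolution above, used as the weighted average in the spaceship example; the measurements and the kernel w (a hypothetical set of normalized weights over the three most recent readings) are illustrative values only.

```python
import numpy as np

x = np.array([0.0, 1.1, 2.2, 2.9, 4.2, 5.0, 6.1])  # noisy position readings, one per second
w = np.array([0.5, 0.3, 0.2])                      # weights on the current and two past readings

def discrete_convolution(x, w):
    """s[t] = sum_a x[a] * w[t - a], computed only where the kernel fully overlaps x."""
    k = len(w)
    return np.array([np.dot(x[t - k + 1:t + 1], w[::-1]) for t in range(k - 1, len(x))])

print(discrete_convolution(x, w))
print(np.convolve(x, w, mode="valid"))             # NumPy's convolution gives the same result
```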
In machine learning applications, the input is usually a multidimensional array of data and the kernel is usually a multidimensional array of learnable parameters. We will refer to these multidimensional arrays as tensors. Because each element of the input and kernel must be explicitly stored separately, we usually assume that these functions are zero everywhere but the finite set of points for which we store the values. This means that in practice we can implement the infinite summation as a summation over a finite number of array elements.

Finally, we often use convolutions over more than one axis at a time. For example, if we use a two-dimensional image I as our input, we probably also want to use a two-dimensional kernel K:

s[i, j] = (I ∗ K)[i, j] = ∑_m ∑_n I[m, n] K[i − m, j − n]

Note that convolution is commutative, meaning we can equivalently write:

s[i, j] = (I ∗ K)[i, j] = ∑_m ∑_n I[i − m, j − n] K[m, n]

Usually the latter view is more straightforward to implement in a machine learning library, because there is less variation in the range of valid values of m and n. While the commutative property is useful for writing proofs, it is not usually an important property of a neural network implementation. Instead, many neural network libraries implement a related function called the cross-correlation, which is the same as convolution but without flipping the kernel:

s[i, j] = (I ∗ K)[i, j] = ∑_m ∑_n I[i + m, j + n] K[m, n]
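A sketch of "valid" 2-D cross-correlation as defined above; flipping the kernel before calling it yields the corresponding convolution. The small input and kernel are arbitrary illustrative values.

```python
import numpy as np

def cross_correlate2d(I, K):
    """Valid cross-correlation: S[i, j] = sum_{m,n} I[i+m, j+n] * K[m, n]."""
    kh, kw = K.shape
    out_h, out_w = I.shape[0] - kh + 1, I.shape[1] - kw + 1
    S = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            S[i, j] = np.sum(I[i:i + kh, j:j + kw] * K)
    return S

def convolve2d(I, K):
    """Convolution differs from cross-correlation only by flipping the kernel."""
    return cross_correlate2d(I, K[::-1, ::-1])

I = np.arange(12, dtype=float).reshape(3, 4)   # a 3x4 "image"
K = np.array([[1.0, 0.0], [0.0, -1.0]])        # a 2x2 kernel
print(cross_correlate2d(I, K))
print(convolve2d(I, K))
```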
Many machine learning libraries implement cross-correlation but call it convolution. In this text we will follow this convention of calling both operations convolution, and specify whether we mean to flip the kernel or not in contexts where kernel flipping is relevant. See Fig. 9.1 for an example of convolution (without kernel flipping) applied to a 2-D tensor. Discrete convolution can be viewed as multiplication by a matrix. However, the matrix has several entries constrained to be equal to other entries. For example, for univariate discrete convolution, each row of the matrix is constrained to be equal to the row above shifted by one element. This is known as a Toeplitz matrix. In two dimensions, a doubly block circulant matrix corresponds to convolution. In addition to these constraints that several elements be equal to each other, convolution usually corresponds to a very sparse matrix (a matrix whose entries are
Input (3×4):       Kernel (2×2):
a b c d            w x
e f g h            y z
i j k l

Output (2×3):
aw+bx+ey+fz   bw+cx+fy+gz   cw+dx+gy+hz
ew+fx+iy+jz   fw+gx+jy+kz   gw+hx+ky+lz
Figure 9.1: An example of 2-D convolution without kernel-flipping. In this case we restrict the output to only positions where the kernel lies entirely within the image, called “valid” convolution in some contexts. We draw boxes with arrows to indicate how the upper-left element of the output tensor is formed by applying the kernel to the corresponding upper-left region of the input tensor.
mostly equal to zero). This is because the kernel is usually much smaller than the input image. Viewing convolution as matrix multiplication usually does not help to implement convolution operations, but it is useful for understanding and designing neural networks. Any neural network algorithm that works with matrix multiplication and does not depend on specific properties of the matrix structure should work with convolution, without requiring any further changes to the neural network. Typical convolutional neural networks do make use of further specializations in order to deal with large inputs efficiently, but these are not strictly necessary from a theoretical perspective.
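A sketch of the matrix-multiplication view for a 1-D "valid" case: each row of the (hypothetical) matrix is the previous row shifted by one position, so most entries are zero and the nonzero entries are shared, exactly the banded Toeplitz structure described above. The operation implemented is cross-correlation, which this chapter calls convolution.

```python
import numpy as np

def conv_matrix(kernel, input_len):
    """Matrix M such that M @ x equals the valid cross-correlation of x with the
    kernel (the operation this chapter calls convolution without kernel flipping).
    Each row is the previous row shifted right by one position."""
    k = len(kernel)
    out_len = input_len - k + 1
    M = np.zeros((out_len, input_len))
    for i in range(out_len):
        M[i, i:i + k] = kernel
    return M

w = np.array([1.0, -1.0])                       # an edge-detector-like kernel
x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
M = conv_matrix(w, len(x))
print(M)        # sparse: only two nonzeros per row, and the values are shared across rows
print(M @ x)    # same result as sliding the kernel across x
```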
9.2 Motivation
Convolution leverages three important ideas that can help improve a machine learning system: sparse interactions, parameter sharing, and equivariant representations. Moreover, convolution provides a means for working with inputs of variable size. We now describe each of these ideas in turn. Traditional neural network layers use a matrix multiplication to describe the interaction between each input unit and each output unit. This means every output unit interacts with every input unit. Convolutional networks, however, typically have sparse interactions (also referred to as sparse connectivity or sparse weights). This is accomplished by making the kernel smaller than the input. For example, when processing an image, the input image might have thousands or millions of pixels, but we can detect small, meaningful features such as edges with kernels that occupy only tens or hundreds of pixels. This means that we need to store fewer parameters, which both reduces the memory requirements of the model and improves its statistical efficiency. It also means that computing the output requires fewer operations. These improvements in efficiency are usually quite large. If there are m inputs and n outputs, then matrix multiplication requires m × n parameters and the algorithms used in practice have O(m × n) runtime (per example). If we limit the number of connections each output may have to k, then the sparsely connected approach requires only k × n parameters and O(k × n) runtime. For many practical applications, it is possible to obtain good performance on the machine learning task while keeping k several orders of magnitude smaller than m. For graphical demonstrations of sparse connectivity, see Fig. 9.2 and Fig. 9.3. In a deep convolutional network, units in the deeper layers may indirectly interact with a larger portion of the input, as shown in Fig. 9.4. This allows the network to efficiently describe complicated interactions between many variables by constructing such interactions from simple building blocks that each describe only sparse interactions. Parameter sharing refers to using the same parameter for more than one
Figure 9.2: Sparse connectivity, viewed from below: We highlight one input unit, X3, and also highlight the output units in S that are affected by this unit. (Left) When S is formed by convolution with a kernel of width 3, only three outputs are affected by X3. (Right) When S is formed by matrix multiplication, connectivity is no longer sparse, so all of the outputs are affected by X3.
Figure 9.3: Sparse connectivity, viewed from above: We highlight one output unit, S3, and also highlight the input units in X that affect this unit. These units are known as the receptive field of S3. (Left) When S is formed by convolution with a kernel of width 3, only three inputs affect S3. (Right) When S is formed by matrix multiplication, connectivity is no longer sparse, so all of the inputs affect S3.
[Figure 9.4 depicts a three-layer network with input units x1–x5, hidden units h1–h5, and deeper units g1–g5.]
Figure 9.4: The receptive field of the units in the deeper layers of a convolutional network is larger than the receptive field of the units in the shallow layers. This effect increases if the network includes architectural features like strided convolution or pooling. This means that even though direct connections in a convolutional net are very sparse, units in the deeper layers can be indirectly connected to all or most of the input image.
Figure 9.5: Parameter sharing: We highlight the connections that use a particular parameter in two different models. (Left) We highlight uses of the central element of a 3-element kernel in a convolutional model. Due to parameter sharing, this single parameter is used at all input locations. (Right) We highlight the use of the central element of the weight matrix in a fully connected model. This model has no parameter sharing so the parameter is used only once.
function in a model. In a traditional neural net, each element of the weight matrix is used exactly once when computing the output of a layer. It is multiplied by one element of the input, and then never revisited. As a synonym for parameter sharing, one can say that a network has tied weights, because the value of the weight applied to one input is tied to the value of a weight applied elsewhere. In a convolutional neural net, each member of the kernel is used at every position of the input (except perhaps some of the boundary pixels, depending on the design decisions regarding the boundary). The parameter sharing used by the convolution operation means that rather than learning a separate set of parameters for every location, we learn only one set. This does not affect the runtime of forward propagation (it is still O(k × n)), but it does further reduce the storage requirements of the model to k parameters. Recall that k is usually several orders of magnitude less than m. Since m and n are usually roughly the same size, k is practically insignificant compared to m × n. Convolution is thus dramatically more efficient than dense matrix multiplication in terms of the memory requirements and statistical efficiency. For a graphical depiction of how parameter sharing works, see Fig. 9.5. As an example of both of these first two principles in action, Fig. 9.6 shows how sparse connectivity and parameter sharing can dramatically improve the efficiency of a linear function for detecting edges in an image.

In the case of convolution, the particular form of parameter sharing causes the layer to have a property called equivariance to translation. To say a function is equivariant means that if the input changes, the output changes in the same way.
Specifically, a function f(x) is equivariant to a function g if f(g(x)) = g(f(x)). In the case of convolution, if we let g be any function that translates the input, i.e., shifts it, then the convolution function is equivariant to g. For example, define g(x) such that for all i, g(x)[i] = x[i − 1]. This shifts every element of x one unit to the right. If we apply this transformation to x, then apply convolution, the result will be the same as if we applied convolution to x, then applied the transformation to the output. When processing time-series data, this means that convolution produces a sort of timeline that shows when different features appear in the input. If we move an event later in time in the input, the exact same representation of it will appear in the output, just later in time. Similarly with images, convolution creates a 2-D map of where certain features appear in the input. If we move the object in the input, its representation will move the same amount in the output. This is useful when we know that the same local function is useful everywhere in the input. For example, when processing images, it is useful to detect edges in the first layer of a convolutional network, and an edge looks the same regardless of where it appears in the image. This property is not always useful. For example, if we want to recognize a face, some portion of the network needs to vary with spatial location, because the top of a face does not look the same as the bottom of a face: the part of the network processing the top of the face needs to look for eyebrows, while the part of the network processing the bottom of the face needs to look for a chin. Note that convolution is not equivariant to some other transformations, such as changes in the scale or rotation of an image. Other mechanisms are necessary for handling these kinds of transformations. Finally, some kinds of data cannot be processed by neural networks defined by matrix multiplication with a fixed-shape matrix. Convolution enables processing of some of these kinds of data. We discuss this further in section 9.8.
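A small numerical check of the equivariance property just stated, using a circular (wrap-around) variant of cross-correlation so that the shift is exact and border effects do not obscure the comparison; the arrays and kernel are arbitrary illustrative values.

```python
import numpy as np

def circular_correlate(x, w):
    """Cross-correlation with wrap-around, so that circular shifts commute exactly."""
    n = len(x)
    return np.array([sum(x[(t + m) % n] * w[m] for m in range(len(w))) for t in range(n)])

x = np.array([0.0, 1.0, 3.0, 2.0, 0.0, 0.0])
w = np.array([1.0, -1.0])
shift = lambda v: np.roll(v, 1)            # g(x): shift every element one step to the right

print(circular_correlate(shift(x), w))     # f(g(x))
print(shift(circular_correlate(x, w)))     # g(f(x)) -- identical, so f is equivariant to g
```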
9.3 Pooling
A typical layer of a convolutional network consists of three stages (see Fig. 9.7). In the first stage, the layer performs several convolutions in parallel to produce a set of presynaptic activations. In the second stage, each presynaptic activation is run through a nonlinear activation function, such as the rectified linear activation function. This stage is sometimes called the detector stage. In the third stage, we use a pooling function to modify the output of the layer further. A pooling function replaces the output of the net at a certain location with a summary statistic of the nearby outputs. For example, the max pooling operation reports the maximum output within a rectangular neighborhood. Other popular pooling functions include the average of a rectangular neighborhood, the L2 norm
Figure 9.6: Efficiency of edge detection. The image on the right was formed by taking each pixel in the original image and subtracting the value of its neighboring pixel on the left. This shows the strength of all of the vertically oriented edges in the input image, which can be a useful operation for object detection. Both images are 280 pixels tall. The input image is 320 pixels wide while the output image is 319 pixels wide. This transformation can be described by a convolution kernel containing 2 elements, and requires 319 × 280 × 3 = 267, 960 floating point operations (two multiplications and one addition per output pixel) to compute. To describe the same transformation with a matrix multiplication would take 320 × 280 × 319 × 280, or over 8 billion, entries in the matrix, making convolution 4 billion times more efficient for representing this transformation. The straightforward matrix multiplication algorithm performs over 16 billion floating point operations, making convolution roughly 60,000 times more efficient computationally. Of course, most of the entries of the matrix would be zero. If we stored only the nonzero entries of the matrix, then both matrix multiplication and convolution would require the same number of floating point operations to compute. The matrix would still need to contain 2 × 319 × 280 = 178, 640 entries. Convolution is an extremely efficient way of describing transformations that apply the same linear transformation of a small, local region across the entire input. (Photo credit: Paula Goodfellow)
Figure 9.7: The components of a typical convolutional neural network layer. There are two commonly used sets of terminology for describing these layers. Left) In this terminology, the convolutional net is viewed as a small number of relatively complex layers, with each layer having many “stages.” In this terminology, there is a one-to-one mapping between kernel tensors and network layers. In this book we generally use this terminology. Right) In this terminology, the convolutional net is viewed as a larger number of simple layers; every step of processing is regarded as a layer in its own right. This means that not every “layer” has parameters.
Figure 9.8: Max pooling introduces invariance. Left: A view of the middle of the output of a convolutional layer. The bottom row shows outputs of the nonlinearity. The top row shows the outputs of max pooling, with a stride of 1 between pooling regions and a pooling region width of 3. Right: A view of the same network, after the input has been shifted to the right by 1 pixel. Every value in the bottom row has changed, but only half of the values in the top row have changed, because the max pooling units are only sensitive to the maximum value in the neighborhood, not its exact location.
of a rectangular neighborhood, or a weighted average based on the distance from the central pixel. In all cases, pooling helps to make the representation become invariant to small translations of the input. This means that if we translate the input by a small amount, the values of most of the pooled outputs do not change. See Fig. 9.8 for an example of how this works. Invariance to local translation can be a very useful property if we care more about whether some feature is present than exactly where it is. For example, when determining whether an image contains a face, we need not know the location of the eyes with pixel-perfect accuracy; we just need to know that there is an eye on the left side of the face and an eye on the right side of the face. In other contexts, it is more important to preserve the location of a feature. For example, if we want to find a corner defined by two edges meeting at a specific orientation, we need to preserve the location of the edges well enough to test whether they meet. The use of pooling can be viewed as adding an infinitely strong prior that the function the layer learns must be invariant to small translations. When this assumption is correct, it can greatly improve the statistical efficiency of the network. Pooling over spatial regions produces invariance to translation, but if we pool over the outputs of separately parametrized convolutions, the features can learn which transformations to become invariant to (see Fig. 9.9). Because pooling summarizes the responses over a whole neighborhood, it is possible to use fewer pooling units than detector units, by reporting summary statistics for pooling regions spaced k pixels apart rather than 1 pixel apart. See Fig. 9.10 for an example. This improves the computational efficiency of the network because the next layer has roughly k times fewer inputs to process. When
Figure 9.9: Example of learned invariances: If each of these filters drive units that appear in the same max-pooling region, then the pooling unit will detect “5”s in any rotation. By learning to have each filter be a different rotation of the “5” template, this pooling unit has learned to be invariant to rotation. This is in contrast to translation invariance, which is usually achieved by hard-coding the net to pool over shifted versions of a single learned filter.
Figure 9.10: Pooling with downsampling. Here we use max-pooling with a pool width of 3 and a stride between pools of 2. This reduces the representation size by a factor of 2, which reduces the computational and statistical burden on the next layer. Note that the final pool has a smaller size, but must be included if we do not want to ignore some of the detector units.
the number of parameters in the next layer is a function of its input size (such as when the next layer is fully connected and based on matrix multiplication) this reduction in the input size can also result in improved statistical efficiency and reduced memory requirements for storing the parameters. TODO: figure resembling http://deeplearning.net/tutorial/lenet.html#the-full-m e.g. show a representative example of a net with multiple layers, different numbers of filters at each layer, different spatial sizes as you go deeper For many tasks, pooling is essential for handling inputs of varying size. For example, if we want to classify images of variable size, the input to the classification layer must have a fixed size. This is usually accomplished by varying the size of and offset between pooling regions so that the classification layer always 260
receives the same number of summary statistics regardless of the input size. For example, the final pooling layer of the network may be defined to output four sets of summary statistics, one for each quadrant of an image, regardless of the image size. TODO: add figure showing a classifier network with a fully connected layer, and then one with global average pooling. Some theoretical work gives guidance as to which kinds of pooling one should use in various situations (Boureau et al., 2010). It is also possible to dynamically pool features together, for example, by running a clustering algorithm on the locations of interesting features (Boureau et al., 2011). This approach yields a different set of pooling regions for each image. Another approach is to learn a single pooling structure that is then applied to all images (Jia et al., 2012). Pooling can complicate some kinds of neural network architectures that use top-down information, such as Boltzmann machines and autoencoders. These issues will be discussed further when we present these types of networks. Pooling in convolutional Boltzmann machines is presented in Chapter 20.7. The inverselike operations on pooling units needed in some differentiable networks will be covered in Chapter 20.9.6.
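As a rough illustration of the invariance and downsampling arguments above, here is a toy max-pooling sketch in NumPy; the helper max_pool_1d, the pool width of 3, and the example values are assumptions of this sketch, loosely mirroring Figures 9.8 and 9.10.

```python
import numpy as np

def max_pool_1d(detector, width=3, stride=1):
    """Max pooling over a 1-D row of detector-stage outputs."""
    pooled = []
    for start in range(0, len(detector), stride):
        pooled.append(detector[start:start + width].max())
        if start + width >= len(detector):
            break                          # the final pool may cover fewer units (Fig. 9.10)
    return np.array(pooled)

detector = np.array([0.1, 1.0, 0.2, 0.1, 0.0, 0.1])
shifted = np.roll(detector, 1)             # the same features, shifted right by one position

print(max_pool_1d(detector))               # [1.0, 1.0, 0.2, 0.1]
print(max_pool_1d(shifted))                # only some pooled values change: local invariance
print(max_pool_1d(detector, stride=2))     # stride 2 also downsamples the representation
```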
9.4
Convolution and Pooling as an Infinitely Strong Prior
Recall the concept of a prior probability distribution from Chapter 5.3. This is a probability distribution over the parameters of a model that encodes our beliefs about what models are reasonable, before we have seen any data. Priors can be considered weak or strong depending on how concentrated the probability density in the prior is. A weak prior is a prior distribution with high entropy, such a Gaussian distribution with high variance. Such a prior allows the data to move the parameters more or less freely. A strong prior has very low entropy, such as a Gaussian distribution with low variance. Such a prior plays a more active role in determining where the parameters end up. An infinitely strong prior places zero probability on some parameters and says that these parameter values are completely forbidden, regardless of how much support the data gives to those values. We can imagine a convolutional net as being similar to a fully connected net, but with an infinitely strong prior over its weights. This infinitely strong prior says that the weights for one hidden unit must be identical to the weights of its neighbor, but shifted in space. The prior also says that the weights must be zero, except for in the small, spatially contiguous receptive field assigned to that hidden unit. Overall, we can think of the use of convolution as introducing an infinitely strong prior probability distribution over the parameters of a layer. This prior 261
says that the function the layer should learn contains only local interactions and is equivariant to translation. Likewise, the use of pooling is in infinitely strong prior that each unit should be invariant to small translations. Of course, implementing a convolutional net as a fully connected net with an infinitely strong prior would be extremely computationally wasteful. But thinking of a convolutional net as a fully connected net with an infinitely strong prior can give us some insights into how convolutional nets work. This view of convolution and pooling as an infinitely strong prior gives a few insights into how convolution and pooling work. One key insight is that convolution and pooling can cause underfitting. Like any prior, convolution and pooling are only useful when the assumptions made by the prior are reasonably accurate. If a task relies on preserving precision spatial information, then using pooling on all features can cause underfitting. (Some convolution network architectures (Szegedy et al., 2014a) are designed to use pooling on some channels but not on other channels, in order to get both highly invariant features and features that will not underfit when the translation invariance prior is incorrect) When a task involves incorporating information from very distant locations in the input, then the prior imposed by convolution may be inappropriate. Another key insight from this view is that we should only compare convolutional models to other convolutional models in benchmarks of statistical learning performance. Models that do not use convolution would be able to learn even if we permuted all of the pixels in the image. For many image datasets, there are separate benchmarks for models that are permutation invariant and must discover the concept of topology via learning, and models that have the knowledge of spatial relationships hard-coded into them by their designer.
9.5
Variants of the Basic Convolution Function
When discussing convolution in the context of neural networks, we usually do not refer exactly to the standard discrete convolution operation as it is usually understood in the mathematical literature. The functions used in practice differ slightly. Here we describe these differences in detail, and highlight some useful properties of the functions used in neural networks. First, when we refer to convolution in the context of neural networks, we usually actually mean an operation that consists of many applications of convolution in parallel. This is because convolution with a single kernel can only extract one kind of feature, albeit at many spatial locations. Usually we want each layer of our network to extract many kinds of features, at many locations. Additionally, the input is usually not just a grid of real values. Rather, it is a 262
grid of vector-valued observations. For example, a color image has a red, green, and blue intensity at each pixel. In a multilayer convolutional network, the input to the second layer is the output of the first layer, which usually has the output of many different convolutions at each position. When working with images, we usually think of the input and output of the convolution as being 3-D tensors, with one index into the different channels and two indices into the spatial coordinates of each channel. (Software implementations usually work in batch mode, so they will actually use 4-D tensors, with the fourth axis indexing different examples in the batch) Note that because convolutional networks usually use multi-channel convolution, the linear operations they are based on are not guaranteed to be commutative, even if kernel-flipping is used. These multi-channel operations are only commutative if each operation has the same number of output channels as input channels. Assume we have a 4-D kernel tensor K with element Ki,j,k,l giving the connection strength between a unit in channel i of the output and a unit in channel j of the input, with an offset of k rows and l columns between the output unit and the input unit. Assume our input consists of observed data V with element V i,j,k giving the value of the input unit within channel i at row j and column k. Assume our output consists of Z with the same format as V. If Z is produced by convolving K across V without flipping K, then Zi,j,k =
\sum_{l,m,n} V_{l, j+m, k+n} K_{i,l,m,n}
where the summation over l, m, and n is over all values for which the tensor indexing operations inside the summation are valid. We may also want to skip over some positions of the kernel in order to reduce the computational cost (at the expense of not extracting our features as finely). We can think of this as downsampling the output of the full convolution function. If we want to sample only every s pixels in each direction in the output, then we can define a downsampled convolution function c such that:

Z_{i,j,k} = c(K, V, s)_{i,j,k} = \sum_{l,m,n} [V_{l, j \times s + m, k \times s + n} K_{i,l,m,n}].     (9.1)
We refer to s as the stride of this downsampled convolution. It is also possible to define a separate stride for each direction of motion. TODO add a figure for this One essential feature of any convolutional network implementation is the ability to implicitly zero-pad the input V in order to make it wider. Without this feature, the width of the representation shrinks by the kernel width - 1 at each layer.
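A direct, loop-based sketch of the strided multi-channel operation in Eq. 9.1 may help; it is written for clarity rather than speed, and the (channels, rows, columns) tensor layout and the function name conv are assumptions of this example.

```python
import numpy as np

def conv(K, V, s):
    """Strided multi-channel convolution (without kernel flipping), as in Eq. 9.1.

    K: kernel tensor of shape (output channels, input channels, kernel rows, kernel cols)
    V: input tensor of shape (input channels, rows, cols)
    s: stride
    """
    out_c, in_c, kh, kw = K.shape
    _, h, w = V.shape
    out_h, out_w = (h - kh) // s + 1, (w - kw) // s + 1
    Z = np.zeros((out_c, out_h, out_w))
    for i in range(out_c):
        for j in range(out_h):
            for k in range(out_w):
                # Z[i,j,k] = sum_{l,m,n} V[l, j*s + m, k*s + n] * K[i,l,m,n]
                Z[i, j, k] = np.sum(V[:, j * s:j * s + kh, k * s:k * s + kw] * K[i])
    return Z

V = np.random.randn(3, 8, 8)        # e.g. a small 3-channel image
K = np.random.randn(4, 3, 3, 3)     # 4 output channels, 3x3 receptive fields
print(conv(K, V, s=2).shape)        # (4, 3, 3)
```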
Figure 9.11: The effect of zero padding on network size: Consider a convolutional network with a kernel of width six at every layer. In this example, do not use any pooling, so only the convolution operation itself shrinks the network size. Top) In this convolutional network, we do not use any implicit zero padding. This causes the representation to shrink by five pixels at each layer. Starting from an input of sixteen pixels, we are only able to have three convolutional layers, and the last layer does not ever move the kernel, so arguably only two of the layers are truly convolutional. The rate of shrinking can be mitigated by using smaller kernels, but smaller kernels are less expressive and some shrinking is inevitable in this kind of architecture. Bottom) By adding five implicit zeroes to each layer, we prevent the representation from shrinking with depth. This allows us to make an arbitrarily deep convolutional network.
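The shrinking arithmetic in Figure 9.11 is easy to reproduce; the helper below simply applies the standard stride-1 output-width formula, and the parameter names are choices of this sketch.

```python
def widths(input_width, kernel_width, n_layers, total_pad):
    """Representation width after each stride-1 convolutional layer."""
    w, trace = input_width, [input_width]
    for _ in range(n_layers):
        w = w + total_pad - kernel_width + 1   # output width = input + padding - kernel + 1
        trace.append(w)
    return trace

print(widths(16, 6, 3, total_pad=0))   # [16, 11, 6, 1]: shrinks by 5 per layer (top of Fig. 9.11)
print(widths(16, 6, 3, total_pad=5))   # [16, 16, 16, 16]: width preserved (bottom of Fig. 9.11)
```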
Figure 9.12: TODO
is then given by

Z_{i,j,k} = \sum_{l,m,n} [V_{l, j+m, k+n} w_{i,j,k,l,m,n}].
This is sometimes also called unshared convolution, because it is a similar operation to discrete convolution with a small kernel, but without sharing parameters across locations. TODO: Mehdi asks for a local convolution figure, showing layers in 1D topology and comparing it to fully connected layer Locally connected layers are useful when we know that each feature should be a function of a small part of space, but there is no reason to think that the same feature should occur across all of space. For example, if we want to tell if an image is a picture of a face, we only need to look for the mouth in the bottom half of the image. It can also be useful to make versions of convolution or locally connected layers in which the connectivity is further restricted, for example to constraint that each output channel i be a function of only a subset of the input channels l. TODO: explain more, this paragraph just kind of dies. include a figure Tiled convolution (Gregor and LeCun, 2010; Le et al., 2010) offers a compromise between a convolutional layer and a locally connected layer. Rather than learning a separate set of weights at every spatial location, we learn a set of kernels that we rotate through as we move through space. This means that immediately neighboring locations will have different filters, like in a locally connected layer, but the memory requirements for storing the parameters will increase only by a factor of the size of this set of kernels, rather than the size of the entire output feature map. See Fig. 9.12 for a graphical depiction of tiled convolution. To define tiled convolution algebraically, let k be a 6-D tensor, where two of the dimensions correspond to different locations in the output map. Rather than having a separate index for each location in the output map, output locations cycle through a set of t different choices of kernel stack in each direction. If t is equal to the output width, this is the same as a locally connected layer. Z i,j,k =
\sum_{l,m,n} V_{l, j+m, k+n} K_{i,l,m,n, j\%t, k\%t}

It is straightforward to generalize this equation to use a different tiling range for each dimension. Both locally connected layers and tiled convolutional layers have an interesting interaction with max-pooling: the detector units of these layers are driven by
different filters. If these filters learn to detect different transformed versions of the same underlying features, then the max-pooled units become invariant to the learned transformation (see Fig. 9.9). Convolutional layers are hard-coded to be invariant specifically to translation. Other operations besides convolution are usually necessary to implement a convolutional network. To perform learning, one must be able to compute the gradient with respect to the kernel, given the gradient with respect to the outputs. In some simple cases, this operation can be performed using the convolution operation, but many cases of interest, including the case of stride greater than 1, do not have this property. Convolution is a linear operation and can thus be described as a matrix multiplication (if we first reshape the input tensor into a flat vector). The matrix involved is a function of the convolution kernel. The matrix is sparse and each element of the kernel is copied to several elements of the matrix. It is not usually practical to implement convolution in this way, but it can be conceptually useful to think of it in this way. Multiplication by the transpose of the matrix defined by convolution is also a useful operation. This is the operation needed to backpropagate error derivatives through a convolutional layer, so it is needed to train convolutional networks that have more than one hidden layer. This same operation is also needed to compute the reconstruction in a convolutional autoencoder (or to perform the analogous role in a convolutional RBM, sparse coding model, etc.). Like the kernel gradient operation, this input gradient operation can be implemented using a convolution in some cases, but in the general case requires a third operation to be implemented. Care must be taken to coordinate this transpose operation with the forward propagation. The size of the output that the transpose operation should return depends on the zero padding policy and stride of the forward propagation operation, as well as the size of the forward propagation’s output map. In some cases, multiple sizes of input to forward propagation can result in the same size of output map, so the transpose operation must be explicitly told what the size of the original input was. It turns out that these three operations–convolution, backprop from output to weights, and backprop from output to inputs–are sufficient to compute all of the gradients needed to train any depth of feedforward convolutional network, as well as to train convolutional networks with reconstruction functions based on the transpose of convolution. See (Goodfellow, 2010) for a full derivation of the equations in the fully general multi-dimensional, multi-example case. To give a sense of how these equations work, we present the two dimensional, single example version here. Suppose we want to train a convolutional network that incorporates strided 267
convolution of kernel stack K applied to multi-channel image V with stride s is defined by c(K, V, s) as in equation 9.1. Suppose we want to minimize some loss function J(V, K). During forward propagation, we will need to use c itself to output Z, which is then propagated through the rest of the network and used to compute J. During backpropagation, we will receive a tensor G such that G_{i,j,k} = \frac{\partial}{\partial Z_{i,j,k}} J(V, K). To train the network, we need to compute the derivatives with respect to the weights in the kernel. To do so, we can use a function

g(G, V, s)_{i,j,k,l} = \frac{\partial}{\partial K_{i,j,k,l}} J(V, K) = \sum_{m,n} G_{i,m,n} V_{j, m \times s + k, n \times s + l}.
If this layer is not the bottom layer of the network, we’ll need to compute the gradient with respect to V in order to backpropagate the error farther down. To do so, we can use a function h(K, G, s)i,j,k =
\frac{\partial}{\partial V_{i,j,k}} J(V, K) = \sum_{\substack{l,m \\ \text{s.t. } s \times l + m = j}} \; \sum_{\substack{n,p \\ \text{s.t. } s \times n + p = k}} \; \sum_{q} K_{q,i,m,p} G_{q,l,n}.
We could also use h to define the reconstruction of a convolutional autoencoder, or the probability distribution over visible given hidden units in a convolutional RBM or sparse coding model. Suppose we have hidden units H in the same format as Z and we define a reconstruction R = h(K, H, s). In order to train the autoencoder, we will receive the gradient with respect to R as a tensor E. To train the decoder, we need to obtain the gradient with respect to K. This is given by g(H, E, s). To train the encoder, we need to obtain the gradient with respect to H. This is given by c(K, E, s). It is also possible to differentiate through g using c and h, but these operations are not needed for the backpropagation algorithm on any standard network architectures. Generally, we do not use only a linear operation in order to transform from the inputs to the outputs in a convolutional layer. We generally also add some bias term to each output before applying the nonlinearity. This raises the question of how to share parameters among the biases. For locally connected layers it is natural to give each unit its own bias, and for tiled convolution, it is natural to share the biases with the same tiling pattern as the kernels. For convolutional layers, it is typical to have one bias per channel of the output and share it across all locations within each convolution map. However, if the input is of known, fixed size, it is also possible to learn a separate bias at each location of the output map. Separating the biases may slightly reduce the statistical efficiency of the 268
model, but also allows the model to correct for differences in the image statistics at different locations. For example, when using implicit zero padding, detector units at the edge of the image receive less total input and may need larger biases.
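To make the three operations concrete, here is a hedged, loop-based sketch of the kernel-gradient function g and the input-gradient function h from the equations above, using the same (channels, rows, columns) layout as Eq. 9.1; the names kernel_grad and input_grad are inventions of this sketch, and any implementation of c (such as the earlier strided-convolution sketch) can be paired with them.

```python
import numpy as np

def kernel_grad(G, V, s, K_shape):
    """g(G, V, s)[i,j,k,l] = sum_{m,n} G[i,m,n] * V[j, m*s + k, n*s + l]: gradient w.r.t. the kernel."""
    out_c, out_h, out_w = G.shape
    dK = np.zeros(K_shape)
    for i in range(K_shape[0]):
        for j in range(K_shape[1]):
            for k in range(K_shape[2]):
                for l in range(K_shape[3]):
                    patch = V[j, k:k + out_h * s:s, l:l + out_w * s:s]   # V[j, m*s+k, n*s+l]
                    dK[i, j, k, l] = np.sum(G[i] * patch)
    return dK

def input_grad(K, G, s, V_shape):
    """h(K, G, s)[i,j,k]: gradient w.r.t. the input, i.e. the transpose of the convolution."""
    out_c, out_h, out_w = G.shape
    _, _, kh, kw = K.shape
    dV = np.zeros(V_shape)
    for q in range(out_c):
        for l in range(out_h):
            for n in range(out_w):
                for m in range(kh):
                    for p in range(kw):
                        # scatter each output gradient back through the kernel weights
                        dV[:, s * l + m, s * n + p] += K[q, :, m, p] * G[q, l, n]
    return dV

V = np.random.randn(3, 8, 8)
K = np.random.randn(4, 3, 3, 3)
G = np.random.randn(4, 3, 3)    # gradient arriving from above, same shape as the strided output Z
print(kernel_grad(G, V, 2, K.shape).shape)   # (4, 3, 3, 3), matches K
print(input_grad(K, G, 2, V.shape).shape)    # (3, 8, 8), matches V
```

Note that input_grad must be told the shape of V explicitly, echoing the remark above that several input sizes can produce the same output size, so the transpose operation cannot infer the original input size on its own.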
9.6
Structured Outputs
TODO show diagram of an exclusively convolutional net, like for image inpainting or segmentation (Farabet? Collobert?)
9.7
Convolutional Modules
TODO history of ReLU -> maxout -> NIN -> inception
9.8
Data Types
The data used with a convolutional network usually consists of several channels, each channel being the observation of a different quantity at some point in space or time. See Table 9.1 for examples of data types with different dimensionalities and number of channels. So far we have discussed only the case where every example in the train and test data has the same spatial dimensions. One advantage to convolutional networks is that they can also process inputs with varying spatial extents. These kinds of input simply cannot be represented by traditional, matrix multiplicationbased neural networks. This provides a compelling reason to use convolutional networks even when computational cost and overfitting are not significant issues. For example, consider a collection of images, where each image has a different width and height. It is unclear how to apply matrix multiplication. Convolution is straightforward to apply; the kernel is simply applied a different number of times depending on the size of the input, and the output of the convolution operation scales accordingly. Sometimes the output of the network is allowed to have variable size as well as the input, for example if we want to assign a class label to each pixel of the input. In this case, no further design work is necessary. In other cases, the network must produce some fixed-size output, for example if we want to assign a single class label to the entire image. In this case we must make some additional design steps, like inserting a pooling layer whose pooling regions scale in size proportional to the size of the input, in order to maintain a fixed number of pooled outputs. Note that the use of convolution for processing variable sized inputs only makes sense for inputs that have variable size because they contain varying amounts of observation of the same kind of thing–different lengths of recordings over time, 269
1-D, single channel. Audio waveform: The axis we convolve over corresponds to time. We discretize time and measure the amplitude of the waveform once per time step.

1-D, multi-channel. Skeleton animation data: Animations of 3-D computer-rendered characters are generated by altering the pose of a "skeleton" over time. At each point in time, the pose of the character is described by a specification of the angles of each of the joints in the character's skeleton. Each channel in the data we feed to the convolutional model represents the angle about one axis of one joint.

2-D, single channel. Audio data that has been preprocessed with a Fourier transform: We can transform the audio waveform into a 2-D tensor with different rows corresponding to different frequencies and different columns corresponding to different points in time. Using convolution over the time axis makes the model equivariant to shifts in time. Using convolution across the frequency axis makes the model equivariant to frequency, so that the same melody played in a different octave produces the same representation but at a different height in the network's output.

2-D, multi-channel. Color image data: One channel contains the red pixels, one the green pixels, and one the blue pixels. The convolution kernel moves over both the horizontal and vertical axes of the image, conferring translation equivariance in both directions.

3-D, single channel. Volumetric data: A common source of this kind of data is medical imaging technology, such as CT scans.

3-D, multi-channel. Color video data: One axis corresponds to time, one to the height of the video frame, and one to the width of the video frame.

Table 9.1: Examples of different formats of data that can be used with convolutional networks.
different widths of observations over space, etc. Convolution does not make sense if the input has variable size because it can optionally include different kinds of observations. For example, if we are processing college applications, and our features consist of both grades and standardized test scores, but not every applicant took the standardized test, then it does not make sense to convolve the same weights over both the features corresponding to the grades and the features corresponding to the test scores.
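As a small illustration of processing variable-sized inputs (an assumed 1-D setting, not an example from the text), the same kernel can be slid over sequences of any length, and pooling down to a few summary statistics yields a fixed-size description:

```python
import numpy as np

def conv1d_valid(x, k):
    """Valid 1-D cross-correlation: the output length scales with the input length."""
    n = len(x) - len(k) + 1
    return np.array([np.dot(x[i:i + len(k)], k) for i in range(n)])

kernel = np.array([0.25, 0.5, 0.25])            # one fixed set of parameters
for length in (50, 80, 125):                    # observations of varying extent
    x = np.random.randn(length)
    features = conv1d_valid(x, kernel)          # output length varies with the input
    summary = np.array([features.max(), features.mean()])   # fixed-size pooled summary
    print(length, features.shape, summary.shape)
```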
9.9
Efficient Convolution Algorithms
Modern convolutional network applications often involve networks containing more than one million units. Powerful implementations exploiting parallel computation resources, as discussed in Chapter 12.1, are essential. However, in many cases it is also possible to speed up convolution by selecting an appropriate convolution algorithm. Convolution is equivalent to converting both the input and the kernel to the frequency domain using a Fourier transform, performing point-wise multiplication of the two signals, and converting back to the time domain using an inverse Fourier transform. For some problem sizes, this can be faster than the naive implementation of discrete convolution. When a d-dimensional kernel can be expressed as the outer product of d vectors, one vector per dimension, the kernel is called separable. When the kernel is separable, naive convolution is inefficient. It is equivalent to composing d one-dimensional convolutions with each of these vectors. The composed approach is significantly faster than performing one d-dimensional convolution with their outer product. The kernel also takes fewer parameters to represent as vectors. If the kernel is w elements wide in each dimension, then naive multidimensional convolution requires O(w^d) runtime and parameter storage space, while separable convolution requires O(w × d) runtime and parameter storage space. Of course, not every convolution can be represented in this way. Devising faster ways of performing convolution or approximate convolution without harming the accuracy of the model is an active area of research. Even techniques that improve the efficiency of only forward propagation are useful because in the commercial setting, it is typical to devote many more resources to deployment of a network than to its training.
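The separable case is easy to demonstrate; the sketch below (assuming NumPy 1.20+ for sliding_window_view) applies a separable 3x3 kernel, the Sobel filter, either directly or as two one-dimensional passes, which agree exactly while the per-output cost drops from roughly w^d multiplies to w × d.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

row = np.array([1.0, 2.0, 1.0])
col = np.array([1.0, 0.0, -1.0])
K = np.outer(col, row)               # separable 3x3 kernel (the Sobel edge filter)

img = np.random.randn(32, 32)

# Direct 2-D valid cross-correlation with the full 3x3 kernel.
windows = sliding_window_view(img, K.shape)               # shape (30, 30, 3, 3)
full = np.einsum('ijkl,kl->ij', windows, K)

# Separable version: a 1-D pass down the columns, then a 1-D pass along the rows.
step1 = np.einsum('ijk,k->ij', sliding_window_view(img, 3, axis=0), col)     # (30, 32)
step2 = np.einsum('ijk,k->ij', sliding_window_view(step1, 3, axis=1), row)   # (30, 30)

print(np.allclose(full, step2))      # True: the composed 1-D convolutions reproduce the 2-D result
```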
9.10
Random or Unsupervised Features
Typically, the most expensive part of convolutional network training is learning the features. The output layer is usually relatively inexpensive due to the small
number of features provided as input to this layer after passing through several layers of pooling. When performing supervised training with gradient descent, every gradient step requires a complete run of forward propagation and backward propagation through the entire network. One way to reduce the cost of convolutional network training is to use features that are not trained in a supervised fashion. There are two basic strategies for obtaining convolution kernels without supervised training. One is to simply initialize them randomly. The other is to learn them with an unsupervised criterion. This approach allows the features to be determined separately from the classifier layer at the top of the architecture. One can then extract the features for the entire training set just once, essentially constructing a new training set for the last layer. Learning the last layer is then typically a convex optimization problem, assuming the last layer is something like logistic regression or an SVM. Random filters often work surprisingly well in convolutional networks (Jarrett et al., 2009b; Saxe et al., 2011; Pinto et al., 2011; Cox and Pinto, 2011). Saxe et al. (2011) showed that layers consisting of convolution following by pooling naturally become frequency selective and translation invariant when assigned random weights. They argue that this provides an inexpensive way to choose the architecture of a convolutional network: first evaluate the performance of several convolutional network architectures by training only the last layer, then take the best of these architectures and train the entire architecture using a more expensive approach. An intermediate approach is to learn the features, but using methods that do not require full forward and back-propagation at every gradient step. As with multilayer perceptrons, we use greedy layer-wise unsupervised pretraining, to train the first layer in isolation, then extract all features from the first layer only once, then train the second layer in isolation given those features, and so on. The canonical example of this is the convolutional deep belief network (Lee et al., 2009). Convolutional networks offer us the opportunity to take this strategy one step further than is possible with multilayer perceptrons. Instead of training an entire convolutional layer at a time, we can actually train a small but denselyconnected unsupervised model (such as PSD, described in Chapter 15.8.2, or k-means) of a single image patch. We can then use the weight matrices from this patch-based model to define the kernels of a convolutional layer. This means that it is possible to use unsupervised learning to train a convolutional network without ever using convolution during the training process. Using this approach, we can train very large models and incur a high computational cost only at inference time (Ranzato et al., 2007b; Jarrett et al., 2009b; Kavukcuoglu et al., 2010a; Coates et al., 2013). 272
As with other approaches to unsupervised pretraining, it remains difficult to tease apart the cause of some of the benefits seen with this approach. Unsupervised pretraining may offer some regularization relative to supervised training, or it may simply allow us to train much larger architectures due to the reduced computational cost of the learning rule.
9.11
The Neuroscientific Basis for Convolutional Networks
Convolutional networks are perhaps the greatest success story of biologically inspired artificial intelligence. Though convolutional networks have been guided by many other fields, some of the key design principles of neural networks were drawn from neuroscience. The history of convolutional networks begins with neuroscientific experiments long before the relevant computational models were developed. Neurophysiologists David Hubel and Torsten Wiesel collaborated for several years to determine many of the most basic facts about how the mammalian vision system works (Hubel and Wiesel, 1959, 1962, 1968). Their accomplishements were eventually recognized with a Nobel Prize. Their findings that have had the greatest influence on contemporary deep learning models were based on recording the activity of individual neurons in cats. By anesthetizing the cat, they could immobilize the cat’s eye and observe how neurons in the cat’s brain responded to images projected in precise locations on a screen in front of the cat. Their worked helped to characterize many aspects of brain function that are beyond the scope of this book. From the point of view of deep learning, we can focus on a simplified, cartoon view of brain function. In this simplified view, we focus on a part of the brain called V1, also known as the primary visual cortex. V1 is the first area of the brain that begins to perform significantly advanced processing of visual input. In this cartoon view, images are formed by light arriving in the eye and stimulating the retina, the light-sensitive tissue in the back of the eye. The neurons in the retina perform some simple preprocessing of the image but do not substantially alter the way it is represented. The image then passes through the optic nerve and a brain region called the lateral geniculate nucleus. The main role, as far as we are concerned here, of both of these anatomical regions is primarily just to carry the signal from the eye to V1, which is located at the back of the head. A convolutional network layer is designed to capture three properties of V1: 1. V1 is arranged in a spatial map. It actually has a two-dimensional structure mirroring the structure of the image in the retina. For example, light 273
arriving at the lower half of the retina affects only the corresponding half of V1. Convolutional networks capture this property by having their features defined in terms of two dimensional maps. 2. V1 contains many simple cells. A simple cell’s activity can to some extent be characterized by a linear function of the image in a small, spatially localized receptive field. The detector units of a convolutional network are designed to emulate these properties of simple cells. V1 also contains many complex cells. These cells respond to features that are similar to those detected by simple cells, but complex cells are invariant to small shifts in the position of the feature. This inspires the pooling units of convolutional networks. Complex cells are also invariant to some changes in lighting that cannot be captured simply by pooling over spatial locations. These invariances have inspired some of the cross-channel pooling strategies in convolutional networks, such as maxout units (Goodfellow et al., 2013a). Though we know the most about V1, it is generally believed that the same basic principles apply to other brain regions. In our cartoon view of the visual system, the basic strategy of detection followed by pooling is repeatedly applied as we move deeper into the brain. As we pass through multiple anatomical layers of the brain, we eventually find cells that respond to some specific concept and are invariant to many transformations of the input. These cells have been nicknamed “grandmother cells”— the idea is that a person could have a neuron that activates when seeing an image of their grandmother, regardless of whether she appears in the left or right side of the image, whether the image is a close-up of her face or zoomed out shot of her entire body, whether she is brightly lit, or in shadow, etc. These grandmother cells have been shown to actually exist in the human brain, in a region called the medial temporal lobe (Quiroga et al., 2005). Researchers tested whether individual neurons would respond to photos of famous individuals, and found what has come to be called the “Halle Berry neuron”: an individual neuron that is activated by the concept of Halle Berry. This neuron fires when a person sees a photo of Halle Berry, a drawing of Halle Berry, or even text containing the words “Halle Berry.” Of course, this has nothing to do with Halle Berry herself; other neurons responded to the presence of Bill Clinton, Jennifer Aniston, etc. These medial temporal lobe neurons are somewhat more general than modern convolutional networks, which would not automatically generalize to identifying a person or object when reading its name. The closest analog to a convolutional network’s last layer of features is a brain area called the inferotemporal cortex (IT). When viewing an object, information flows from the retina, through the LGN, to V1, then onward to V2, then V4, then IT. This happens within the first 100ms of glimpsing an object. If a person is allowed to continue looking at the 274
object for more time, then information will begin to flow backwards as the brain uses top-down feedback to update the activations in the lower level brain areas. However, if we interrupt the person’s gaze, and observe only the firing rates that result from the first 100ms of mostly feed-forward activation, then IT proves to be very similar to a convolutional network. Convolutional networks can predict IT firing rates, and also perform very similarly to (time limited) humans on object recognition tasks (DiCarlo, 2013). That being said, there are many differences between convolutional networks and the mammalian vision system. Some of these differences are well known to computational neuroscientists, but outside the scope of this book. Some of these differences are not yet known, because many basic questions about how the mamalian vision system works. As a brief list: • The human eye is mostly very low resolution, except for a tiny patch called the fovea. The fovea only observes an area about the size of a thumbnail held at arms length. Though we feel as if we can see an entire scene in high resolution, this is an illusion created by the subconscious part of our brain, as it stitches together several glimpses of small areas. Most convolutional networks actual receive large full resolution photographs as input. • The human visual system is integrated with many other senses, such as hearing, and factors like our moods and thoughts. Convolutional networks so far are purely visual. • The human visual system does much more than just recognize objects. It is able to understand entire scenes including many objects and relationships between objects, and processes rich 3-D geometric information needed for our bodies to interface with the world. Convolutional networks have been applied to some of these problems but these applications are in their infancy. • Even simple brain areas like V1 are heavily impacted by feedback from higher levels. Feedback has been explored extensively in neural network models but has not yet been shown to offer a compelling improvement. • While feed-forward IT firing rates capture much of the same information as convolutional network features, it’s not clear how similar the intermediate computations are. The brain probably uses very different activation and pooling functions. An individual neuron’s activation probably is not wellcharacterized by a single linear filter response. A recent model of V1 involves multiple quadratic filters for each neuron (Rust et al., 2005). Indeed our cartoon picture of “simple cells” and “complex cells” might create a nonexistent distinction; simple cells and complex cells might both be the same 275
kind of cell but with their “parameters” enabling a continuum of behaviors ranging from what we call “simple” to what we call “complex.” It’s also worth mentioning that neuroscience has told us relatively little about how to train convolutional networks. Model structures inspired by the work of Hubel and Wiesel date back to the Neocognitron (Fukushima, 1980) but the Neocognitron relied on a relatively heuristic learning algorithm. Convolutional networks did not begin to work well until they were combined with the backpropagation algorithm (LeCun et al., 1989), which was not inspired by any neuroscientific observation and is considered by some to be biologically implausible. So far we have described how simple cells are roughly linear and selective for certain features, complex cells are more non-linear and become invariant to some transformations of these simple cell features, and stacks of layers that alternate between selectivity and invariance can yield grandmother cells for very specific phenomena. We have not yet described precisely what these individual cells detect. In a deep, nonlinear network, it can be difficult to understand the function of individual cells. Simple cells in the first layer are easier to analyze, because their responses are driven by a linear function. In an artificial neural network, we can just display an image of the kernel to see what the corresponding channel of a convolutional layer responds to. In a biological neural network, we do not have access to the weights themselves. Instead, we put an electrode in the neuron itself, display several samples of white noise images in front of the animal’s retina, and record how each of these samples causes the neuron to activate. We can then fit a linear model to these responses in order to obtain an approximation of the neuron’s weights. This approach is known as reverse correlation (Ringach and Shapley, 2004). Reverse correlation shows us that most V1 cells have weights that are described by Gabor functions. The Gabor function describes the weight at a 2-D point in the image. We can think of an image as being a function of 2-D coordinates, I(x, y). Likewise, we can think of a simple cell as sampling the image at a set of locations, defined by a set of x coordinates X and a set of y coordinates, Y, and applying weights that are also a function of the location, w(x, y). From this point of view, the response of a simple cell to an image is given by

s(I) = \sum_{x \in X} \sum_{y \in Y} w(x, y) I(x, y).

Specifically, w(x, y) takes the form of a Gabor function:

w(x, y; \alpha, \beta_x, \beta_y, f, \phi, x_0, y_0, \tau) = \alpha \exp(-\beta_x x'^2 - \beta_y y'^2) \cos(f x' + \phi),

where

x' = (x - x_0) \cos(\tau) + (y - y_0) \sin(\tau)
and

y' = -(x - x_0) \sin(\tau) + (y - y_0) \cos(\tau).

Here, α, β_x, β_y, f, φ, x_0, y_0, and τ are parameters that control the properties of the Gabor function. Fig. 9.13 shows some examples of Gabor functions with different settings of these parameters. The parameters x_0, y_0, and τ define a coordinate system. We translate and rotate x and y to form x' and y'. Specifically, the simple cell will respond to image features centered at the point (x_0, y_0), and it will respond to changes in brightness as we move along a line rotated τ radians from the horizontal. Viewed as a function of x' and y', the function w then responds to changes in brightness as we move along the x' axis. It has two important factors: one is a Gaussian function and the other is a cosine function. The Gaussian factor α exp(-β_x x'^2 - β_y y'^2) can be seen as a gating term that ensures the simple cell will only respond to values near where x' and y' are both zero, in other words, near the center of the cell's receptive field. The scaling factor α adjusts the total magnitude of the simple cell's response, while β_x and β_y control how quickly its receptive field falls off. The cosine factor cos(f x' + φ) controls how the simple cell responds to changing brightness along the x' axis. The parameter f controls the frequency of the cosine and φ controls its phase offset. Altogether, this cartoon view of simple cells means that a simple cell responds to a specific spatial frequency of brightness in a specific direction at a specific location. They are most excited when the wave of brightness in the image has the same phase as the weights (i.e., when the image is bright where the weights are positive and dark where the weights are negative) and are most inhibited when the wave of brightness is fully out of phase with the weights (i.e., when the image is dark where the weights are positive and bright where the weights are negative). The cartoon view of a complex cell is that it computes the L^2 norm of the 2-D vector containing two simple cells' responses: c(I) = \sqrt{s_0(I)^2 + s_1(I)^2}. An important special case occurs when s_1 has all of the same parameters as s_0 except for φ, and φ is set such that s_1 is one quarter cycle out of phase with s_0. In this case, s_0 and s_1 form a quadrature pair. A complex cell defined in this way responds when the Gaussian reweighted image I(x, y) exp(-β_x x'^2 - β_y y'^2) contains a high amplitude sinusoidal wave with frequency f in direction τ near (x_0, y_0), regardless of the phase offset of this wave. In other words, the complex cell is invariant to small translations of the image in direction τ, or to negating the image (replacing black with white and vice versa). Some of the most striking correspondences between neuroscience and machine learning come from visually comparing the features learned by machine learning models with those employed by V1. Olshausen and Field (1996) showed that
Figure 9.13: Gabor functions with a variety of parameter settings. White indicates large positive weight, black indicates large negative weight, and the background gray corresponds to zero weight. Left) Gabor functions with different values of the parameters that control the coordinate system: x 0 , y0 , and τ . Each gabor function in this grid is assigned a value of x0 and y0 proportional to its position in its grid, and τ is chosen so that each Gabor is sensitive to the direction radiating out from the center of the grid. For the other two plots, x0 , y0 , and τ are fixed to zero. Center) Gabor functions with different Gaussian scale parameters betax and β y . Gabor functions are arranged in increasing width (decreasing β x ) as we move left to right through the grid, and increasing height (decreasing βy ) as we move top to bottom. For the other two plots, the β values are fixed to 1.5× the image width. Right) Gabor functions with different sinusoid parameters f and φ. As we move top to bottom, f increases, and as we move left to right, φ increases. For the other two plots, φ is fixed to 0 and f is fixed to 5× the image width.
Figure 9.14: Many machine learning algorithms learn features that detect edges or specific colors of edges when applied to natural images. These feature detectors are reminiscent of the Gabor functions known to be present in primary visual cortex. Left) Weights learned by an unsupervised learning algorithm (spike and slab sparse coding) applied to small image patches. Right) Convolution kernels learned by the first layer of a fully supervised convolutional maxout network. Neighboring pairs of filters drive the same maxout unit.
a simple unsupervised learning algorithm, sparse coding, learns features with receptive fields similar to those of simple cells. Since then, we have found that an extremely wide variety of statistical learning algorithms learn features with Gabor-like functions when applied to natural images. This includes most deep learning algorithms, which learn these features in their first layer. Fig. 9.14 shows some examples. Because so many different learning algorithms learn edge detectors, it is difficult to conclude that any specific learning algorithm is the “right” model of the brain just based on the features that it learns (though it can certainly be a bad sign if an algorithm does not learn some sort of edge detector when applied to natural images). These features are an important part of the statistical structure of natural images and can be recovered by many different approaches to statistical modeling. See Hyv¨ arinen et al. (2009) for a review of the field of natural image statistics.
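For readers who want to reproduce filters like those in Fig. 9.13, the Gabor parametrization given earlier is straightforward to sample on a grid; the grid range and parameter values below are arbitrary choices of this sketch.

```python
import numpy as np

def gabor(size, alpha, beta_x, beta_y, f, phi, x0, y0, tau):
    """Sample the Gabor function w(x, y; alpha, beta_x, beta_y, f, phi, x0, y0, tau) on a grid."""
    xs, ys = np.meshgrid(np.linspace(-1, 1, size), np.linspace(-1, 1, size), indexing='ij')
    xp = (xs - x0) * np.cos(tau) + (ys - y0) * np.sin(tau)     # rotated, translated x'
    yp = -(xs - x0) * np.sin(tau) + (ys - y0) * np.cos(tau)    # rotated, translated y'
    return alpha * np.exp(-beta_x * xp ** 2 - beta_y * yp ** 2) * np.cos(f * xp + phi)

# One filter selective for a particular orientation and spatial frequency.
w = gabor(size=21, alpha=1.0, beta_x=4.0, beta_y=4.0, f=10.0, phi=0.0,
          x0=0.0, y0=0.0, tau=0.0)
print(w.shape)    # (21, 21); plotting w as an image gives a Fig. 9.13-style filter
```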
9.12
Convolutional Networks and the History of Deep Learning
Convolutional networks have played an important role in the history of deep learning. They are a key example of a successful application of insights obtained by studying the brain to machine learning applications. They were also some of the first deep models to perform well, long before arbitrary deep models were considered viable. Convolutional networks were also some of the first neural networks to solve important commercial applications and remain at the forefront of commercial applications of deep learning today. TODO conv nets were some of first working deep backprop nets It is not entirely clear why convolutional networks succeeded when general backpropagation networks were considered to have failed. It may simply be that convolutional networks were more computationally efficient than fully connected networks, so it was easier to run multiple experiments with them and tune their implementation and hyperparameters. Larger networks also seem to be easier to train. With modern hardware, fully connected networks appear to perform reasonably on many tasks, even when using datasets that were available and activation functions that were popular during the times when fully connected networks were believed not to work well. It may be that the primary barriers to the success of neural networks were psychological. Whatever the case, it is fortunate that convolutional networks performed well and paved the way to the acceptance of neural networks in general. TODO early commercial applications (possibly just ref to applications chapter) TODO contests won with conv nets TODO current commerical applications (possibly just ref to applications chapter)
Chapter 10
Sequence Modeling: Recurrent and Recursive Nets One of the early ideas found in machine learning and statistical models of the 80’s is that of sharing parameters1 across different parts of a model, allowing to extend and apply the model to examples of different forms and generalize across them, e.g. with examples of different lengths, in the case of sequential data. This can be found in hidden Markov models (HMMs) (Rabiner and Juang, 1986), which were the dominant technique for speech recognition for about 30 years. These models of sequences are described a bit more in Section 10.8.3 and involve parameters, such as the state-to-state transition matrix P (st | st−1 ), which are re-used for every time step t, i.e., the above probability depends only on the value of s t and s t−1 but not on t as such. This allows one to model variable length sequences, whereas if we had specific parameters for each value of t, we could not generalize to sequence lengths not seen during training, nor share statistical strength across different sequence lengths and across different positions in time. Such sharing is particularly important when, like in speech, the input sequence can be stretched non-linearly, i.e., some parts (like vowels) may last longer in different examples. It means that the absolute time step at which an event occurs is meaningless: it only makes sense to consider the event in some context that somehow captures what has happened before. This sharing across time can also be found in a recurrent neural network (Rumelhart et al., 1986c) or RNN 2 : the same weights are used for different instances of the artificial neurons at different time steps, allowing us to apply the network to input sequences of different lengths. This idea is made 1
(Footnote 1) See Section 7.8 for an introduction to the concept of parameter sharing. (Footnote 2) Unfortunately, the RNN acronym is sometimes also used for denoting Recursive Neural Networks. However, since the RNN acronym has been around for much longer, we suggest keeping this acronym for Recurrent Neural Networks.
more explicit in the early work on time-delay neural networks (Lang and Hinton, 1988; Waibel et al., 1989), where a fully connected network is replaced by one with local connections that are shared across different temporal instances of the hidden units. Such networks are the ancestors of convolutional neural networks, covered in more detail in Section 9. Recurrent nets are covered below in Section 10.2. As shown in Section 10.1 below, the flow graph (a notion introduced in Section 6.4 in the case of MLPs) associated with a recurrent network is structured like a chain, as explained next. Recurrent neural networks have been generalized into recursive neural networks, in which the structure can be more general, i.e., and it is typically viewed as a tree. Recursive neural networks are discussed in more detail in Section 10.5. For a good textbook on RNNs, see Graves (2012).
10.1
Unfolding Flow Graphs and Sharing Parameters
A flow graph is a way to formalize the structure of a set of computations, such as those involved in mapping inputs and parameters to outputs and loss. Please refer to Section 6.4 for a general introduction. In this section we explain the idea of unfolding a recursive or recurrent computation into a flow graph that has a repetitive structure, typically corresponding to a chain of events. For example, consider the classical form of a dynamical system:

s_t = f_θ(s_{t−1})     (10.1)

where s_t is called the state of the system. The unfolded flow graph of such a system is illustrated in Figure 10.1.
Figure 10.1: Classical dynamical system equation 10.1 illustrated as an unfolded flow graph. Each node represents the state at some time t and the function f_θ maps the state at t to the state at t + 1. The same parameters (the same function f_θ) are used for all time steps.
As another example, let us consider a dynamical system driven by an external signal x_t,

s_t = f_θ(s_{t−1}, x_t)     (10.2)

illustrated in Figure 10.2, where we see that the state now contains information about the whole past sequence, i.e., the above equation implicitly defines a
function

s_t = g_t(x_t, x_{t−1}, x_{t−2}, . . . , x_2, x_1)     (10.3)

which maps the whole past sequence (x_t, x_{t−1}, x_{t−2}, . . . , x_2, x_1) to the current state. Equation 10.2 is actually part of the definition of a recurrent net. We can think of s_t as a kind of summary of the past sequence of inputs up to t. Note that this summary is in general necessarily lossy, since it maps an arbitrary length sequence (x_t, x_{t−1}, x_{t−2}, . . . , x_2, x_1) to a fixed length vector s_t. Depending on the training criterion, this summary might selectively keep some aspects of the past sequence with more precision than other aspects. For example, if the RNN is used in statistical language modeling, typically to predict the next word given previous words, it may not be necessary to distinctly keep track of all the bits of information, only those required to predict the rest of the sentence. The most demanding situation is when we ask s_t to be rich enough to allow one to approximately recover the input sequence, as in auto-encoder frameworks (Chapter 15). If we had to define a different function g_t for each possible sequence length (imagine a separate neural network, each with a different input size), each with its own parameters, we would not get any generalization to sequences of a size not seen in the training set. Furthermore, one would need to see a lot more training examples, because a separate model would have to be trained for each sequence length, and it would need a lot more parameters (proportionally to the size of the input sequence). It could not generalize what it learns from what happens at a position t to what could happen at a position t′ ≠ t. By instead defining the state through a recurrent formulation as in Eq. 10.2, the same parameters are used for any sequence length, allowing much better generalization properties.
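A tiny sketch may make the parameter sharing explicit; the tanh-affine choice of f_θ below is only an illustrative assumption, since Eq. 10.2 leaves f_θ abstract.

```python
import numpy as np

def f(s_prev, x_t, theta):
    """One step of s_t = f_theta(s_{t-1}, x_t); here a toy affine update with a tanh nonlinearity."""
    W, U = theta
    return np.tanh(W @ s_prev + U @ x_t)

def g(x_sequence, theta, s0):
    """The unfolded map g_t of Eq. 10.3: the whole past sequence is reduced to the current state."""
    s = s0
    for x_t in x_sequence:       # the very same theta is reused at every time step
        s = f(s, x_t, theta)
    return s

rng = np.random.default_rng(0)
theta = (rng.normal(size=(4, 4)), rng.normal(size=(4, 3)))
s0 = np.zeros(4)
for T in (5, 12):                # sequences of different lengths, one fixed set of parameters
    xs = rng.normal(size=(T, 3))
    print(T, g(xs, theta, s0).shape)    # the state is always a fixed-length vector, here (4,)
```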
Figure 10.2: Left: input processing part of a recurrent neural network, seen as a circuit. The black square indicates a delay of 1 time step. Right: the same seen as an unfolded flow graph, where each node is now associated with one particular time instance.
Equation 10.2 can be drawn in two different ways. One is inspired by how a physical implementation (such as a real neural network) might look, i.e., like a circuit which operates in real time, as in the left of Figure 10.2. The other is as a flow graph, in which the computations occurring at different time steps in the circuit are unfolded as different nodes of the flow graph, as in
the right of Figure 10.2. What we call unfolding is the operation that maps a circuit as in the left side of the figure to a flow graph with repeated pieces as in the right side. Note how the unfolded graph now has a size that depends on the sequence length. The black square indicates a delay of 1 time step on the recurrent connection, from the state at time t to the state at time t + 1. The other important observation to make from Figure 10.2 is that the same parameters (θ) are shared over different parts of the graph, corresponding here to different time steps.
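To make the unfolding operation concrete, here is a minimal sketch in Python/NumPy of how a recurrence such as Eq. 10.2 can be unrolled into a chain of identical computations that all reuse the same parameters; the function and parameter names are illustrative assumptions, not part of any particular library.

```python
import numpy as np

def unfold(f, theta, s0, xs):
    """Unfold the recurrence s_t = f(s_{t-1}, x_t; theta) over an input
    sequence xs, reusing the same parameters theta at every time step."""
    states = []
    s = s0
    for x in xs:
        s = f(s, x, theta)   # same f, same theta, at every step
        states.append(s)
    return states

# A toy state-transition function: s_t = tanh(W s_{t-1} + U x_t)
def f(s, x, theta):
    W, U = theta
    return np.tanh(W @ s + U @ x)

rng = np.random.default_rng(0)
n_state, n_in, T = 4, 3, 6
theta = (0.5 * rng.standard_normal((n_state, n_state)),
         rng.standard_normal((n_state, n_in)))
xs = [rng.standard_normal(n_in) for _ in range(T)]
states = unfold(f, theta, np.zeros(n_state), xs)
print(len(states), states[-1].shape)   # 6 (4,)
```

The resulting list of states is exactly the chain of nodes of the unfolded flow graph on the right of Figure 10.2.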
10.2 Recurrent Neural Networks
Armed with the ideas introduced in the previous section, we can design a wide variety of recurrent circuits, which are compact and simple to understand visually. As we will explain, we can automatically obtain their equivalent unfolded graphs, which are useful computationally and also help focus on the idea of information flowing forward in time (computing outputs and losses) and backward in time (computing gradients).
Figure 10.3: Left: vanilla recurrent network circuit with hidden-to-hidden recurrence, seen as a circuit, with weight matrices U, V, W for the three different kinds of connections (input-to-hidden, hidden-to-output, and hidden-to-hidden, respectively). Each circle indicates a whole vector of activations. Right: the same seen as a time-unfolded flow graph, where each node is now associated with one particular time instance.
Some of the early circuit designs for recurrent neural networks are illustrated in Figures 10.3, 10.4 and 10.6. Figure 10.3 shows the vanilla recurrent network whose equations are laid down below in Eq. 10.4, and which has been shown to be a universal approximation machine for sequences, i.e., able to implement a Turing machine (Siegelmann and Sontag, 1991; Siegelmann, 1995; Hyotyniemi, 1996). The vanilla recurrent network of Figure 10.3 corresponds to the following forward propagation equations, if we assume that hyperbolic tangent non-linearities are used in the hidden units and softmax is used in the output (for classification
Figure 10.4: Left: RNN circuit whose recurrence is only through the output. Right: computational flow graph unfolded in time. At each t, the input is xt , the hidden layer activations ht , the output ot , the target y t and the loss Lt. Such an RNN is less powerful (can express a smaller set of functions) than those in the family represented by Figure 10.3 but may be easier to train because they can exploit “teacher forcing”, i.e., constraining some of the units involved in the recurrent loop (here the output units) to take some target values during training. This architecture is less powerful because the only state information (carrying the information about the past) is the previous prediction. Unless the prediction is very high-dimensional and rich, this will usually miss important information from the past.
problems):

a_t = b + W s_{t−1} + U x_t
s_t = tanh(a_t)
o_t = c + V s_t
p_t = softmax(o_t)    (10.4)
where the parameters are the bias vectors b and c along with the weight matrices U, V and W, respectively for input-to-hidden, hidden-to-output, and hidden-to-hidden connections. This is an example of a recurrent network that maps an input sequence to an output sequence of the same length. The total loss for a given input/target sequence pair (x, y) would then be just the sum of the losses over all the time steps, e.g.,

L(x, y) = Σ_t L_t = Σ_t − log p_{t, y_t}    (10.5)
where y_t is the category that should be associated with time step t in the output sequence. On the other hand, the network with output recurrence shown in Figure 10.4 has a more limited memory or state, which is its output, i.e., the prediction of the previous target, which potentially limits its expressive power, but also makes it easier to train. Indeed, the "intermediate state" of the corresponding
unfolded deep network is not hidden anymore: targets are available to guide this intermediate representation, which should make it easier to train. In general, the state of the RNN must be sufficiently rich to store a summary of the past sequence that is enough to properly predict the future target values. Constraining the state to be the visible variable y_t itself is therefore in general not enough to learn most tasks of interest, unless, given the sequence of inputs x_t, y_t contains all the required information about the past y's that is required to predict the future y's.
Figure 10.5: Illustration of teacher forcing for RNNs, which comes out naturally from the log-likelihood training objective (such as in Eq. 10.5). There are two ways in which the output variable can be fed back as input to update the next state h_t: what is fed back is either the sample ŷ_t generated from the RNN model's output distribution P(y_t | h_t) (dashed arrow) or the actual "correct" output y_t coming from the training data pair (x_t, y_t) (dotted arrow). The former is what is done when one generates a sequence from the model, and the latter is teacher forcing and what is done during training.
Teacher forcing is the training process in which the fed back inputs are not the predicted outputs but the targets themselves, as illustrated in Figure 10.5. The disadvantage of strict teacher forcing is that if the network is going to be later used in an open-loop mode, i.e., with the network outputs (or samples from the output distribution) fed back as input, then the kind of inputs that the network will have seen during training could be quite different from the kind of inputs that it will see at test time when the network is run in generative mode, potentially yielding very poor generalization. One way to mitigate this problem is to train with both teacher-forced inputs and with free-running inputs, e.g., predicting the correct target a number of steps in the future through the unfolded recurrent output-to-input paths. In this way, the network can learn to take into account input conditions (such as those it generates itself in the free-running mode) not
seen during training and how to map the state back towards one that will make the network generate proper outputs after a few steps. Another approach (Bengio et al., 2015) to mitigate the gap between the generative mode of RNNs and how they are trained (with teacher forcing, i.e., maximum likelihood) randomly chooses to use generated values or actual data values as input, and exploits a curriculum learning strategy to gradually use more of the generated values as input.
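The following toy sketch illustrates the idea of mixing teacher-forced and free-running inputs during training, in the spirit of the curriculum strategy mentioned above; the function name and the per-step sampling rule are illustrative assumptions, not a prescription from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step_inputs(targets, model_samples, p_teacher):
    """Choose, independently at each time step, whether the value fed back
    into the recurrence is the ground-truth target (teacher forcing) or the
    model's own sample (free-running). p_teacher can be decreased gradually
    during training so the model sees more of its own outputs over time."""
    inputs = []
    for y_true, y_model in zip(targets, model_samples):
        use_teacher = rng.random() < p_teacher
        inputs.append(y_true if use_teacher else y_model)
    return inputs

# toy usage: three time steps, scalar outputs
targets = [1.0, 0.0, 1.0]
model_samples = [0.9, 0.2, 0.7]
print(train_step_inputs(targets, model_samples, p_teacher=0.8))
```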
Figure 10.6: Time-Unfolded recurrent neural network with a single output at the end of the sequence. Such a network can be used to summarize a sequence and produce a fixed-size representation used as input for further processing. There might be a target right at the end (like in the figure) or the gradient on the output o t can be obtained by back-propagating from further downstream modules.
10.2.1 Computing the Gradient in a Recurrent Neural Network
Using the generalized back-propagation algorithm (for arbitrary flow graphs) introduced in Section 6.4, one can obtain the so-called Back-Propagation Through Time (BPTT) algorithm. Once we know how to compute gradients, we can in principle apply any of the general-purpose gradient-based techniques to train an RNN. These general-purpose techniques were introduced in Section 4.3 and developed in greater depth in Chapter 8. Let us thus work out how to compute gradients by BPTT for the RNN equations above (Eqs. 10.4 and 10.5). The nodes of our flow graph will be the sequence of x_t's, s_t's, o_t's, L_t's, and the parameters U, V, W, b, c. For each node a we need to compute the gradient ∇_a L recursively, based on the gradient computed
at nodes that follow it in the graph. We start the recursion with the nodes immediately preceding the final loss:

∂L/∂L_t = 1

and the gradient on output i at time step t, for all i, t:

∂L/∂o_{t,i} = (∂L/∂L_t)(∂L_t/∂o_{t,i}) = p_{t,i} − 1_{i,y_t}

and work our way backwards, starting from the end of the sequence, say T, at which point s_T only has o_T as descendent:

∇_{s_T} L = ∇_{o_T} L (∂o_T/∂s_T) = ∇_{o_T} L V.

Note how the above equation is vector-wise and corresponds, scalar-wise, to ∂L/∂s_{T,j} = Σ_i (∂L/∂o_{T,i}) V_{ij}. We can then iterate backwards in time to back-propagate gradients through time, from t = T − 1 down to t = 1, noting that s_t (for t < T) has as descendents both o_t and s_{t+1}:

∇_{s_t} L = ∇_{s_{t+1}} L (∂s_{t+1}/∂s_t) + ∇_{o_t} L (∂o_t/∂s_t) = ∇_{s_{t+1}} L diag(1 − s²_{t+1}) W + ∇_{o_t} L V

where diag(1 − s²_{t+1}) indicates the diagonal matrix containing the elements 1 − s²_{t+1,i}, i.e., the derivative of the hyperbolic tangent associated with hidden unit i at time t + 1. Once the gradients on the internal nodes of the flow graph are obtained, we can obtain the gradients on the parameter nodes, which have descendents at all the time steps:

∇_c L = Σ_t ∇_{o_t} L (∂o_t/∂c) = Σ_t ∇_{o_t} L
∇_b L = Σ_t ∇_{s_t} L (∂s_t/∂b) = Σ_t ∇_{s_t} L diag(1 − s²_t)
∇_V L = Σ_t ∇_{o_t} L (∂o_t/∂V) = Σ_t ∇_{o_t} L s_t^⊤
∇_W L = Σ_t ∇_{s_t} L (∂s_t/∂W) = Σ_t ∇_{s_t} L diag(1 − s²_t) s_{t−1}^⊤

Note in the above (and elsewhere) that whereas ∇_{s_t} L refers to the full influence of s_t through all paths from s_t to L, ∂s_t/∂W or ∂s_t/∂b refers to the immediate effect of the denominator on the numerator, i.e., when we consider the denominator as a parent of the numerator and only that direct dependency is accounted for. Otherwise, we would get "double counting" of derivatives.
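As a concrete illustration, here is a minimal NumPy sketch of the forward pass of Eqs. 10.4-10.5 followed by BPTT as derived above; gradients for U, V, W, b, c are accumulated backward in time. Variable names, shapes, and the toy data are illustrative assumptions.

```python
import numpy as np

def softmax(o):
    e = np.exp(o - o.max())
    return e / e.sum()

def rnn_forward_backward(x, y, U, V, W, b, c, s0):
    """Forward pass (Eq. 10.4-10.5) and BPTT gradients for the vanilla RNN.
    x: list of input vectors, y: list of integer targets."""
    T = len(x)
    s, p = [s0], []
    for t in range(T):                       # forward in time
        a = b + W @ s[t] + U @ x[t]
        s.append(np.tanh(a))
        p.append(softmax(c + V @ s[t + 1]))
    loss = -sum(np.log(p[t][y[t]]) for t in range(T))
    dU, dV, dW = np.zeros_like(U), np.zeros_like(V), np.zeros_like(W)
    db, dc = np.zeros_like(b), np.zeros_like(c)
    ds_next = np.zeros_like(s0)              # gradient arriving from s_{t+1}
    for t in reversed(range(T)):             # backward in time
        do = p[t].copy()
        do[y[t]] -= 1.0                      # dL/do_t = p_t - onehot(y_t)
        ds = V.T @ do + ds_next              # dL/ds_t via both o_t and s_{t+1}
        da = ds * (1.0 - s[t + 1] ** 2)      # through the tanh non-linearity
        dc += do
        dV += np.outer(do, s[t + 1])
        db += da
        dW += np.outer(da, s[t])
        dU += np.outer(da, x[t])
        ds_next = W.T @ da                   # passed to step t-1
    return loss, (dU, dV, dW, db, dc)

rng = np.random.default_rng(0)
n_in, n_hid, n_out, T = 3, 5, 4, 7
U = 0.1 * rng.standard_normal((n_hid, n_in))
V = 0.1 * rng.standard_normal((n_out, n_hid))
W = 0.1 * rng.standard_normal((n_hid, n_hid))
b, c = np.zeros(n_hid), np.zeros(n_out)
x = [rng.standard_normal(n_in) for _ in range(T)]
y = [int(rng.integers(n_out)) for _ in range(T)]
loss, grads = rnn_forward_backward(x, y, U, V, W, b, c, np.zeros(n_hid))
print(loss)
```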
10.2.2 Recurrent Networks as Generative Directed Acyclic Models
Up to here, we have not clearly stated what the losses L_t associated with the outputs o_t of a recurrent net should be. It is because there are many possible ways in which RNNs can be used. In this section, we consider the most common case where the RNN models a probability distribution over a sequence of observations. When we consider a predictive log-likelihood training objective such as Eq. 10.5, we are training the RNN to estimate the conditional distribution of the next sequence element y_t given the past inputs x_s and targets y_s (for s < t). As we show below, this corresponds to viewing the RNN as a directed graphical model, a notion introduced in Section 3.14. In this case, the set of random variables of interest is the sequence of y_t's (given the sequence of x_t's), and we are modeling the joint probability of the y_t's given the x_t's. To keep things simple for starters, let us assume that there is no conditioning input sequence in addition to the output sequence, i.e., that the target output at the next time step is the next input. The random variable of interest is thus the sequence of vectors X = (x_1, x_2, . . . , x_T), and we parametrize the joint distribution of these vectors via

P(X) = P(x_1, . . . , x_T) = ∏_{t=1}^{T} P(x_t | x_{t−1}, x_{t−2}, . . . , x_1)    (10.6)
using the chain rule of conditional probabilities (Section 3.6), and where the right-hand side of the bar is empty for t = 1, of course. Hence the negative log-likelihood of X according to such a model is

L = Σ_t L_t
where L_t = − log P(x_t | x_{t−1}, x_{t−2}, . . . , x_1). In general directed graphical models, x_t can be predicted using only a subset of its predecessors (x_1, . . . , x_{t−1}). However, for RNNs, the graphical model is generally fully connected, not removing any dependency a priori. This can be achieved efficiently through the recurrent parametrization, such as in Eq. 10.2, since s_t is trained to summarize whatever is required from the whole previous sequence (Eq. 10.3). Hence, instead of cutting statistical complexity by removing arcs in the directed graphical model for (x_1, . . . , x_T), as is done in most of the work on directed graphical models, the core idea of recurrent networks is that we introduce a state variable which decouples all the past and future observations, but
we make that state variable a function of the past, through the recurrence, Eq. 10.2. Consequently, the number of parameters required to parametrize P(x_t | x_{t−1}, x_{t−2}, . . . , x_1) does not grow exponentially with t (as it would if we parametrized that probability by a straight probability table, when the data is discrete) but remains constant with t. It only grows with the dimension of the state s_t. The price to be paid for that great advantage is that optimizing the parameters may be more difficult, as discussed below in Section 10.7. The decomposition of the likelihood thus becomes:

P(x) = ∏_{t=1}^{T} P(x_t | g_t(x_{t−1}, x_{t−2}, . . . , x_1))
where s_t = g_t(x_t, x_{t−1}, x_{t−2}, . . . , x_2, x_1) = f_θ(s_{t−1}, x_t). Note that if the self-recurrence function f_θ is learned, it can discard some of the information in some of the past values x_{t−k} that are not needed for predicting the future data. In fact, because the state generally has a fixed dimension smaller than the length of the sequences (times the dimension of the input), it has to discard some information. However, we leave it to the learning procedure to choose what information to keep and what information to throw away. The above decomposition of the joint probability of a sequence of variables into ordered conditionals precisely corresponds to the sequence of computations performed by an RNN. The target at each time step t is the next element in the sequence, while the input at each time step is the previous element in the sequence (with all previous inputs summarized in the state), and the output is interpreted as parametrizing the probability distribution of the target given the state. This is illustrated in Figure 10.7. If the RNN is actually going to be used to generate sequences, one must also incorporate in the output information allowing to stochastically decide when to stop generating new output elements. This can be achieved in various ways. In the case when the output is a symbol taken from a vocabulary, one can add a special symbol corresponding to the end of a sequence. When that symbol is generated, a complete sequence has been generated. The target for that special symbol occurs exactly once per sequence, as the last target after the last output element x_T. In general, one may train a binomial output associated with that stopping variable, for example using a sigmoid output non-linearity and the cross-entropy loss, i.e., again negative log-likelihood for the event "end of the sequence". Another kind of solution is to model the integer T itself, through any reasonable parametric distribution, and use the number of time steps left (and possibly the number of time steps since the beginning of the sequence) as extra inputs at each time step. Thus we would have decomposed P(x_1, . . . , x_T) into P(T) and
Figure 10.7: A generative recurrent neural network modeling P (x 1 , . . . , x T ), able to generate sequences from this distribution. Each element x t of the observed sequence serves both as input (for the current time step) and as target (for the previous time step). The output ot encodes the parameters of a conditional distribution P (x t+1 | x1 , . . . , x t) = P (x t+1 | o t) for xt+1 , given the past sequence x1 . . . , xt . The loss L t is the negative log-likelihood associated with the output prediction (or more generally, distribution parameters) ot , when the actual observed target is xt+1 . In training mode, one measures and minimizes the sum of the losses over observed sequence(s) x. In generative mode, xt is sampled from the conditional distribution P (xt+1 | x 1, . . . , x t) = P (x t+1 | ot ) (dashed arrows) and then that generated sample x t+1 is fed back as input for computing the next state st+1 , the next output ot+1 , and generating the next sample x t+2, etc.
P(x_1, . . . , x_T | T). In general, one must therefore keep in mind that in order to fully generate a sequence we must not only generate the x_t's, but also the sequence length T, either implicitly through a series of continue/stop decisions (or a special "end-of-sequence" symbol), or explicitly through modeling the distribution of T itself as an integer random variable. If we take the RNN equations of the previous section (Eqs. 10.4 and 10.5), they could correspond to a generative RNN if we simply make the target y_t equal to the next input x_{t+1} (and because the outputs are the result of a softmax, it must be that the input sequence is a sequence of symbols, i.e., x_t is a symbol or bounded integer). Other types of data can clearly be modeled in a similar way, following the discussions about the encoding of outputs and the probabilistic interpretation of losses as negative log-likelihoods, in Sections 5.8 and 6.3.2.
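The following sketch illustrates generating a symbol sequence from such a model, using a special end-of-sequence symbol to decide when to stop; the untrained toy network and all names are assumptions made for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sequence(step, s0, start_symbol, eos_symbol, max_len=100):
    """Sample a symbol sequence from a generative RNN. `step` is assumed to
    map (state, previous symbol) to (next state, probability vector over
    symbols); generation stops when the end-of-sequence symbol is drawn."""
    s, x, out = s0, start_symbol, []
    for _ in range(max_len):
        s, p = step(s, x)
        x = int(rng.choice(len(p), p=p))   # feed the sample back as next input
        if x == eos_symbol:
            break
        out.append(x)
    return out

# toy step: an untrained random RNN over a vocabulary of 5 symbols (index 4 = EOS)
n_hid, n_vocab = 8, 5
W = 0.1 * rng.standard_normal((n_hid, n_hid))
U = 0.1 * rng.standard_normal((n_hid, n_vocab))
V = 0.1 * rng.standard_normal((n_vocab, n_hid))

def step(s, x):
    one_hot = np.eye(n_vocab)[x]
    s = np.tanh(W @ s + U @ one_hot)
    o = V @ s
    p = np.exp(o - o.max()); p /= p.sum()
    return s, p

print(sample_sequence(step, np.zeros(n_hid), start_symbol=0, eos_symbol=4))
```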
10.2.3 RNNs to Represent Conditional Probability Distributions
In general, as discussed in Section 6.3.2 (see especially the end of that section), when we can represent a parametric probability distribution P(y | ω), we can make it conditional by making ω a function of the desired conditioning variable: P(y | ω = f(x)). In the case of an RNN, this can be achieved in different ways, and we review here the most common and obvious choices. If x is a fixed-size vector, then we can simply make it an extra input of the RNN that generates the y sequence. Some common ways of providing an extra input to an RNN are:

1. as an extra input at each time step, or
2. as the initial state s_0, or
3. both.

In general, one may need to add extra parameters (and parametrization) to map x into the "extra bias" going either into only s_0, into the other s_t (t > 0), or into both. The first (and most commonly used) approach is illustrated in Figure 10.8. As an example, we could imagine that x is encoding the identity of a phoneme and the identity of a speaker, and that y represents an acoustic sequence corresponding to that phoneme, as pronounced by that speaker. Consider the case where the input x is a sequence of the same length as the output sequence y, and the y_t's are independent of each other when the past
Figure 10.8: A conditional generative recurrent neural network maps a fixed-length vector x into a distribution over sequences Y. Each element y_t of the observed output sequence serves both as input (for the current time step) and, during training, as target (for the previous time step). The generative semantics are the same as in the unconditional case (Fig. 10.7). The only difference is that the state is now conditioned on the input x, and the same parameters (the weight matrix R in the figure) are used at every time step to parametrize that dependency. Although this was not discussed in Fig. 10.7, in both figures one should note that the length of the sequence must also be generated (unless known in advance). This could be done by a special binary output unit that encodes the fact that the next output is the last.
Figure 10.9: A conditional generative recurrent neural network mapping a variable-length sequence x into a distribution over sequences y of the same length. This architecture assumes that the y t ’s are causally related to the xt ’s, i.e., that we want to predict the yt ’s only using the past x t ’s. Note how the prediction of yt+1 is based on both the past x’s and the past y’s. The dashed arrows indicate that y t can be generated by sampling from the output distribution ot−1 . When yt is clamped (known), it is used as a target in the loss L t−1 which measures the log-probability that yt would be sampled from the distribution o t−1.
input sequence is given, i.e., P(y_t | y_{t−1}, . . . , y_1, x) = P(y_t | x_t, x_{t−1}, . . . , x_1). We therefore have a causal relationship between the x_t's and the predictions of the y_t's, in addition to the independence of the y_t's, given x. Under these (pretty strong) assumptions, we can return to Fig. 10.3 and interpret the t-th output o_t as parameters for a conditional distribution for y_t, given x_t, x_{t−1}, . . . , x_1. If we want to remove the conditional independence assumption, we can do so by making the past y_t's inputs into the state as well. That situation is illustrated in Fig. 10.9.
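Below is a minimal sketch of the state update for such conditional RNNs: the previously generated or observed outputs are fed back through a matrix U, and a conditioning input enters through a matrix R. If every entry of xs is the same fixed vector, this corresponds to the setting of Fig. 10.8; if xs is a per-step sequence x_t, it corresponds to Fig. 10.9. Parameter names and shapes are illustrative assumptions.

```python
import numpy as np

def conditional_rnn_states(xs, y_prev, W, U, R, b, s0):
    """State updates for an RNN whose state depends both on the fed-back
    previous outputs (through U) and on conditioning inputs (through R)."""
    s, states = s0, []
    for x, y in zip(xs, y_prev):
        s = np.tanh(b + W @ s + U @ y + R @ x)
        states.append(s)
    return states

rng = np.random.default_rng(0)
n_hid, n_y, n_x, T = 6, 3, 4, 5
W = 0.1 * rng.standard_normal((n_hid, n_hid))
U = 0.1 * rng.standard_normal((n_hid, n_y))
R = 0.1 * rng.standard_normal((n_hid, n_x))
b, s0 = np.zeros(n_hid), np.zeros(n_hid)
x_fixed = rng.standard_normal(n_x)                 # Fig. 10.8: same x at every step
y_prev = [rng.standard_normal(n_y) for _ in range(T)]
states = conditional_rnn_states([x_fixed] * T, y_prev, W, U, R, b, s0)
print(len(states), states[-1].shape)
```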
Figure 10.10: Computation of a typical bidirectional recurrent neural network, meant to learn to map input sequences x to target sequences y, with loss L t at each step t. The h recurrence propagates information forward in time (towards the right) while the g recurrence propagates information backward in time (towards the left). Thus at each point t, the output units ot can benefit from a relevant summary of the past in its ht input and from a relevant summary of the future in its gt input.
10.3 Bidirectional RNNs
All of the recurrent networks we have considered up to now have a "causal" structure, meaning that the state at time t only captures information from the
past, x_1, . . . , x_t. However, in many applications we want to output at time t a prediction regarding an output which may depend on the whole input sequence. For example, in speech recognition, the correct interpretation of the current sound as a phoneme may depend on the next few phonemes because of co-articulation and potentially may even depend on the next few words because of the linguistic dependencies between nearby words: if there are two interpretations of the current word that are both acoustically plausible, we may have to look far into the future (and the past) to disambiguate them. This is also true of handwriting recognition and many other sequence-to-sequence learning tasks. Bidirectional recurrent neural networks (or bidirectional RNNs) were invented to address that need (Schuster and Paliwal, 1997). They have been extremely successful (Graves, 2012) in applications where that need arises, such as handwriting (Graves et al., 2008; Graves and Schmidhuber, 2009), speech recognition (Graves and Schmidhuber, 2005; Graves et al., 2013) and bioinformatics (Baldi et al., 1999). As the name suggests, the basic idea behind bidirectional RNNs is to combine a forward-going RNN and a backward-going RNN. Figure 10.10 illustrates the typical bidirectional RNN, with h_t standing for the state of the forward-going RNN and g_t standing for the state of the backward-going RNN. This allows the units o_t to compute a representation that depends on both the past and the future but is most sensitive to the input values around time t, without having to specify a fixed-size window around t (as one would have to do with a feedforward network, a convolutional network, or a regular RNN with a fixed-size look-ahead buffer). This idea can be naturally extended to 2-dimensional input, such as images, by having four RNNs, each one going in one of the four directions: up, down, left, right. At each point (i, j) of a 2-D grid, an output o_{i,j} could then compute a representation that would capture mostly local information but could also depend on long-range inputs, if the RNNs are able to learn to carry that information.
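A minimal sketch of the bidirectional idea follows: run one RNN forward and one backward over the same sequence and concatenate their states at each position, so that each output position can depend on both past and future inputs. The step functions here are toy stand-ins for whatever recurrent cell is actually used.

```python
import numpy as np

def birnn(xs, fwd_step, bwd_step, h0, g0):
    """Forward RNN (h) and backward RNN (g) over the same inputs; each output
    position gets the concatenation of the two states."""
    T = len(xs)
    h, g = [None] * T, [None] * T
    state = h0
    for t in range(T):                  # forward in time
        state = fwd_step(state, xs[t])
        h[t] = state
    state = g0
    for t in reversed(range(T)):        # backward in time
        state = bwd_step(state, xs[t])
        g[t] = state
    return [np.concatenate([h[t], g[t]]) for t in range(T)]

rng = np.random.default_rng(0)
n_in, n_hid, T = 3, 4, 6
Wf = 0.1 * rng.standard_normal((n_hid, n_hid)); Uf = 0.1 * rng.standard_normal((n_hid, n_in))
Wb = 0.1 * rng.standard_normal((n_hid, n_hid)); Ub = 0.1 * rng.standard_normal((n_hid, n_in))
fwd = lambda h, x: np.tanh(Wf @ h + Uf @ x)
bwd = lambda g, x: np.tanh(Wb @ g + Ub @ x)
xs = [rng.standard_normal(n_in) for _ in range(T)]
outs = birnn(xs, fwd, bwd, np.zeros(n_hid), np.zeros(n_hid))
print(outs[0].shape)   # (8,)
```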
10.4 Deep Recurrent Networks
The computation in most RNNs can be decomposed into three blocks of parameters and associated transformations:

1. from input to hidden state,
2. from previous hidden state to next hidden state, and
3. from hidden state to output,

where the first two are actually brought together to map the input and previous state into the next state. With the vanilla RNN architecture (such as in Figure 10.3), each of these three blocks is associated with a single weight matrix.
In other words, when the network is unfolded, each of these blocks corresponds to a shallow transformation, i.e., a single layer within a deep MLP.
Figure 10.11: A recurrent neural network can be made deep in many ways. First, the hidden recurrent state can be broken down into groups organized hierarchically (left). Second, deeper computation (e.g., an MLP in the figure) can be introduced in the input-to-hidden, hidden-to-hidden, and hidden-to-output parts (middle). However, this may lengthen the shortest path linking different time steps, but this can be mitigated by introducing skip connections (right). Figures from Pascanu et al. (2014a) with permission.
Would it be advantageous to introduce depth in each of these operations? Experimental evidence (Graves et al., 2013; Pascanu et al., 2014a) strongly suggests so, and this is in agreement with the idea that we need enough depth in order to perform the required mappings. See also (Schmidhuber, 1992; El Hihi and Bengio, 1996; Jaeger, 2007a) for earlier work on deep RNNs. El Hihi and Bengio (1996) first introduced the idea of decomposing the hidden state of an RNN into multiple groups of units that would operate at different time scales. Graves et al. (2013) were the first to show a significant benefit of decomposing the state of an RNN into groups of hidden units, with a restricted connectivity between the groups, e.g., as in Figure 10.11 (left). Indeed, if there were no restriction at all and no pressure for some units to represent a slower time scale, then having N groups of M hidden units would be equivalent to having a single group of NM hidden units. Koutnik et al. (2014) showed how the multiple time scales idea from El Hihi and Bengio (1996) can be advantageous on several sequential learning tasks: each group of hidden units is updated at a different multiple of the time step index. We can also think of the lower layers in this hierarchy as playing a role in transforming the raw input into a representation that is more appropriate, at the higher levels of the hidden state. Pascanu et al. (2014a) go a step further and propose to have a separate MLP (possibly deep) for each of the three blocks enumerated above, as illustrated in Figure 10.11 (middle). It makes sense to
allocate enough capacity in each of these three steps, but having a deep state-to-state transition may also hurt: it makes the shortest path from an event at time t to an event at time t′ > t longer. For example, if a one-hidden-layer MLP is used for the state-to-state transition, we have doubled the length of that path, compared with a vanilla RNN. However, as argued by Pascanu et al. (2014a), this can be mitigated by introducing skip connections in the hidden-to-hidden path, as illustrated in Figure 10.11 (right).
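The following toy sketch shows a deep (one-hidden-layer MLP) state-to-state transition with an added skip connection from s_{t−1} directly to s_t, which keeps the shortest path between time steps short. This is only an illustration of the idea under assumed parameter names, not the exact architecture of Pascanu et al. (2014a).

```python
import numpy as np

def deep_transition(s_prev, x, params):
    """One hidden-to-hidden transition implemented as a one-hidden-layer MLP,
    plus a skip (direct) connection from s_{t-1} to s_t."""
    W1, W2, U, Wskip, b1, b2 = params
    z = np.tanh(b1 + W1 @ s_prev + U @ x)          # intermediate MLP layer
    return np.tanh(b2 + W2 @ z + Wskip @ s_prev)   # skip connection added

rng = np.random.default_rng(0)
n_hid, n_mid, n_in = 4, 8, 3
params = (0.1 * rng.standard_normal((n_mid, n_hid)),
          0.1 * rng.standard_normal((n_hid, n_mid)),
          0.1 * rng.standard_normal((n_mid, n_in)),
          0.1 * rng.standard_normal((n_hid, n_hid)),
          np.zeros(n_mid), np.zeros(n_hid))
s = np.zeros(n_hid)
for _ in range(5):
    s = deep_transition(s, rng.standard_normal(n_in), params)
print(s.shape)
```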
Figure 10.12: A recursive network has a computational graph that generalizes that of the recurrent network from a chain to a tree. In the figure, a variable-size sequence x1 , x2 , . . . can be mapped to a fixed-size representation (the output o), with a fixed number of parameters (e.g. the weight matrices U , V , W ). The figure illustrates a supervised learning case in which some target y is provided which is associated with the whole sequence.
10.5 Recursive Neural Networks
Recursive networks represent yet another generalization of recurrent networks, with a different kind of computational graph, which this time looks like a tree. The typical computational graph for a recursive network is illustrated in Figure 10.12. Recursive neural networks were introduced by Pollack (1990) and their potential use for learning to reason was nicely laid down by Bottou (2011). Recursive networks have been successfully applied in processing data structures as input to neural nets (Frasconi et al., 1997, 1998), in natural language processing (Socher et al., 2011a,c, 2013) as well as in computer vision (Socher et al., 2011b). One clear advantage of recursive nets over recurrent nets is that for a sequence of length N, the depth can be drastically reduced from N to O(log N). An open question is how to best structure the tree, though. One option is to have a tree structure which does not depend on the data, e.g., a balanced binary tree. Another is to use an external method, such as a natural language parser (Socher et al., 2011a, 2013). Ideally, one would like the learner itself to discover and infer the tree structure that is appropriate for any given input, as suggested in Bottou (2011). Many variants of the recursive net idea are possible. For example, in Frasconi et al. (1997, 1998), the data is associated with a tree structure in the first place, and inputs and/or targets are associated with each node of the tree. The computation performed by each node does not have to be the traditional artificial neuron computation (affine transformation of all inputs followed by a monotone non-linearity). For example, Socher et al. (2013) propose using tensor operations and bilinear forms, which have previously been found useful to model relationships between concepts (Weston et al., 2010; Bordes et al., 2012) when the concepts are represented by continuous vectors (embeddings).
10.6 Auto-Regressive Networks
One of the basic ideas behind recurrent networks is that of directed graphical models with a twist: we decompose a probability distribution as a product of conditionals without explicitly cutting any arc in the graphical model, but instead reducing complexity by parametrizing the transition probability in a recursive way that requires a fixed (and not exponential) number of parameters, due to a form of parameter sharing (see Section 7.8 for an introduction to the concept). Instead of reducing P(x_t | x_{t−1}, . . . , x_1) to something like P(x_t | x_{t−1}, . . . , x_{t−k}) (assuming the k previous ones as the parents), we keep the full dependency but we parametrize the conditional efficiently in a way that does not grow with t, exploiting parameter sharing. When the above conditional probability distribution
is in some sense stationary, i.e., the relation between the past and the next observation does not depend on t, only on the values of the past observations, then this form of parameter sharing makes a lot of sense, and for recurrent nets it allows to use the same model for sequences of different lengths. Auto-regressive networks are similar to recurrent networks in the sense that we also decompose a joint probability over the observed variables as a product of conditionals of the form P(x_t | x_{t−1}, . . . , x_1) but we drop the form of parameter sharing that makes these conditionals all share the same parametrization. This makes sense when the variables are not elements of a translation-invariant sequence, but instead form an arbitrary tuple without any particular ordering that would correspond to a translation-invariant form of relationship between variables at position k and variables at position k′. Such models have been called fully-visible Bayes networks (Frey et al., 1996) and used successfully in many forms, first with logistic regression for each conditional distribution (Frey, 1998) and then with neural networks (Bengio and Bengio, 2000b; Larochelle and Murray, 2011). In some forms of auto-regressive networks, such as NADE (Larochelle and Murray, 2011), described in Section 10.6.3 below, we can re-introduce a form of parameter sharing that is different from the one found in recurrent networks, but that brings both a statistical advantage (fewer parameters) and a computational advantage (less computation). Although we drop the sharing over time, as we see below in Section 10.6.2, using a deep learning concept of reuse of features, we can share features that have been computed for predicting x_{t−k} with the sub-network that predicts x_t.
Figure 10.13: An auto-regressive network predicts the i-th variable from the i−1 previous ones. Left: corresponding graphical model (which is the same as that of a recurrent network). Right: corresponding computational graph, in the case of the logistic auto-regressive network, where each prediction has the form of a logistic regression, i.e., with i free parameters (for the i−1 weights associated with the i−1 inputs, and an offset parameter).
10.6.1 Logistic Auto-Regressive Networks
Let us first consider the simplest auto-regressive network, without hidden units, and hence no sharing at all. Each P(x_t | x_{t−1}, . . . , x_1) is parametrized as a linear model, e.g., a logistic regression. This model was introduced by Frey (1998) and has O(T²) parameters when there are T variables to be modeled. It is illustrated in Figure 10.13, showing both the graphical model (left) and the computational graph (right). A clear disadvantage of the logistic auto-regressive network is that one cannot easily increase its capacity in order to capture more complex data distributions. It defines a parametric family of fixed capacity, like the linear regression, the logistic regression, or the Gaussian distribution. In fact, if the variables are continuous, one gets a linear auto-regressive model, which is thus another way to formulate a Gaussian distribution, i.e., only capturing pairwise interactions between the observed variables.
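As an illustration, here is a minimal sketch of the log-likelihood computation of a logistic auto-regressive model over binary variables, with one logistic regression per conditional and O(T²) parameters in total; the parameter layout is an assumption made for clarity.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_autoregressive_logprob(x, weights, biases):
    """Log-probability of a binary vector x under a fully-visible logistic
    auto-regressive model: P(x_i | x_1..x_{i-1}) is a logistic regression on
    the previous variables. weights[i] has i entries (empty for i = 0)."""
    logp = 0.0
    for i in range(len(x)):
        p_i = sigmoid(biases[i] + weights[i] @ x[:i]) if i > 0 else sigmoid(biases[i])
        logp += np.log(p_i if x[i] == 1 else 1.0 - p_i)
    return logp

rng = np.random.default_rng(0)
T = 4
weights = [rng.standard_normal(i) for i in range(T)]   # weights[0] is empty
biases = rng.standard_normal(T)
x = np.array([1, 0, 1, 1])
print(logistic_autoregressive_logprob(x, weights, biases))
```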
Figure 10.14: A neural auto-regressive network predicts the i-th variable x i from the i− 1 previous ones, but is parametrized so that features (groups of hidden units denoted hi ) that are functions of x 1, . . . , x i can be reused in predicting all of the subsequent variables xi+1 , x i+2, . . ..
10.6.2 Neural Auto-Regressive Networks
Neural auto-regressive networks have the same left-to-right graphical model as logistic auto-regressive networks (Figure 10.13, left) but a different parametrization that is at once more powerful (allowing to extend the capacity as needed and
approximate any joint distribution) and can improve generalization by introducing a parameter sharing and feature sharing principle common to deep learning in general. The first paper on neural auto-regressive networks by Bengio and Bengio (2000b) (see also Bengio and Bengio (2000a) for the more extensive journal version) was motivated by the objective to avoid the curse of dimensionality arising in traditional non-parametric graphical models, sharing the same structure as Figure 10.13 (left). In the non-parametric discrete distribution models, each conditional distribution is represented by a table of probabilities, with one entry and one parameter for each possible configuration of the variables involved. By using a neural network instead, two advantages are obtained:

1. The parametrization of each P(x_t | x_{t−1}, . . . , x_1) by a neural network with (t − 1) × k inputs and k outputs (if the variables are discrete and take k values, encoded one-hot) allows to estimate the conditional probability without requiring an exponential number of parameters (and examples), yet still allowing to capture high-order dependencies between the random variables.

2. Instead of having a different neural network for the prediction of each x_t, the left-to-right connectivity illustrated in Figure 10.14 allows to merge all the neural networks into one. Equivalently, it means that the hidden layer features computed for predicting x_t can be reused for predicting x_{t+k} (k > 0). The hidden units are thus organized in groups that have the particularity that all the units in the t-th group only depend on the input values x_1, . . . , x_t. In fact the parameters used to compute these hidden units are jointly optimized to help the prediction of all the variables x_t, x_{t+1}, x_{t+2}, . . .. This is an instance of the reuse principle that makes multi-task learning and transfer learning successful with neural networks and deep learning in general (see Sections 7.12 and 16.2).

Each P(x_t | x_{t−1}, . . . , x_1) can represent a conditional distribution by having outputs of the neural network predict parameters of the conditional distribution of x_t, as discussed in Section 6.3.2. Although the original neural auto-regressive networks were initially evaluated in the context of purely discrete multivariate data (e.g., with a sigmoid output for the Bernoulli case or a softmax output for the multinoulli case), it is straightforward to extend such models to continuous variables or joint distributions involving both discrete and continuous variables, as for example with RNADE introduced below (Uria et al., 2013).
Figure 10.15: NADE (Neural Auto-regressive Density Estimator) is a neural auto-regressive network, i.e., the hidden units are organized in groups h_j so that only the inputs x_1, . . . , x_i participate in computing h_i and predicting P(x_j | x_{j−1}, . . . , x_1), for j > i. The particularity of NADE is the use of a particular weight sharing pattern: the same W′_{jki} = W_{ki} is shared (same color and line pattern in the figure) for all the weights outgoing from x_i to the k-th unit of any group j ≥ i. The vector (W_{1i}, W_{2i}, . . .) is denoted W_{:,i} here.
10.6.3 NADE
A very successful recent form of neural auto-regressive network was proposed by Larochelle and Murray (2011). The architecture is basically the same as for the original neural auto-regressive network of Bengio and Bengio (2000b) except for the introduction of a weight-sharing scheme, as illustrated in Figure 10.15. The parameters of the hidden units of different groups j are shared, i.e., the weights W′_{jki} from the i-th input x_i to the k-th element of the j-th group of hidden units h_{jk} (j ≥ i) are shared: W′_{jki} = W_{ki}, with (W_{1i}, W_{2i}, . . .) denoted W_{:,i} in Figure 10.15. This particular sharing pattern is motivated in Larochelle and Murray (2011) by the computations performed in the mean-field inference³ of an RBM, when only the first i inputs are given and one tries to infer the subsequent ones.

³ Here, unlike in Section 13.5, the inference is over some of the input variables that are missing, given the observed ones.
This mean-field inference corresponds to running a recurrent network with shared weights, and the first step of that inference is the same as in NADE. The only difference is that with the proposed NADE, the output weights are not forced to be simply transpose values of the input weights (they are not tied). One could imagine actually extending this procedure to not just one time step of the mean-field recurrent inference but to k steps, as in Raiko et al. (2014). Although the neural auto-regressive networks and NADE were originally proposed to deal with discrete distributions, they can in principle be generalized to continuous ones by replacing the conditional discrete probability distributions (for P(x_j | x_{j−1}, . . . , x_1)) by continuous ones and following general practice to predict continuous random variables with neural networks (see Section 6.3.2) using the log-likelihood framework. A fairly generic way of parametrizing a continuous density is as a Gaussian mixture, and this avenue has been successfully evaluated for the neural auto-regressive architecture with RNADE (Uria et al., 2013). One interesting point to note is that stochastic gradient descent can be numerically ill-behaved due to the interactions between the conditional means and the conditional variances (especially when the variances become small). Uria et al. (2013) have used a heuristic to rescale the gradient on the component means by the associated standard deviation, which seems to have helped optimizing RNADE. Another very interesting extension of the neural auto-regressive architectures gets rid of the need to choose an arbitrary order for the observed variables (Murray and Larochelle, 2014). The idea is to train the network to be able to cope with any order by randomly sampling orders and providing the information to hidden units specifying which of the inputs are observed (on the right, conditioning side of the bar) and which are to be predicted and are thus considered missing (on the left side of the conditioning bar). This is nice because it allows one to use a trained auto-regressive network to perform any inference (i.e., predict or sample from the probability distribution over any subset of variables given any other subset) extremely efficiently. Finally, since many orders are possible, the joint probability of some set of variables can be computed in many ways (n! for n variables), and this can be exploited to obtain a more robust probability estimation and better log-likelihood, by simply averaging the log-probabilities predicted by different randomly chosen orders. In the same paper, the authors propose to consider deep versions of the architecture, but unfortunately that immediately makes computation as expensive as in the original neural auto-regressive network (Bengio and Bengio, 2000b). The first layer and the output layer can still be computed in O(nh) multiply-add operations, as in the regular NADE, where h is the number of hidden units (the size of the groups h_i in Figures 10.15 and 10.14), whereas it is O(n²h) in Bengio and Bengio (2000b). However, for the other hidden layers,
the computation is O(n²h²) if every "previous" group at layer l participates in predicting the "next" group at layer l + 1, assuming n groups of h hidden units at each layer. Making the i-th group at layer l + 1 only depend on the i-th group at layer l, as in Murray and Larochelle (2014), reduces the computation to O(nh²), which is still h times worse than the regular NADE.
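To make the O(nh) computation of the basic (single hidden layer) NADE concrete, here is a minimal sketch in which the hidden pre-activation is updated incrementally from one conditional to the next, which is exactly what the weight sharing enables; parameter names and shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def nade_logprob(x, W, V, b, c):
    """Log-probability of a binary vector x under a NADE-style model.
    The hidden pre-activation for predicting x_i reuses the one for x_{i-1}
    (a += W[:, i-1] * x_{i-1}), which makes the whole pass O(n h) rather
    than O(n^2 h). W: (n_hidden, n_vars) shared input weights,
    V: (n_vars, n_hidden) output weights, b: output biases, c: hidden biases."""
    n_vars = len(x)
    a = c.copy()            # hidden pre-activation, updated incrementally
    logp = 0.0
    for i in range(n_vars):
        h = sigmoid(a)
        p_i = sigmoid(b[i] + V[i] @ h)    # P(x_i = 1 | x_1..x_{i-1})
        logp += np.log(p_i if x[i] == 1 else 1.0 - p_i)
        a += W[:, i] * x[i]               # reuse computation for the next conditional
    return logp

rng = np.random.default_rng(0)
n_vars, n_hidden = 6, 10
W = 0.1 * rng.standard_normal((n_hidden, n_vars))
V = 0.1 * rng.standard_normal((n_vars, n_hidden))
b, c = np.zeros(n_vars), np.zeros(n_hidden)
x = rng.integers(0, 2, size=n_vars)
print(nade_logprob(x, W, V, b, c))
```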
10.7 Facing the Challenge of Long-Term Dependencies
The mathematical challenge of learning long-term dependencies in recurrent networks was introduced in Section 8.2.5. The basic problem is that gradients propagated over many stages tend to either vanish (most of the time) or explode (rarely, but with much damage to the optimization). Even if we assume that the parameters are such that the recurrent network is stable (can store memories, with gradients not exploding), the difficulty with long-term dependencies arises from the exponentially smaller weights given to long-term interactions (involving the multiplication of many Jacobians) compared to short-term ones. See Hochreiter (1991); Doya (1993); Bengio et al. (1994); Pascanu et al. (2013a) for a deeper treatment. In this section we discuss various approaches that have been proposed to alleviate this difficulty with learning long-term dependencies.
10.7.1 Echo State Networks: Choosing Weights to Make Dynamics Barely Contractive
The recurrent weights and input weights of a recurrent network are those that define the state representation captured by the model, i.e., how the state s_t (hidden units vector) at time t (Eq. 10.2) captures and summarizes information from the previous inputs x_1, x_2, . . . , x_t. Since learning the recurrent and input weights is difficult, one option that has been proposed (Jaeger and Haas, 2004; Jaeger, 2007b; Maass et al., 2002) is to set those weights such that the recurrent hidden units do a good job of capturing the history of past inputs, and only learn the output weights. This is the idea that was independently proposed for Echo State Networks or ESNs (Jaeger and Haas, 2004; Jaeger, 2007b) and Liquid State Machines (Maass et al., 2002). The latter is similar, except that it uses spiking neurons (with binary outputs) instead of the continuous-valued hidden units used for ESNs. Both ESNs and liquid state machines are termed reservoir computing (Lukoševičius and Jaeger, 2009) to denote the fact that the hidden units form a reservoir of temporal features which may capture different aspects of the history of inputs.
Sutskever’s empirical observation. More recently, it has been shown that the techniques used to set the weights in ESNs could be used to initialize the weights in a fully trainable recurrent network (e.g., trained using back-propagation through time), helping to learn long-term dependencies (Sutskever, 2012; Sutskever et al., 2013). In addition to setting the spectral radius to 1.2, Sutskever sets the recurrent weight matrix to be initially sparse, with only 15 non-zero input weights per hidden unit. Note that when some eigenvalues of the Jacobian are exactly 1, information can be kept in a stable way, and back-propagated gradients neither vanish nor explode. The next two sections show methods to make some paths in the unfolded graph correspond to “multiplying by 1” at each step, i.e., keeping information for a very long time.
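A minimal sketch of the reservoir idea discussed here: fix a sparse recurrent weight matrix rescaled to a chosen spectral radius, run it over the inputs, and keep only the collected states as features for a separately trained linear readout. The particular constants and helper names are assumptions made for illustration.

```python
import numpy as np

def make_reservoir(n_hidden, n_in, spectral_radius=1.2, nonzeros_per_unit=15, seed=0):
    """Build fixed (untrained) recurrent and input weights: a sparse recurrent
    matrix rescaled to the desired spectral radius. Only the readout weights
    would be learned afterwards (e.g., by ridge regression on the states)."""
    rng = np.random.default_rng(seed)
    W = np.zeros((n_hidden, n_hidden))
    for i in range(n_hidden):
        idx = rng.choice(n_hidden, size=min(nonzeros_per_unit, n_hidden), replace=False)
        W[i, idx] = rng.standard_normal(len(idx))
    radius = max(abs(np.linalg.eigvals(W)))
    W *= spectral_radius / radius
    U = 0.1 * rng.standard_normal((n_hidden, n_in))
    return W, U

def run_reservoir(W, U, xs):
    s = np.zeros(W.shape[0])
    states = []
    for x in xs:
        s = np.tanh(W @ s + U @ x)
        states.append(s)
    return np.array(states)          # features for a trained linear readout

W, U = make_reservoir(n_hidden=50, n_in=3)
xs = np.random.default_rng(1).standard_normal((20, 3))
print(run_reservoir(W, U, xs).shape)   # (20, 50)
```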
10.7.2 Combining Short and Long Paths in the Unfolded Flow Graph
An old idea that has been proposed to deal with the difficulty of learning longterm dependencies is to use recurrent connections with long delays. Whereas the ordinary recurrent connections are associated with a delay of 1 (relating the state at t with the state at t + 1), it is possible to construct recurrent networks with longer delays (Bengio, 1991), following the idea of incorporating delays in feedforward neural networks (Lang and Hinton, 1988) in order to capture temporal structure (with Time-Delay Neural Networks, which are the 1-D predecessors of Convolutional Neural Networks, discussed in Chapter 9).
Figure 10.16: A recurrent neural networks with delays, in which some of the connections reach back in time to more than one time step. Left: connectivity of the recurrent net, with square boxes indicating the number of time delays associated with a connection. Right: unfolded recurrent network. In the figure there are regular recurrent connections with a delay of 1 time step (W 1) and recurrent connections with a delay of 3 time steps (W3 ). The advantage of these longer-delay connections is that they allow to connect past states to future states through shorter paths (3 times shorter, here), going through these longer delay connections (in red).
As we have seen in Section 8.2.5, gradients will vanish exponentially with respect to the number of time steps. If we have recurrent connections with a time-delay of d, then instead of the vanishing or explosion going as O(λ^T) over T time steps (where λ is the largest eigenvalue of the Jacobians ∂s_t/∂s_{t−1}), the unfolded recurrent network now has paths through which gradients grow as O(λ^{T/d}) because the number of effective steps is T/d. This allows the learning algorithm to capture longer dependencies, although not all long-term dependencies may be well represented in this way. This idea was first explored in Lin et al. (1996) and is illustrated in Figure 10.16.
10.7.3 Leaky Units and a Hierarchy of Different Time Scales
A related idea in order to obtain paths on which the product of derivatives is close to 1 is to have units with linear self-connections and a weight near 1 on these connections. The strength of that linear self-connection corresponds to a time scale and thus we can have different hidden units which operate at different time scales (Mozer, 1992). Depending on how close to 1 these self-connection weights are, information can travel forward and gradients backward with a different rate of "forgetting" or contraction to 0, i.e., a different time scale. One can view this idea as a smooth variant of the idea of having different delays in the connections presented in the previous section. Such ideas were proposed in Mozer (1992); El Hihi and Bengio (1996), before a closely related idea discussed in the next
section of gating these self-connections in order to let the network control at what rate each unit should be contracting. The idea of leaky units with a self-connection actually arises naturally when considering a continuous-time recurrent neural network such as

ṡ_i τ_i = −s_i + σ(b_i + W s + U x)

where σ is the neural non-linearity (e.g., sigmoid or tanh), τ_i > 0 is a time constant and ṡ_i indicates the temporal derivative of unit s_i. A related equation is

ṡ_i τ_i = −s_i + (b_i + W σ(s) + U x)

where the state vector s (with elements s_i) now represents the pre-activation of the hidden units. When discretizing in time such equations (which changes the meaning of τ), one gets

s_{t+1,i} − s_{t,i} = − s_{t,i}/τ_i + (1/τ_i) σ(b_i + W s_t + U x_t)
s_{t+1,i} = (1 − 1/τ_i) s_{t,i} + (1/τ_i) σ(b_i + W s_t + U x_t).    (10.7)
We see that the new value of the state is a convex linear combination of the old value and of the value computed based on current inputs and recurrent weights, if 1 ≤ τ_i < ∞. When τ_i = 1, there is no linear self-recurrence, only the non-linear update which we find in ordinary recurrent networks. When τ_i > 1, this linear recurrence allows gradients to propagate more easily. When τ_i is large, the state changes very slowly, integrating the past values associated with the input sequence. By associating different time scales τ_i with different units, one obtains different paths corresponding to different forgetting rates. Those time constants can be fixed manually (e.g., by sampling from a distribution of time scales) or can be learned as free parameters, and having such leaky units at different time scales appears to help with long-term dependencies (Mozer, 1992; Pascanu et al., 2013a). Note that the time constant τ thus corresponds to a self-weight of (1 − 1/τ), but without any non-linearity involved in the self-recurrence. Consider the extreme case where τ → ∞: because the leaky unit just averages contributions from the past, the contribution of each time step is equivalent and there is no associated vanishing or exploding effect. An alternative is to avoid the factor 1/τ_i in front of σ(b_i + W s_t + U x_t), thus making the state sum all the past values when τ_i is large, instead of averaging them.
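The discretized leaky update of Eq. 10.7 is straightforward to implement; the sketch below uses a per-unit vector of time constants to build a small hierarchy of time scales (tanh is used as the squashing non-linearity, an assumption).

```python
import numpy as np

def leaky_update(s, x, tau, W, U, b):
    """Discretized leaky-unit update of Eq. 10.7: the new state is a convex
    combination of the old state and the freshly computed value, with a
    per-unit time constant tau_i >= 1 controlling how slowly unit i forgets."""
    fresh = np.tanh(b + W @ s + U @ x)
    return (1.0 - 1.0 / tau) * s + (1.0 / tau) * fresh

rng = np.random.default_rng(0)
n_hid, n_in = 6, 3
W = 0.1 * rng.standard_normal((n_hid, n_hid))
U = 0.1 * rng.standard_normal((n_hid, n_in))
b = np.zeros(n_hid)
# a hierarchy of time scales: some fast units (tau = 1), some slow ones (tau = 10)
tau = np.array([1.0, 1.0, 2.0, 5.0, 10.0, 10.0])
s = np.zeros(n_hid)
for _ in range(8):
    s = leaky_update(s, rng.standard_normal(n_in), tau, W, U, b)
print(s)
```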
10.7.4 The Long-Short-Term-Memory Architecture and Other Gated RNNs
Whereas in the previous section we considered creating paths where derivatives neither vanish nor explode too quickly by introducing self-loops, leaky units have self-weights that are not context-dependent: they are fixed, or learned, but remain constant during a whole test sequence.
Figure 10.17: Block diagram of the LSTM recurrent network “cell”. Cells are connected recurrently to each other, replacing the usual hidden units of ordinary recurrent networks. An input feature is computed with a regular artificial neuron unit, and its value can be accumulated into the state if the sigmoidal input gate allows it. The state unit has a linear self-loop whose weight is controlled by the forget gate. The output of the cell can be shut off by the output gate. All the gating units have a sigmoid non-linearity, while the input unit can have any squashing non-linearity. The state unit can also be used as extra input to the gating units. The black square indicates a delay of 1 time unit.
It is worthwhile considering the role played by leaky units: they allow the network to accumulate information (e.g., evidence for a particular feature or category) over a long duration. However, once that information gets used, it might be useful for the neural network to forget the old state. For example, if a sequence is made of subsequences and we want a leaky unit to accumulate evidence inside each subsequence, we need a mechanism to forget the old state by setting it to zero and starting to count afresh. Instead of manually deciding when to clear the state, we want the neural network to learn to decide when to do it.
LSTM

This clever idea of conditioning the forgetting on the context is a core contribution of the Long-Short-Term-Memory (LSTM) algorithm (Hochreiter and Schmidhuber, 1997), described below. Several variants of the LSTM are found in the literature (Hochreiter and Schmidhuber, 1997; Graves, 2012; Graves et al., 2013; Graves, 2013; Sutskever et al., 2014a) but the principle is always to have a linear self-loop through which gradients can flow for long durations. By making the weight of this self-loop gated (controlled by another hidden unit), the time scale of integration can be changed dynamically (even for fixed parameters, but based on the input sequence). The LSTM has been found extremely successful in a number of applications, such as unconstrained handwriting recognition (Graves et al., 2009), speech recognition (Graves et al., 2013; Graves and Jaitly, 2014), handwriting generation (Graves, 2013), machine translation (Sutskever et al., 2014a), image to text conversion (captioning) (Kiros et al., 2014b; Vinyals et al., 2014b; Xu et al., 2015b) and parsing (Vinyals et al., 2014a). The LSTM block diagram is illustrated in Figure 10.17. The corresponding forward (state update) equations are as follows, in the case of the vanilla recurrent network architecture. Deeper architectures have been successfully used in Graves et al. (2013); Pascanu et al. (2014a). Instead of a unit that simply applies a squashing function on the affine transformation of inputs and recurrent units, LSTM networks have "LSTM cells". Each cell has the same inputs and outputs as a vanilla recurrent network, but has more parameters and a system of gating units that controls the flow of information. The most important component is the state unit s_t that has a linear self-loop similar to the leaky units described in the previous section, but where the self-loop weight (or the associated time constant) is controlled by a forget gate unit h^f_{t,i} (for time step t and cell i), that sets this weight to a value between 0 and 1 via a sigmoid unit:

h^f_{t,i} = sigmoid(b^f_i + Σ_j U^f_{ij} x_{t,j} + Σ_j W^f_{ij} h_{t,j}).    (10.8)
where x_t is the current input vector and h_t is the current hidden layer vector, containing the outputs of all the LSTM cells, and b^f, U^f, W^f are respectively the biases, input weights and recurrent weights for the forget gates. The LSTM cell internal state is thus updated as follows, following the pattern of Eq. 10.7, but with a conditional self-loop weight h^f_{t,i}:

s_{t+1,i} = h^f_{t,i} s_{t,i} + h^e_{t,i} σ(b_i + Σ_j U_{ij} x_{t,j} + Σ_j W_{ij} h_{t,j}).    (10.9)
where b, U and W are respectively the biases, input weights and recurrent weights into the LSTM cell, and the external input gate unit h^e_{t,i} is computed similarly to the
forget gate (i.e., with a sigmoid unit to obtain a gating value between 0 and 1), but with its own parameters:

h^e_{t,i} = sigmoid(b^e_i + Σ_j U^e_{ij} x_{t,j} + Σ_j W^e_{ij} h_{t,j}).    (10.10)
The output h_{t+1,i} of the LSTM cell can also be shut off, via the output gate h^o_{t,i}, which also uses a sigmoid unit for gating:

    h_{t+1,i} = \tanh(s_{t+1,i}) \, h^o_{t,i}
    h^o_{t,i} = \mathrm{sigmoid}\left( b^o_i + \sum_j U^o_{ij} x_{t,j} + \sum_j W^o_{ij} h_{t,j} \right),        (10.11)

which has parameters b^o, U^o, W^o for its biases, input weights and recurrent weights, respectively.
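To make Eqs. 10.8-10.11 concrete, the following is a minimal NumPy sketch of a single forward step of a layer of LSTM cells. The parameter layout, the dictionary of weights and the helper function are illustrative assumptions made for the example, not part of the definitions above; real implementations batch the gate computations and add many optimizations.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def lstm_step(x_t, h_t, s_t, p):
        """One forward step of an LSTM cell layer, following Eqs. 10.8-10.11.

        x_t: input vector at time t, shape (n_in,)
        h_t: previous cell outputs, shape (n_cells,)
        s_t: previous internal cell states, shape (n_cells,)
        p:   dict of parameters; for the forget ('f'), external input ('e') and
             output ('o') gates and for the cell input itself, a bias vector b,
             input weights U of shape (n_cells, n_in) and recurrent weights W
             of shape (n_cells, n_cells).
        """
        f = sigmoid(p['bf'] + p['Uf'] @ x_t + p['Wf'] @ h_t)   # forget gate, Eq. 10.8
        e = sigmoid(p['be'] + p['Ue'] @ x_t + p['We'] @ h_t)   # external input gate, Eq. 10.10
        # Candidate input; Eq. 10.9 uses the sigmoid squashing here
        # (tanh is another common choice in practice).
        g = sigmoid(p['b'] + p['U'] @ x_t + p['W'] @ h_t)
        s_next = f * s_t + e * g                                # state update, Eq. 10.9
        o = sigmoid(p['bo'] + p['Uo'] @ x_t + p['Wo'] @ h_t)   # output gate, Eq. 10.11
        h_next = np.tanh(s_next) * o                            # cell output, Eq. 10.11
        return h_next, s_next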
Among the variants, one can choose to use the cell state s_{t,i} as an extra input (with its weight) into the three gates of the i-th unit, as shown in Figure 10.17. This would require three additional parameters. LSTM networks have been shown to learn long-term dependencies more easily than vanilla recurrent architectures, first on artificial data sets designed for testing the ability to learn long-term dependencies (Bengio et al., 1994; Hochreiter and Schmidhuber, 1997; Hochreiter et al., 2000), then on challenging sequence processing tasks where state-of-the-art performance was obtained (Graves, 2012; Graves et al., 2013; Sutskever et al., 2014a).

Other Gated RNNs Which pieces of the LSTM architecture are actually necessary? What other successful architectures could be designed that allow the network to dynamically control the time scale and forgetting behavior of different units? Some answers to these questions are given by the recent work on gated RNNs, which were successfully used in reaching the MOSES state-of-the-art for English-to-French machine translation (Cho et al., 2014). The main difference with the LSTM is that a single gating unit simultaneously controls the forgetting factor and the decision to update the state unit, which is natural if we consider the continuous-time interpretation of the self-weight of the state, as in the equation for leaky units, Eq. 10.7. The update equations are the following:

    h_{t+1,i} = h^u_{t,i} h_{t,i} + (1 - h^u_{t,i}) \, \sigma\left( b_i + \sum_j U_{ij} x_{t,j} + \sum_j W_{ij} h^r_{t,j} h_{t,j} \right),        (10.12)
where h^u stands for the "update" gate and h^r for the "reset" gate. Their values are defined as usual:

    h^u_{t,i} = \mathrm{sigmoid}\left( b^u_i + \sum_j U^u_{ij} x_{t,j} + \sum_j W^u_{ij} h_{t,j} \right)        (10.13)
and

    h^r_{t,i} = \mathrm{sigmoid}\left( b^r_i + \sum_j U^r_{ij} x_{t,j} + \sum_j W^r_{ij} h_{t,j} \right).        (10.14)
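A minimal NumPy sketch of the gated update of Eqs. 10.12-10.14 (one step of one layer) is given below; the variable names and the use of the sigmoid squashing for the candidate simply mirror the equations above and are illustrative assumptions rather than prescriptions.

    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def gated_rnn_step(x_t, h_t, p):
        """One step of the gated RNN of Eqs. 10.12-10.14."""
        # Update gate (Eq. 10.13): interpolates between copying the old state
        # and writing the new candidate state.
        u = sigmoid(p['bu'] + p['Uu'] @ x_t + p['Wu'] @ h_t)
        # Reset gate (Eq. 10.14): selects which parts of the old state enter
        # the candidate computation.
        r = sigmoid(p['br'] + p['Ur'] @ x_t + p['Wr'] @ h_t)
        # Candidate state and convex combination (Eq. 10.12).
        candidate = sigmoid(p['b'] + p['U'] @ x_t + p['W'] @ (r * h_t))
        return u * h_t + (1.0 - u) * candidate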
Many more variants around this theme can be designed. For example the reset gate (or forget gate) output could be shared across a number of hidden units. Or the product of a global gate (covering a whole group of units, e.g., a layer) and a local gate (per unit) could be used to combine global control and local control. In addition, as discussed in the next section, different ways of making such RNNs “deeper” are possible.
10.7.5 Better Optimization
A central optimization difficulty with RNNs regards the learning of long-term dependencies (Hochreiter, 1991; Bengio et al., 1993, 1994). This difficulty has been explained in detail in Section 8.2.5. The gist of the problem is that the composition of the non-linear recurrence with itself over many time steps yields a highly non-linear function whose derivatives (e.g. of the state at T w.r.t. the state at t < T, i.e. the Jacobian matrix \partial s_T / \partial s_t) tend to either vanish or explode as T - t increases, because this Jacobian is equal to the product of the state transition Jacobian matrices \partial s_{t+1} / \partial s_t. If it explodes, the parameter gradient \nabla_\theta L also explodes, yielding gradient-based parameter updates that are poor. A simple solution to this problem is discussed in the next section (Sec. 10.7.6). However, as discussed in Bengio et al. (1994), if the state transition Jacobian matrix has eigenvalues that are larger than 1 in magnitude, then it can yield "unstable" dynamics, in the sense that a bit of information cannot be stored reliably for a long time in the presence of input "noise". Indeed, the state transition Jacobian matrix eigenvalues indicate how a small change in some direction (the corresponding eigenvector) will be expanded (if the eigenvalue is greater than 1) or contracted (if it is less than 1). If the eigenvalues of the state transition Jacobian are less than 1, then derivatives associated with long-term effects tend to vanish as T - t increases, making them exponentially smaller in magnitude (as components of the total gradient) than derivatives associated with short-term effects. This therefore makes it difficult (but not impossible) to learn long-term dependencies. An interesting idea proposed in Martens and Sutskever (2011) is that at the same time as first derivatives are becoming smaller in directions associated with long-term effects, so may the higher derivatives. In particular, if we use a second-order optimization method (such as the Hessian-free method of Martens and Sutskever (2011)), then we could treat different directions differently: divide the small first derivative (gradient) by a small second derivative, while not scaling up in the directions where the second derivative is large (and hopefully, the first
derivative as well). Whereas in the scalar case, if we add a large number and a small number, the small number is "lost", in the vector case, if we add a large vector to a small vector, it is still possible to recover the information about the direction of the small vector if we have access to information (such as in the second derivative matrix) that tells us how to appropriately rescale each direction. One disadvantage of many second-order methods, including the Hessian-free method, is that they tend to be geared towards "batch" training rather than "stochastic" updates (where only one or a small minibatch of examples are examined before a parameter update is made). Although the experiments on recurrent networks applied to problems with long-term dependencies showed very encouraging results in Martens and Sutskever (2011), it was later shown that similar results could be obtained by much simpler methods (Sutskever, 2012; Sutskever et al., 2013) involving better initialization, a cheap surrogate to second-order optimization (a variant on the momentum technique, Section 8.4), and the clipping trick described below.
10.7.6 Clipping Gradients
As discussed in Section 8.2.4, strongly non-linear functions such as those computed by a recurrent net over many time steps tend to have derivatives that can be either very large or very small in magnitude. This is illustrated in Figures 8.2 and 8.3, in which we see that the objective function (as a function of the parameters) has a "landscape" in which one finds "cliffs": wide and rather flat regions separated by tiny regions where the objective function changes quickly, forming a kind of cliff. The difficulty that arises is that when the parameter gradient is very large, a gradient descent parameter update could throw the parameters very far, into a region where the objective function is larger, wasting a lot of the work that had been done to reach the current solution. This is because gradient descent hinges on the assumption of small enough steps, and this assumption can easily be violated when the same learning rate is used for both the flatter parts and the steeper parts of the landscape.
Figure 10.18: Example of the effect of gradient clipping in a recurrent network with two parameters w and b. The vertical axis is the objective function to minimize. Note the cliff where the gradient explodes and from where gradient descent can get pushed very far. Clipping the gradient when its norm is above a threshold (Pascanu et al., 2013a) prevents this catastrophic outcome and helps training recurrent nets, allowing long-term dependencies to be captured.
A simple type of solution has been used by practitioners for many years: clipping the gradient. There are different instances of this idea (Mikolov, 2012; Pascanu et al., 2013a). One option is to clip the gradient element-wise (Mikolov, 2012). Another is to clip the norm of the gradient (Pascanu et al., 2013a). The latter has the advantage that it guarantees that each step still points in the gradient direction, but experiments suggest that both forms work similarly. Even simply taking a random step when the gradient magnitude is above a threshold tends to work almost as well.
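A small sketch of the two clipping variants mentioned above follows; the threshold value and the use of NumPy arrays are assumptions made only for illustration.

    import numpy as np

    def clip_elementwise(grad, threshold):
        # Clip each coordinate of the gradient independently (Mikolov, 2012).
        return np.clip(grad, -threshold, threshold)

    def clip_by_norm(grad, threshold):
        # Rescale the whole gradient if its norm exceeds the threshold
        # (Pascanu et al., 2013a); the direction of the step is preserved.
        norm = np.linalg.norm(grad)
        if norm > threshold:
            grad = grad * (threshold / norm)
        return grad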
10.7.7 Regularizing to Encourage Information Flow
Whereas clipping helps deal with exploding gradients, it does not help with vanishing gradients. To address vanishing gradients and better capture long-term dependencies, we discussed the idea of creating paths in the computational graph of the unfolded recurrent architecture along which the product of gradients associated with arcs is near 1. One approach to achieve this is with LSTM and other self-loops and gating mechanisms, described above in Section 10.7.4. Another idea is to regularize or constrain the parameters so as to encourage "information flow". In particular, we would like the gradient vector \nabla_{s_t} L being back-propagated to maintain its magnitude (even if there is only a loss at the end of the sequence),
i.e., we want

    \nabla_{s_t} L \, \frac{\partial s_t}{\partial s_{t-1}}

to be as large as \nabla_{s_t} L. With this objective, Pascanu et al. (2013a) propose the following regularizer:

    \Omega = \sum_t \left( \frac{ \left\| \nabla_{s_t} L \, \frac{\partial s_t}{\partial s_{t-1}} \right\| }{ \left\| \nabla_{s_t} L \right\| } - 1 \right)^2 .        (10.15)
It may look like computing the gradient of this regularizer is difficult, but Pascanu et al. (2013a) propose an approximation in which we consider the back-propagated vectors \nabla_{s_t} L as if they were constants (for the purpose of this regularizer, i.e., there is no need to back-prop through them). The experiments with this regularizer suggest that, if combined with the norm clipping heuristic (which handles gradient explosion), it can considerably increase the span of the dependencies that an RNN can learn. Because the regularizer keeps the RNN dynamics on the edge of explosive gradients, the gradient clipping is particularly important: otherwise gradient explosion prevents learning from succeeding.
10.7.8 Organizing the State at Multiple Time Scales
Another promising approach to handle long-term dependencies is the old idea of organizing the state of the RNN at multiple time-scales (El Hihi and Bengio, 1996), with information flowing more easily through long distances at the slower time scales. This is illustrated in Figure 10.19.
Figure 10.19: Example of a multi-scale recurrent net architecture (unfolded in time), with higher levels operating at a slower time scale. Information can flow unhampered (either forward or backward in time) over longer durations at the higher levels, thus creating long-paths (such as the red dotted path) through which long-term dependencies between elements of the input/output sequence can be captured.
There are different ways in which a group of recurrent units can be forced to operate at different time scales. One option is to make the recurrent units leaky
(as in Eq. 10.7), but to have different groups of units associated with different fixed time scales. This was the proposal in Mozer (1992) and has been successfully used in Pascanu et al. (2013a). Another option is to have explicit and discrete updates taking place at different times, with a different frequency for different groups of units, as in Figure 10.19. This is the approach of El Hihi and Bengio (1996); Koutnik et al. (2014) and it also worked well on a number of benchmark datasets.
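As an illustration of the first option, here is a small sketch of leaky units whose time constants differ per group; the particular parameterization s_t = (1 - 1/\tau) s_{t-1} + (1/\tau) v_t and the chosen \tau values are assumptions made for the example, not the exact form used in the references above.

    import numpy as np

    def leaky_update(s_prev, v_t, tau):
        """Leaky integration with per-unit time constants tau (>= 1).

        s_prev: previous state, shape (n_units,)
        v_t:    new candidate value for each unit at time t, shape (n_units,)
        tau:    time constants; large tau gives slow units that retain
                information over longer durations.
        """
        alpha = 1.0 / tau                      # per-unit update rate
        return (1.0 - alpha) * s_prev + alpha * v_t

    # Three groups of units operating at fast, medium, and slow time scales.
    tau = np.concatenate([np.full(4, 1.0), np.full(4, 10.0), np.full(4, 100.0)])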
10.8 Handling Temporal Dependencies with N-Grams, HMMs, CRFs and Other Graphical Models
This section regards probabilistic approaches to sequential data modeling, which have often been viewed as being in competition with RNNs, although RNNs can be seen as a particular form of dynamical Bayes nets (as directed graphical models with deterministic latent variables).
10.8.1 N-grams
N-grams are non-parametric estimators of conditional probabilities based on counting relative frequencies of occurrence, and they have been the core building block of statistical language modeling for many decades (Jelinek and Mercer, 1980; Katz, 1987; Chen and Goodman, 1999). Like RNNs, they are based on the product rule (or chain rule) decomposition of the joint probability into conditionals, Eq. 10.6, which relies on estimates of P(x_t | x_{t-1}, ..., x_1) to compute P(x_1, ..., x_T). What is particular to n-grams is that

1. they estimate these conditional probabilities based only on the last n - 1 values (to predict the next one),

2. they assume that the data is symbolic, i.e., x_t is a symbol taken from a finite alphabet V (for vocabulary), and

3. the conditional probability estimates are obtained from frequency counts of all the observed length-n subsequences, hence the names unigram (for n=1), bigram (for n=2), trigram (for n=3), and n-gram in general.

The maximum likelihood estimator for these conditional probabilities is simply the relative frequency of occurrence of the left hand symbol in the context of the right hand symbols, compared to all the other possible symbols in V:

    P(x_t | x_{t-1}, ..., x_{t-n+1}) = \frac{ \#\{x_t, x_{t-1}, ..., x_{t-n+1}\} }{ \sum_{x'_t \in V} \#\{x'_t, x_{t-1}, ..., x_{t-n+1}\} }        (10.16)
where #{a, b, c} denotes the cardinality of the set of tuples (a, b, c) in the training set, and where the denominator is also a count (if border effects are handled properly). A fundamental limitation of the above estimator is that it is very likely to be zero in many cases, even though the tuple (x_t, x_{t-1}, ..., x_{t-n+1}) may show up in the test set. In that case, the test log-likelihood would be infinitely bad (-\infty). To avoid that catastrophic outcome, n-grams employ some form of smoothing, i.e., techniques to shift probability mass from the observed tuples to unobserved ones that are similar (a central idea behind most non-parametric statistical methods). See Chen and Goodman (1999) for a review and empirical comparisons. One basic technique consists in assigning a non-zero probability mass to any of the possible next symbol values. Another very popular idea consists in backing off, or mixing (as in a mixture model), the higher-order n-gram predictor with all the lower-order ones (with smaller n). Back-off methods look up the lower-order n-grams if the frequency of the context x_{t-1}, ..., x_{t-n+1} is too small, i.e., considering the contexts x_{t-1}, ..., x_{t-n+k}, for increasing k, until a sufficiently reliable estimate is found. Another interesting idea that is related to neural language models (Section 12.4) is to break up the symbols into classes (by some form of clustering) and back off to, or mix with, less precise models that only consider the classes of the words in the context (i.e. aggregating statistics from a larger portion of the training set). One can view the word classes as a very impoverished learned representation of words which helps to generalize (across words of the same class). What distributed representations (e.g. neural word embeddings) bring is a richer notion of similarity by which individual words keep their own identity (instead of being indistinguishable from the other words in the same class) and yet share learned attributes with other words with which they have some elements in common (but not all). This kind of richer notion of similarity makes generalization more specific and the representation not necessarily lossy, unlike with word classes.
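A small sketch of the maximum likelihood estimator of Eq. 10.16 for bigrams follows, together with the simplest smoothing technique mentioned above (assigning non-zero mass to every possible next symbol); the tiny corpus, the add-alpha variant and the function names are illustrative assumptions.

    from collections import Counter

    def bigram_counts(tokens):
        # Frequency counts of length-2 subsequences and of single-symbol contexts.
        pair_counts = Counter(zip(tokens[:-1], tokens[1:]))
        context_counts = Counter(tokens[:-1])
        return pair_counts, context_counts

    def bigram_prob(x_t, context, pair_counts, context_counts, vocab, alpha=1.0):
        # Eq. 10.16 with add-alpha smoothing; alpha=0 recovers the pure MLE,
        # which can be zero and make the test log-likelihood -infinity.
        num = pair_counts[(context, x_t)] + alpha
        den = context_counts[context] + alpha * len(vocab)
        return num / den

    tokens = "the cat sat on the mat".split()
    pc, cc = bigram_counts(tokens)
    vocab = set(tokens)
    print(bigram_prob("cat", "the", pc, cc, vocab))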
10.8.2 Efficient Marginalization and Inference for Temporally Structured Outputs by Dynamic Programming
Many temporal modeling approaches can be cast in the following framework, which also includes hybrids of neural networks with HMMs and conditional random fields (CRFs), first introduced in Bottou et al. (1997); LeCun et al. (1998b) and later developed and applied with great success in Graves et al. (2006); Graves (2012) with the Connectionist Temporal Classification (CTC) approach, as well as in Do and Artières (2010) and other more recent work (Farabet et al., 2013b; Deng et al., 2014). These ideas have been rediscovered in a simplified form (limiting the input-output relationship to a linear one) as CRFs (Lafferty et al., 2001),
i.e., undirected graphical models whose parameters are linear functions of input variables. In section 10.9 we consider in more detail the neural network hybrids and the "graph transformer" generalizations of the ideas presented below. All these approaches (with or without neural nets in the middle) concern the case where we have an input sequence (discrete or continuous-valued) {x_t} and a symbolic output sequence {y_t} (typically of the same length, although shorter output sequences can be handled by introducing "empty string" values in the output). Generalizations to non-sequential output structure have been introduced more recently (e.g. to condition the Markov Random Fields sometimes used to model structural dependencies in images (Stewart et al., 2007)), at the loss of exact inference (the dynamic programming methods described below). Optionally, one also considers a latent variable sequence {s_t} that is also discrete, and inference needs to be done over {s_t}, either via marginalization (summing over all possible values of the state sequence) or maximization (picking exactly or approximately the MAP sequence, with the largest probability). If the state variables s_t and the target variables y_t have a 1-D Markov structure to their dependency, then computing likelihood, partition function and MAP values can all be done efficiently by exploiting dynamic programming to factorize the computation. On the other hand, if the state or output sequence dependencies are captured by an RNN, then there is no finite-order Markov property and no efficient and exact inference is generally possible. However, many reasonable approximations have been used in the past, such as with variants of the beam search algorithm (Lowerre, 1976). The idea of beam search is that one maintains a set of promising candidate paths that end at some time step t. For each additional time step, one considers extensions to t + 1 of each of these paths and then prunes those with the worst overall cumulative score (up to t + 1). The beam size is the number of candidates that are kept. See Section 10.9.1 for more details on beam search. The application of the principle of dynamic programming in these setups is the same as what is used in the Forward-Backward algorithm (detailed more around Eq. 10.20), for graphical models and HMMs (detailed more in Section 10.8.3) and the Viterbi algorithm detailed below (Eq. 10.22). For both of these algorithms, we are trying to sum (Forward-Backward algorithm) or maximize (Viterbi algorithm) over paths the probability or score associated with each path.
Figure 10.20: Example of a temporally structured output graph, as can be found in CRFs, HMMs, and neural net hybrids. Each node corresponds to a particular value of an output random variable at a particular point in the output sequence (contrast with a graphical model representation, where each node corresponds to a random variable). A path from the source node to the sink node (e.g. red bold arrows) corresponds to an interpretation of the input as a sequence of output labels. The dynamic programming recursions that are used for computing likelihood (or conditional likelihood) or performing MAP inference (finding the best path) involve sums or maximizations over sub-paths ending at one of the particular interior nodes.
Let G be a directed acyclic graph whose paths correspond to the sequences that can be selected (for MAP) or summed over (marginalized for computing a likelihood), as illustrated in Figure 10.20. In the above example, let z_t represent the choice variable (e.g., s_t and y_t in the above example), and each arc with score a corresponds to a particular value of z_t in its Markov context. In the language of undirected graphical models, if a is the score associated with an arc from the node for z_{t-1} = j to the one for z_t = i, then a is minus the energy of a term of the energy function associated with the event 1_{z_{t-1}=j, z_t=i} and the associated information from the input x (e.g. some value of x_t). Hidden Markov models are based on the notion of Markov chain, which is covered in much more detail in Section 14.1. A Markov chain is a sequence of random variables z_1, ..., z_T, and for our purposes the main property of a Markov chain of order 1 is that the current value of z_t contains enough information about the previous values z_1, ..., z_{t-1} in order to predict the distribution of the next random variable, z_{t+1}. In our context, we can make the z's conditioned on some x; the order 1 Markov property then means that P(z_t | z_{t-1}, z_{t-2}, ..., z_1, x) = P(z_t | z_{t-1}, x), where x is the conditioning information (the input sequence). When
we consider a path in that space, i.e. a sequence of values, we draw a graph with a node for each discrete value of z_t, and if it is possible to transition from z_{t-1} = j to z_t = i we draw an arc between these two nodes. Hence, the total number of nodes in the graph would be equal to the length of the sequence, T, times the number of values of z_t, n, and the number of arcs of the graph would be up to T n^2 (if every value of z_t can follow every value of z_{t-1}, although in practice the connectivity is often much smaller because not all transitions are typically feasible). A score a is computed for each arc (which may include some component that only depends on the source or only on the destination node), as a function of the conditioning information x. The inference or marginalization problems involve performing the following computations. For the marginalization task, we want to compute the sum over all complete paths (e.g. from source to sink) of the product along the path of the exponentiated scores associated with the arcs on that path:

    m(G) = \sum_{path \in G} \prod_{a \in path} e^a        (10.17)

where the product is over all the arcs on a path (with score a), and the sum is over all the paths associated with complete sequences (from beginning to end of a sequence). m(G) may correspond to a likelihood, numerator or denominator of a probability. For example,

    P(\{z_t\} \in Y | x) = \frac{m(G_Y)}{m(G)}        (10.18)

where G_Y is the subgraph of G which is restricted to sequences that are compatible with some target answer Y. For the inference task, we want to compute

    \pi(G) = \arg\max_{path \in G} \prod_{a \in path} e^a = \arg\max_{path \in G} \sum_{a \in path} a

    v(G) = \max_{path \in G} \sum_{a \in path} a

where \pi(G) is the most probable path and v(G) is its log-score or value, and again the set of paths considered includes all of those starting at the beginning and ending at the end of the sequence. The principle of dynamic programming is to recursively compute intermediate quantities that can be reused efficiently so as to avoid actually going through an exponential number of computations, e.g., through the exponential number of paths to consider in the above sums or maxima. Note how it is already at play
in the underlying efficiency of back-propagation (or back-propagation through time), where gradients w.r.t. intermediate layers or time steps or nodes in a flow graph can be computed based on previously computed gradients (for later layers, time steps or nodes). Here it can be achieved by considering restrictions of the graph to those paths that end at a node n, which we denote G_n. G^Y_n indicates the additional restriction to subsequences that are compatible with the target sequence Y, i.e., with the beginning of the sequence Y.
Figure 10.21: Illustration of the recursive computation taking place for inference or marginalization by dynamic programming. See Figure 10.20. These recursions involve sums or maximizations over sub-paths ending at one of the particular interior nodes (red in the figure), each time only requiring to look up previously computed values at the predecessor nodes (green).
We can thus perform marginalization efficiently as follows, as illustrated in Figure 10.21. This is a generalization of the so-called Forward-Backward algorithm for HMMs:

    m(G) = \sum_{n \in final(G)} m(G_n)        (10.19)

where final(G) is the set of final nodes in the graph G, and we can recursively compute the node-restricted sum via

    m(G_n) = \sum_{n' \in pred(n)} m(G_{n'}) \, e^{a_{n',n}}        (10.20)
where pred(n) is the set of predecessors of node n in the graph and a_{n',n} is the log-score associated with the arc from n' to n. It is easy to see that expanding
the above recursion recovers the result of Eq. 10.17. Similarly, we can perform efficient MAP inference (also known as Viterbi decoding) as follows:

    v(G) = \max_{n \in final(G)} v(G_n)        (10.21)

and

    v(G_n) = \max_{m \in pred(n)} v(G_m) + a_{m,n}.        (10.22)
To obtain the corresponding path, it is enough to keep track of the argmax associated with each of the above maximizations and trace back \pi(G) starting from the nodes in final(G). For example, the last element of \pi(G) is

    n^* \leftarrow \arg\max_{n \in final(G)} v(G_n)

and (recursively) the argmax node before n^* along the selected path is a new n^*,

    n^* \leftarrow \arg\max_{m \in pred(n^*)} v(G_m) + a_{m,n^*},

etc. Keeping track of these n^* along the way gives the selected path. Proving that these recursive computations yield the desired results is straightforward and left as an exercise.
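A small sketch of the two dynamic programming recursions (Eqs. 10.19-10.22) over an explicit directed acyclic graph follows; the graph representation (a dict mapping each node to its predecessors with arc log-scores) and the assumption that nodes are listed in topological order are choices made for the example, not part of the algorithms themselves.

    import math

    def marginalize(nodes, preds, final_nodes):
        # Eq. 10.20: m(G_n) is the sum over predecessors of m(G_n') * exp(a);
        # source nodes (no predecessors) get m = 1. Eq. 10.19 sums the finals.
        m = {}
        for n in nodes:                       # nodes in topological order
            if not preds[n]:
                m[n] = 1.0
            else:
                m[n] = sum(m[p] * math.exp(a) for p, a in preds[n])
        return sum(m[n] for n in final_nodes)

    def viterbi(nodes, preds, final_nodes):
        # Eqs. 10.21-10.22 with backpointers to trace the best path.
        v, back = {}, {}
        for n in nodes:
            if not preds[n]:
                v[n], back[n] = 0.0, None
            else:
                back[n], v[n] = max(((p, v[p] + a) for p, a in preds[n]),
                                    key=lambda pair: pair[1])
        best = max(final_nodes, key=lambda n: v[n])
        path = [best]
        while back[path[-1]] is not None:
            path.append(back[path[-1]])
        return v[best], path[::-1]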
10.8.3 HMMs
Hidden Markov Models (HMMs) are probabilistic models of sequences that were introduced in the 60's (Baum and Petrie, 1966) along with the E-M algorithm (Section 19.2). They are very commonly used to model sequential structure, in particular having been, since the mid 80's and until recently, the technological core of speech recognition systems (Rabiner and Juang, 1986; Rabiner, 1989). Just like RNNs, HMMs are dynamic Bayes nets (Koller and Friedman, 2009), i.e., the same parameters and graphical model structure are used for every time step. Compared to RNNs, what is particular to HMMs is that the latent variable associated with each time step (called the state) is discrete, with a separate set of parameters associated with each state value. We consider here the most common form of HMM, in which the Markov chain is of order 1, i.e., the state s_t at time t, given the previous states, only depends on the previous state s_{t-1}:

    P(s_t | s_{t-1}, s_{t-2}, ..., s_1) = P(s_t | s_{t-1}),

which we call the transition or state-to-state distribution. Generalizing to higher-order Markov chains is straightforward: for example, order-2 Markov chains can
be mapped to order-1 Markov chains by considering as order-1 "states" all the pairs (s_t = i, s_{t-1} = j). Given the state value, a generative probabilistic model of the visible variable x_t is defined, that specifies how each observation x_t in a sequence (x_1, x_2, ..., x_T) can be generated, via a model P(x_t | s_t). Two kinds of parameters are distinguished: those that define the transition distribution, which can be given by a matrix A_{ij} = P(s_t = i | s_{t-1} = j), and those that define the output model P(x_t | s_t). For example, if the data are discrete and x_t is a symbol, then another matrix can be used to define the output (or emission) model: B_{ki} = P(x_t = k | s_t = i). Another common parametrization for P(x_t | s_t = i), in the case of continuous vector-valued x_t, is the Gaussian mixture model, where we have a different mixture (with its own means, covariances and component probabilities) for each state s_t = i. Alternatively, the means and covariances (or just variances) can be shared across states, and only the component probabilities are state-specific. The overall likelihood of an observed sequence is thus

    P(x_1, x_2, ..., x_T) = \sum_{s_1, s_2, ..., s_T} \prod_t P(x_t | s_t) P(s_t | s_{t-1}).        (10.23)
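To illustrate how Eq. 10.23 can be computed without summing over the exponential number of state sequences, here is a minimal sketch of the forward recursion for a discrete-observation HMM; the explicit initial-state distribution pi and the array-based parameterization are assumptions made for the example.

    import numpy as np

    def hmm_likelihood(A, B, pi, observations):
        """Forward algorithm for Eq. 10.23.

        A:  transition matrix, A[i, j] = P(s_t = i | s_{t-1} = j)
        B:  emission matrix,  B[k, i] = P(x_t = k | s_t = i)
        pi: initial state distribution, pi[i] = P(s_1 = i)
        observations: sequence of symbol indices x_1, ..., x_T
        """
        alpha = pi * B[observations[0]]       # alpha[i] = P(x_1, s_1 = i)
        for x_t in observations[1:]:
            # Sum over the previous state, then weight by the emission probability.
            alpha = B[x_t] * (A @ alpha)
        return alpha.sum()                    # P(x_1, ..., x_T)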
In the language established earlier in Section 10.8.2, we have a graph G with one node n per time step t and state value i, i.e., for s_t = i, and one arc between each node n (for 1_{s_t=i}) and its predecessors m (for 1_{s_{t-1}=j}) when the transition probability is non-zero, i.e., P(s_t = i | s_{t-1} = j) \neq 0. Following Eq. 10.23, the log-score a_{m,n} for the transition between m and n would then be

    a_{m,n} = \log P(x_t | s_t = i) + \log P(s_t = i | s_{t-1} = j).

As explained in Section 10.8.2, this view gives us a dynamic programming algorithm for computing the likelihood (or the conditional likelihood given some constraints on the set of allowed paths), called the forward-backward or sum-product algorithm, in time O(kNT) where T is the sequence length, N is the number of states and k the average in-degree of each node. Although the likelihood is tractable and could be maximized by a gradient-based optimization method, HMMs are typically trained by the E-M algorithm (Section 19.2), which has been shown to converge rapidly (approaching the rate of Newton-like methods) in some conditions (if we view the HMM as a big mixture,
then the condition is for the final mixture components to be well-separated, i.e., have little overlap) (Xu and Jordan, 1996). At test time, the sequence of states that maximizes the joint likelihood P(x_1, x_2, ..., x_T, s_1, s_2, ..., s_T) can also be obtained using a dynamic programming algorithm (called the Viterbi algorithm). This is a form of inference (see Section 13.5) that is called MAP (Maximum A Posteriori) inference because we want to find the most probable value of the unobserved state variables given the observed inputs. Using the same definitions as above (from Section 10.8.2) for the nodes and log-scores of the graph G in which we search for the optimal path, the Viterbi algorithm corresponds to the recursion defined by Eq. 10.22. If the HMM is structured in such a way that states have a meaning associated with labels of interest, then from the MAP sequence one can read off the associated labels. When the number of states is very large (which happens for example with large-vocabulary speech recognition based on n-gram language models), even the efficient Viterbi algorithm becomes too expensive, and approximate search must be performed. A common family of search algorithms for HMMs is the beam search algorithm (Lowerre, 1976) (Section 10.9.1). More details about speech recognition are given in Section 12.3. An HMM can be used to associate a sequence of labels (y_1, y_2, ..., y_N) with the input (x_1, x_2, ..., x_T), where the output sequence is typically shorter than the input sequence, i.e., N < T. Knowledge of (y_1, y_2, ..., y_N) constrains the set of compatible state sequences (s_1, s_2, ..., s_T), and the generative conditional likelihood

    P(x_1, x_2, ..., x_T | y_1, y_2, ..., y_N) = \sum_{s_1, s_2, ..., s_T \in S(y_1, y_2, ..., y_N)} \prod_t P(x_t | s_t) P(s_t | s_{t-1})        (10.24)

can be computed using the same forward-backward technique, and its logarithm maximized during training, as discussed above. Various discriminative alternatives to the generative likelihood of Eq. 10.24 have been proposed (Brown, 1987; Bahl et al., 1987; Nadas et al., 1988; Juang and Katagiri, 1992; Bengio et al., 1992a; Bengio, 1993; Leprieur and Haffner, 1995; Bengio, 1999a), the simplest of which is simply P(y_1, y_2, ..., y_N | x_1, x_2, ..., x_T), which is obtained from Eq. 10.24 by Bayes' rule, i.e., involving a normalization over all sequences, i.e., the unconstrained likelihood of Eq. 10.23:

    P(y_1, ..., y_N | x_1, ..., x_T) = \frac{ P(x_1, ..., x_T | y_1, ..., y_N) \, P(y_1, ..., y_N) }{ P(x_1, ..., x_T) }.
Both the numerator and denominator can be formulated in the framework of the previous section (Eqs. 10.18-10.20), where for the numerator we merge (add) the
log-scores coming from the structured output model P(y_1, y_2, ..., y_N) and from the input likelihood model P(x_1, x_2, ..., x_T | y_1, y_2, ..., y_N). Again, each node of the graph corresponds to a state of the HMM at a particular time step t (which may or may not emit the next output symbol y_i), associated with an input vector x_t. Instead of making the relationship to the input the result of a simple parametric form (Gaussian or multinomial, typically), the scores can be computed by a neural network (or any other parametrized differentiable function). This gives rise to discriminative hybrids of search or graphical models with neural networks, discussed below in Section 10.9.
10.8.4 CRFs
Whereas HMMs are typically trained to maximize the probability of an input sequence x given a target sequence y and correspond to a directed graphical model, Conditional Random Fields (CRFs) (Lafferty et al., 2001) are undirected graphical models that are trained to maximize the joint probability of the target variables, given input variables, P(y | x). CRFs are special cases of the graph transformer model introduced in Bottou et al. (1997); LeCun et al. (1998b), where neural nets are replaced by affine transformations and there is a single graph involved. Many applications of CRFs involve sequences and the discussion here will be focused on this type of application, although applications to images (e.g. for image segmentation) are also common. Compared to other graphical models, another characteristic of CRFs is that there are no latent variables. The general equation for the probability distribution modeled by a CRF is basically the same as for fully visible (not latent variable) undirected graphical models, also known as Markov Random Fields (MRFs, see Section 13.2.2), except that the "potentials" (terms of the energy function) are parametrized functions of the input variables, and the likelihood of interest is the posterior probability P(y | x). As in many other MRFs, CRFs often have a particular connectivity structure in their graph, which allows one to perform learning or inference more efficiently. In particular, when dealing with sequences, the energy function typically only has terms that relate neighboring elements of the sequence of target variables. For example, the target variables could form a homogeneous Markov chain of order k (given the input variables), where homogeneous means that the same parameters are used for every time step. A typical linear CRF example with binary outputs
would have the following structure:

    P(y = y | x) = \frac{1}{Z} \exp\left( \sum_t \left[ y_t \left( b + \sum_j w_j x_{t,j} \right) + \sum_{i=1}^{k} y_t y_{t-i} \left( u_i + \sum_j v_{ij} x_{t,j} \right) \right] \right)        (10.25)

where Z is the normalization constant, which is the sum over all y sequences of the numerator. In that case, the score marginalization framework of Section 10.8.2 and coming from Bottou et al. (1997); LeCun et al. (1998b) can be applied by making terms in the above exponential correspond to scores associated with nodes t of a graph G. If there were more than two output classes, more nodes per time step would be required but the principle would remain the same. A more general formulation for Markov chains of order d is the following:

    P(y = y | x) = \frac{1}{Z} \exp\left( \sum_t \sum_{d'=0}^{d} f_{d'}(y_t, y_{t-1}, ..., y_{t-d'}, x_t) \right)        (10.26)
where f_{d'} computes a potential of the energy function, a parametrized function of both the past target values (up to y_{t-d'}) and of the current input value x_t. For example, as discussed below, f_{d'} could be the output of an arbitrary parametrized computation, such as a neural network. Although Z looks intractable, because of the Markov property of the model (order 1, in the example), it is again possible to exploit dynamic programming to compute Z efficiently, as per Eqs. 10.18-10.20. Again, the idea is to compute the sub-sum for sequences of length t ≤ T (where T is the length of a target sequence y), ending in each of the possible state values at t, e.g., y_t = 1 and y_t = 0 in the above example. For higher order Markov chains (say order d instead of 1) and a larger number of state values (say N instead of 2), the required sub-sums to keep track of are for each element in the cross-product of d - 1 state values, i.e., N^{d-1} of them. For each of these elements, the new sub-sums for sequences of length t + 1 (for each of the N values at t + 1 and the corresponding N^{max(0,d-2)} past values for the past d - 2 time steps) can be obtained by only considering the sub-sums for the N^{d-1} joint state values for the last d - 1 time steps before t + 1. Following Eq. 10.22, the same kind of decomposition can be performed to efficiently find the MAP configuration of y's given x, where instead of products (sums inside the exponential) and sums (for the outer sum of these exponentials, over different paths) we respectively have sums (corresponding to adding the sums inside the exponential) and maxima (across the different competing "previous-state" choices).
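The following is a small sketch of the dynamic program that computes Z for the order-1, binary-output chain CRF of Eq. 10.25; the per-time-step score function and its signature are illustrative assumptions (any parameterization of the terms inside the exponential, including a neural network, would do).

    import numpy as np

    def log_partition(T, log_score):
        """Compute log Z for an order-1 chain with binary outputs y_t in {0, 1}.

        log_score(t, y_prev, y_t) returns the term inside the exponential
        contributed at position t (it may depend on x_t through a closure);
        y_prev is None at t = 0.
        """
        # log_alpha[y] = log of the sum over all prefixes ending with y_t = y.
        log_alpha = np.array([log_score(0, None, y) for y in (0, 1)])
        for t in range(1, T):
            new = np.empty(2)
            for y in (0, 1):
                terms = [log_alpha[yp] + log_score(t, yp, y) for yp in (0, 1)]
                new[y] = np.logaddexp(terms[0], terms[1])
            log_alpha = new
        return np.logaddexp(log_alpha[0], log_alpha[1])   # log Z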
Figure 10.22: Illustration of the stacking of graph transformers (right, c) as a generalization of the stacking of convolutional layers (middle, b) or of regular feedforward layers that transform fixed-size vectors (left, a). Figure reproduced with permission from the authors of Bottou et al. (1997). Quoting from that paper, (c) shows that "multilayer graph transformer networks are composed of trainable modules that operate on and produce graphs whose arcs carry numerical information".
10.9 Combining Neural Networks and Search
The idea of combining neural networks with HMMs or related search or alignment-based components (such as graph transformers) for speech and handwriting recognition dates from the early days of research on multi-layer neural networks (Bourlard and Wellekens, 1990; Bottou et al., 1990; Bengio, 1991; Bottou, 1991; Haffner et al., 1991; Bengio et al., 1992a; Matan et al., 1992; Bourlard and Morgan, 1993; Bengio et al., 1995; Bengio and Frasconi, 1996; Baldi and Brunak, 1998); see more references in Bengio (1999b). See also Section 12.5 for combining recurrent and other deep learners with generative models such as CRFs, GSNs or RBMs. The principle of efficient marginalization and inference for temporally structured outputs by exploiting dynamic programming (Sec. 10.8.2) can be applied not just when the log-scores of Eqs. 10.17 and 10.19 are parameters or linear functions of the input, but also when they are learned non-linear functions of the input, e.g., via a neural network transformation, as was first done in Bottou et al. (1997); LeCun et al. (1998b). These papers additionally introduced the powerful idea of learned graph transformers, illustrated in Figure 10.22. In this context, a graph transformer is a machine that can map a directed acyclic graph G_in to another graph G_out. Both input and output graphs have paths that represent
Figure 10.23: Illustration of the input and output of a simple graph transformer that maps a singleton graph corresponding to an input image to a graph representing segmentation hypotheses. Reproduced with permission from the authors of Bottou et al. (1997).
hypotheses about the observed data. For example, in the above papers, and as illustrated in Figure 10.23, a segmentation graph transformer takes a singleton input graph (the image x) and outputs a graph representing segmentation hypotheses (regarding sequences of segments that could each contain a character in the image). Such a graph transformer could be used as one layer of a graph transformer network for handwriting recognition or document analysis for reading amounts on checks, as illustrated respectively in Figures 10.24 and 10.25. For example, after the segmentation graph transformer, a recognition graph transformer could expand each node of the segmentation graph into a subgraph whose arcs correspond to different interpretations of the segment (which character is present in the segment?). Then, a dictionary graph transformer takes the recognition graph and expands it further by considering only the sequences of characters that are compatible with sequences of words in the language of interest. Finally, a language-model graph transformer expands sequences of word hypotheses so as to include multiple words in the state (context) and weigh the arcs according to the language model next-word log-probabilities. Each of these transformations is parametrized and takes real-valued scores on the arcs of the input graph into real-valued scores on the arcs of the output graph. These transformations can be parametrized and learned by gradient-based optimization over the whole series of graph transformers.
10.9.1 Approximate Search
Unfortunately, as in the above example, when the number of nodes of the graph becomes very large (e.g., considering all previous n words to condition the log-probability of the next one, for n large), even dynamic programming (whose computation scales with the number of arcs) is too slow for practical applications such as speech recognition or machine translation. A common example is when a
Figure 10.24: Illustration of the graph transformer network that has been used for finding the best segmentation of a handwritten word, for handwriting recognition. Reproduced with permission from Bottou et al. (1997).
Figure 10.25: Illustration of the graph transformer network that has been used for reading amounts on checks, starting from the single graph containing the image of the check to the recognized sequences of characters corresponding to the amount on the check, with currency and other recognized marks. Note how the grammar graph transformer composes the grammar graph (allowed sequences of characters) and the recognition graph (with character hypotheses associated with specific input segments, on the arcs) into an interpretation graph that only contains the recognition graph paths that are compatible with the grammar. Reproduced with permission from Bottou et al. (1997).
recurrent neural network is used to compute the arc log-scores, e.g., as in neural language models (Section 12.4). Since the prediction at step t depends on all t - 1 previous choices, the number of states (nodes of the search graph G) grows exponentially with the length of the sequence. In that case, one has to resort to approximate search.

Beam Search In the case of sequential structures as discussed in this chapter, a common family of approximate search algorithms is the beam search (Lowerre, 1976).
while keeping track of the argmax in order to trace back the estimated best path. Only the k nodes with the highest log-score are kept and stored in St, and k is called the beam width. • The estimated best final node can be read off from max n∈ST vˆ(G n ) and the estimated best path from the associated argmax choices made along the way, just like in the Viterbi algorithm. One problem with beam search is that the beam often ends up lacking in diversity, making the approximation poor. For example, imagine that we have two “types” of solutions, but that each type has exponentially many variants (as a function of t), due, e.g., to small independent variations in ways in which the type can be expressed at each time step t. Then, even though the two types may have close best log-score up to time t, the beam could be dominated by the one that wins slightly, eliminating the other type from the search, although later time steps might reveal that the second type was actually the best one. 332
Part III
Deep Learning Research
This part of the book describes the more ambitious and advanced approaches to deep learning, currently pursued by the research community. In the previous parts of the book, we have shown how to solve supervised learning problems—how to learn to map one vector to another, given enough examples of the mapping. Not all problems we might want to solve fall into this category. We may wish to generate new examples, or determine how likely some point is, or handle missing values and take advantage of a large set of unlabeled examples or examples from related tasks. Many deep learning algorithms have been designed to tackle such unsupervised learning problems, but none have truly solved the problem in the same way that deep learning has largely solved the supervised learning problem for a wide variety of tasks. In this part of the book, we describe the existing approaches to unsupervised learning and some of the popular thought about how we can make progress in this field. Another shortcoming of the current state of the art for industrial applications is that our learning algorithms require large amounts of supervised data to achieve good accuracy. In this part of the book, we discuss some of the speculative approaches to reducing the amount of labeled data necessary for existing models to work well. This section is the most important for a researcher—someone who wants to understand the breadth of perspectives that have been brought to the field of deep learning, and push the field forward towards true artificial intelligence.
Chapter 13
Structured Probabilistic Models for Deep Learning

Deep learning draws upon many modeling formalisms that researchers can use to guide their design efforts and describe their algorithms. One of these formalisms is the idea of structured probabilistic models. We have already discussed structured probabilistic models briefly in Section 3.14. That brief presentation was sufficient to understand how to use structured probabilistic models as a language to describe some of the algorithms in part II of this book. Now, in part III, structured probabilistic models are a key ingredient of many of the most important research topics in deep learning. In order to prepare to discuss these research ideas, this chapter describes structured probabilistic models in much greater detail. This chapter is intended to be self-contained; the reader does not need to review the earlier introduction before continuing with this chapter. A structured probabilistic model is a way of describing a probability distribution, using a graph to describe which random variables in the probability distribution interact with each other directly. Here we use "graph" in the graph theory sense: a set of vertices connected to one another by a set of edges. Because the structure of the model is defined by a graph, these models are often also referred to as graphical models. The graphical models research community is large and has developed many different models, training algorithms, and inference algorithms. In this chapter, we provide basic background on some of the most central ideas of graphical models, with an emphasis on the concepts that have proven most useful to the deep learning research community. If you already have a strong background in graphical models, you may wish to skip most of this chapter. However, even a graphical model expert may benefit from reading the final section of this chapter, section 13.6, in which we highlight some of the unique ways that graphical
models are used for deep learning algorithms. Deep learning practitioners tend to use very different model structures, learning algorithms, and inference procedures than are commonly used by the rest of the graphical models research community. In this chapter, we identify these differences in preferences and explain the reasons for them. In this chapter we first describe the challenges of building large-scale probabilistic models in section 13.1. Next, we describe how to use a graph to describe the structure of a probability distribution in section 13.2. We then revisit the challenges we described in section 13.1 and show how the structured approach to probabilistic modeling can overcome these challenges in section 13.3. One of the major difficulties in graphical modeling is understanding which variables need to be able to interact directly, i.e., which graph structures are most suitable for a given problem. We outline two approaches to resolving this difficulty by learning about the dependencies in section 13.4. Finally, we close with a discussion of the unique emphasis that deep learning practitioners place on specific approaches to graphical modeling in section 13.6.
13.1 The Challenge of Unstructured Modeling
The goal of deep learning is to scale machine learning to the kinds of challenges needed to solve artificial intelligence. This means being able to understand high-dimensional data with rich structure. For example, we would like AI algorithms to be able to understand natural images (images that might be captured by a camera in a reasonably ordinary environment, as opposed to synthetically rendered images, screenshots of web pages, etc.), audio waveforms representing speech, and documents containing multiple words and punctuation characters. Classification algorithms can take such a rich high-dimensional input and summarize it with a categorical label: what object is in a photo, what word is spoken in a recording, what topic a document is about. The process of classification discards most of the information in the input and produces a single output (or a probability distribution over values of that single output). The classifier is also often able to ignore many parts of the input. For example, when recognizing an object in a photo, it is usually possible to ignore the background of the photo. It is possible to ask probabilistic models to do many other tasks. These tasks are often more expensive than classification. Some of them require producing multiple output values. Most require a complete understanding of the entire structure of the input, with no option to ignore sections of it. These tasks include

• Density estimation: given an input x, the machine learning system returns an estimate of p(x). This requires only a single output, but it does require
a complete understanding of the entire input. If even one element of the vector is unusual, the system must assign it a low probability.

• Denoising: given a damaged or incorrectly observed input x̃, the machine learning system returns an estimate of the original or correct x. For example, the machine learning system might be asked to remove dust or scratches from an old photograph. This requires multiple outputs (every element of the estimated clean example x) and an understanding of the entire input (since even one damaged area will still reveal the final estimate as being damaged).

• Missing value imputation: given the observations of some elements of x, the model is asked to return estimates of, or a probability distribution over, some or all of the unobserved elements of x. This requires multiple outputs, and because the model could be asked to restore any of the elements of x, it must understand the entire input.

• Sampling: the model generates new samples from the distribution p(x). Applications include speech synthesis, i.e. producing new waveforms that sound like natural human speech. This requires multiple output values and a good model of the entire input. If the samples have even one element drawn from the wrong distribution, then the sampling process is wrong.

For an example of the sampling task on small natural images, see Fig. 13.1. Modeling a rich distribution over thousands or millions of random variables is a challenging task, both computationally and statistically. Suppose we only wanted to model binary variables. This is the simplest possible case, and yet already it seems overwhelming. For a small, 32 × 32 pixel color (RGB) image, there are 2^3072 possible binary images of this form. This number is over 10^800 times larger than the estimated number of atoms in the universe. In general, if we wish to model a distribution over a random vector x containing n discrete variables capable of taking on k values each, then the naive approach of representing P(x) by storing a lookup table with one probability value per possible outcome requires k^n parameters! This is not feasible for several reasons:

• Memory: the cost of storing the representation: For all but very small values of n and k, representing the distribution as a table will require too many values to store.

• Statistical efficiency: As the number of parameters in a model increases, so does the number of training examples needed to choose the values of those parameters using a statistical estimator. Because the table-based model has
Figure 13.1: Probabilistic modeling of natural images. Top: Example 32 × 32 pixel color images from the CIFAR-10 dataset (Krizhevsky and Hinton, 2009). Bottom: Samples drawn from a structured probabilistic model trained on this dataset. Each sample appears at the same position in the grid as the training example that is closest to it in Euclidean space. This comparison allows us to see that the model is truly synthesizing new images, rather than memorizing the training data. Contrast of both sets of images has been adjusted for display. Figure reproduced with permission from (Courville et al., 2011).
an astronomical number of parameters, it will require an astronomically large training set to fit accurately. Any such model will overfit the training set very badly.

• Runtime: the cost of inference: Suppose we want to perform an inference task where we use our model of the joint distribution P(x) to compute some other distribution, such as the marginal distribution P(x_1) or the conditional distribution P(x_2 | x_1). Computing these distributions will require summing across the entire table, so the runtime of these operations is as high as the intractable memory cost of storing the model.

• Runtime: the cost of sampling: Likewise, suppose we want to draw a sample from the model. The naive way to do this is to sample some value u ∼ U(0, 1), then iterate through the table adding up the probability values until they exceed u and return the outcome whose probability value was added last. This requires reading through the whole table in the worst case, so it has the same exponential cost as the other operations.

The problem with the table-based approach is that we are explicitly modeling every possible kind of interaction between every possible subset of variables. The probability distributions we encounter in real tasks are much simpler than this. Usually, most variables influence each other only indirectly. For example, consider modeling the finishing times of a team in a relay race. Suppose the team consists of three runners, Alice, Bob, and Carol. At the start of the race, Alice carries a baton and begins running around a track. After completing her lap around the track, she hands the baton to Bob. Bob then runs his own lap and hands the baton to Carol, who runs the final lap. We can model each of their finishing times as a continuous random variable. Alice's finishing time does not depend on anyone else's, since she goes first. Bob's finishing time depends on Alice's, because Bob does not have the opportunity to start his lap until Alice has completed hers. If Alice finishes faster, Bob will finish faster, all else being equal. Finally, Carol's finishing time depends on both her teammates. If Alice is slow, Bob will probably finish late too, and Carol will have quite a late starting time and thus is likely to have a late finishing time as well. However, Carol's finishing time depends only indirectly on Alice's finishing time via Bob's. If we already know Bob's finishing time, we won't be able to estimate Carol's finishing time better by finding out what Alice's finishing time was. This means we can model the relay race using only two interactions: Alice's effect on Bob, and Bob's effect on Carol. We can omit the third, indirect interaction between Alice and Carol from our model. Structured probabilistic models provide a formal framework for modeling only direct interactions between random variables. This allows the models to have
significantly fewer parameters, which can in turn be estimated reliably from less data. These smaller models also have dramatically reduced computational cost in terms of storing the model, performing inference in the model, and drawing samples from the model.
13.2 Using Graphs to Describe Model Structure
Structured probabilistic models use graphs (in the graph theory sense of "nodes" or "vertices" connected by edges) to represent interactions between random variables. Each node represents a random variable. Each edge represents a direct interaction. These direct interactions imply other, indirect interactions, but only the direct interactions need to be explicitly modeled.

There is more than one way to describe the interactions in a probability distribution using a graph. In the following sections we describe some of the most popular and useful approaches.
13.2.1 Directed Models
One kind of structured probabilistic model is the directed graphical model, otherwise known as the belief network or Bayesian network² (Pearl, 1985). Directed graphical models are called "directed" because their edges are directed, that is, they point from one vertex to another. This direction is represented in the drawing with an arrow. The direction of the arrow indicates which variable's probability distribution is defined in terms of the other's. Drawing an arrow from a to b means that we define the probability distribution over b via a conditional distribution, with a as one of the variables on the right side of the conditioning bar. In other words, the distribution over b depends on the value of a.

Let's continue with the relay race example from Section 13.1. Suppose we name Alice's finishing time t_0, Bob's finishing time t_1, and Carol's finishing time t_2. As we saw earlier, our estimate of t_1 depends on t_0. Our estimate of t_2 depends directly on t_1 but only indirectly on t_0. We can draw this relationship in a directed graphical model, illustrated in Fig. 13.2.

Formally, a directed graphical model defined on variables x is defined by a directed acyclic graph G whose vertices are the random variables in the model, and a set of local conditional probability distributions p(x_i | Pa_G(x_i)), where Pa_G(x_i)
² Judea Pearl suggested using the term Bayes Network when one wishes to "emphasize the judgmental" nature of the values computed by the network, i.e. to highlight that they usually represent degrees of belief rather than frequencies of events.
Figure 13.2: A directed graphical model depicting the relay race example. Alice's finishing time t_0 influences Bob's finishing time t_1, because Bob does not get to start running until Alice finishes. Likewise, Carol only gets to start running after Bob finishes, so Bob's finishing time t_1 influences Carol's finishing time t_2.
gives the parents of x_i in G. The probability distribution over x is given by

p(x) = Π_i p(x_i | Pa_G(x_i)).

In our relay race example, this means that, using the graph drawn in Fig. 13.2,

p(t_0, t_1, t_2) = p(t_0) p(t_1 | t_0) p(t_2 | t_1).

This is our first time seeing a structured probabilistic model in action. We can examine the cost of using it, in order to observe how structured modeling has many advantages relative to unstructured modeling.

Suppose we represented time by discretizing time ranging from minute 0 to minute 10 into 6 second chunks. This would make t_0, t_1, and t_2 each be discrete variables with 100 possible values. If we attempted to represent p(t_0, t_1, t_2) with a table, it would need to store 999,999 values (100 values of t_0 × 100 values of t_1 × 100 values of t_2, minus 1, since the probability of one of the configurations is made redundant by the constraint that the sum of the probabilities be 1). If instead we only make a table for each of the conditional probability distributions, then the distribution over t_0 requires 99 values, the table defining t_1 given t_0 requires 9900 values, and so does the table defining t_2 given t_1. This comes to a total of 19,899 values. This means that using the directed graphical model reduced our number of parameters by a factor of more than 50!

In general, to model n discrete variables each having k values, the cost of the single table approach scales like O(k^n), as we've observed before. Now suppose we build a directed graphical model over these variables. If m is the maximum number of variables appearing (on either side of the conditioning bar) in a single conditional probability distribution, then the cost of the tables for the directed model scales like O(k^m). As long as we can design a model such that m ≪ n, we get very dramatic savings.
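To make the parameter counting above concrete, here is a short illustrative Python sketch (not code from the text) that reproduces the two counts for the relay race with k = 100 bins per variable:

```python
# Count the number of free parameters needed to represent the relay race
# distribution p(t0, t1, t2) = p(t0) p(t1 | t0) p(t2 | t1), with each
# finishing time discretized into k bins.  Illustrative sketch only.

def full_table_parameters(k, n_vars):
    # One probability per joint configuration, minus one for the sum-to-1 constraint.
    return k ** n_vars - 1

def chain_model_parameters(k):
    p_t0 = k - 1                   # marginal distribution over t0
    p_t1_given_t0 = k * (k - 1)    # one (k-1)-parameter distribution per value of t0
    p_t2_given_t1 = k * (k - 1)    # one (k-1)-parameter distribution per value of t1
    return p_t0 + p_t1_given_t0 + p_t2_given_t1

if __name__ == "__main__":
    k = 100
    print(full_table_parameters(k, 3))   # 999999
    print(chain_model_parameters(k))     # 19899
```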
13.2.4 Energy-Based Models

Many interesting theoretical results about undirected models depend on the assumption that ∀x, p̃(x) > 0. A convenient way to enforce this condition is to use an energy-based model (EBM), where

p̃(x) = exp(−E(x))    (13.1)
and E(x) is known as the energy function. Because exp(z) is positive for all z, this guarantees that no energy function will result in a probability of zero for any state x. Being completely free to choose the energy function makes learning simpler. If we learned the clique potentials directly, we would need to use constrained optimization, and we would need to arbitrarily impose some specific minimal probability value. By learning the energy function, we can use unconstrained optimization⁵, and the probabilities in the model can approach arbitrarily close to zero but never reach it.

Any distribution of the form given by equation 13.1 is an example of a Boltzmann distribution. For this reason, many energy-based models are called Boltzmann machines. There is no accepted guideline for when to call a model an energy-based model and when to call it a Boltzmann machine. The term Boltzmann machine was first introduced to describe a model with exclusively binary variables, but today many models such as the mean-covariance restricted Boltzmann machine incorporate real-valued variables as well.
⁵ For some models, we may still need to use constrained optimization to make sure Z exists.
Figure 13.5: This graph implies that E(a, b, c, d, e, f) can be written as E_{a,b}(a, b) + E_{b,c}(b, c) + E_{a,d}(a, d) + E_{b,e}(b, e) + E_{e,f}(e, f) for an appropriate choice of the per-clique energy functions. Note that we can obtain the φ functions in Fig. 13.4 by setting each φ to the exp of the corresponding negative energy, e.g., φ_{a,b}(a, b) = exp(−E_{a,b}(a, b)).
Cliques in an undirected graph correspond to factors of the unnormalized probability function. Because exp(a) exp(b) = exp(a + b), different cliques in the undirected graph correspond to the different terms of the energy function. In other words, an energy-based model is just a special kind of Markov network: the exponentiation makes each term in the energy function correspond to a factor for a different clique. See Fig. 13.5 for an example of how to read the form of the energy function from an undirected graph structure.

One part of the definition of an energy-based model serves no functional purpose from a machine learning point of view: the − sign in Eq. 13.1. This − sign could be incorporated into the definition of E, or for many functions E the learning algorithm could simply learn parameters with opposite sign. The − sign is present primarily to preserve compatibility between the machine learning literature and the physics literature. Many advances in probabilistic modeling were originally developed by statistical physicists, for whom E refers to actual, physical energy and does not have arbitrary sign. Terminology such as "energy" and "partition function" remains associated with these techniques, even though their mathematical applicability is broader than the physics context in which they were developed. Some machine learning researchers (e.g., Smolensky (1986), who referred to negative energy as harmony) have chosen to omit the negation, but this is not the standard convention.
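As an illustration of how per-clique energy terms multiply together into an unnormalized probability, here is a hedged Python sketch for the graph of Fig. 13.5 with binary variables; the pairwise energy tables are arbitrary placeholders, not values from the text:

```python
import itertools
import numpy as np

# Arbitrary pairwise energy tables E_{a,b}(a, b), etc., for binary variables.
rng = np.random.default_rng(0)
cliques = {("a", "b"): rng.normal(size=(2, 2)),
           ("b", "c"): rng.normal(size=(2, 2)),
           ("a", "d"): rng.normal(size=(2, 2)),
           ("b", "e"): rng.normal(size=(2, 2)),
           ("e", "f"): rng.normal(size=(2, 2))}
variables = ["a", "b", "c", "d", "e", "f"]

def energy(assignment):
    # Total energy is the sum of the per-clique energies.
    return sum(table[assignment[u], assignment[v]]
               for (u, v), table in cliques.items())

def unnormalized_prob(assignment):
    # p~(x) = exp(-E(x)); each clique contributes a factor exp(-E_clique).
    return np.exp(-energy(assignment))

# The partition function sums p~ over all 2^6 joint configurations.
Z = sum(unnormalized_prob(dict(zip(variables, bits)))
        for bits in itertools.product([0, 1], repeat=len(variables)))
print(Z)
```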
13.2.5 Separation and D-Separation
The edges in a graphical model tell us which variables directly interact. We often need to know which variables indirectly interact. Some of these indirect interactions can be enabled or disabled by observing other variables. More formally, we would like to know which subsets of variables are conditionally independent from each other, given the values of other subsets of variables.
Figure 13.6: a) The path between random variable a and random variable b through s is active, because s is not observed. This means that a and b are not separated. b) Here s is shaded in, to indicate that it is observed. Because the only path between a and b is through s, and that path is inactive, we can conclude that a and b are separated given s.
Figure 13.7: An example of reading separation properties from an undirected graph. Here b is shaded to indicate that it is observed. Because observing b blocks the only path from a to c, we say that a and c are separated from each other given b. The observation of b also blocks one path between a and d, but there is a second, active path between them. Therefore, a and d are not separated given b.
Identifying the conditional independences in a graph is very simple in the case of undirected models. In this case, conditional independence implied by the graph is called separation. We say that a set of variables A is separated from another set of variables B given a third set of variables S if the graph structure implies that A is independent from B given S. If two variables a and b are connected by a path involving only unobserved variables, then those variables are not separated. If no path exists between them, or all paths contain an observed variable, then they are separated. We refer to paths involving only unobserved variables as "active" and paths including an observed variable as "inactive."

When we draw a graph, we can indicate observed variables by shading them in. See Fig. 13.6 for a depiction of how active and inactive paths in an undirected graph look when drawn in this way. See Fig. 13.7 for an example of reading separation from an undirected graph.

Similar concepts apply to directed models, except that in the context of directed models, these concepts are referred to as d-separation. The "d" stands for "dependence." D-separation for directed graphs is defined the same as separation for undirected graphs: We say that a set of variables A is d-separated from another set of variables B given a third set of variables S if the graph structure
implies that A is independent from B given S.

As with undirected models, we can examine the independences implied by the graph by looking at what active paths exist in the graph. As before, two variables are dependent if there is an active path between them, and d-separated if no such path exists. In directed nets, determining whether a path is active is somewhat more complicated. See Fig. 13.8 for a guide to identifying active paths in a directed model. See Fig. 13.9 for an example of reading some properties from a graph.

It is important to remember that separation and d-separation tell us only about those conditional independences that are implied by the graph. There is no requirement that the graph imply all independences that are present. In particular, it is always legitimate to use the complete graph (the graph with all possible edges) to represent any distribution. In fact, some distributions contain independences that are not possible to represent with existing graphical notation. Context-specific independences are independences that are present dependent on the value of some variables in the network. For example, consider a model of three binary variables, a, b, and c. Suppose that when a is 0, b and c are independent, but when a is 1, b is deterministically equal to c. Encoding the behavior when a = 1 requires an edge connecting b and c. The graph then fails to indicate that b and c are independent when a = 0.

In general, a graph will never imply that an independence exists when it does not. However, a graph may fail to encode an independence.
13.2.6 Converting Between Undirected and Directed Graphs
In common parlance, we often refer to certain model classes as being undirected or directed. For example, we typically refer to RBMs as undirected and sparse coding as directed. This way of speaking can be somewhat misleading, because no probabilistic model is inherently directed or undirected. Instead, some models are most easily described using a directed graph, and others are most easily described using an undirected graph.

Every probability distribution can be represented by either a directed model or by an undirected model. In the worst case, one can always represent any distribution by using a "complete graph." In the case of a directed model, the complete graph is any directed acyclic graph where we impose some ordering on the random variables, and each variable has all other variables that precede it in the ordering as its ancestors in the graph. For an undirected model, the complete graph is simply a graph containing a single clique encompassing all of the variables.

Of course, the utility of a graphical model is that the graph implies that some variables do not interact directly. The complete graph is not very useful because
Figure 13.8: All of the kinds of active paths of length two that can exist between random variables a and b. a) Any path with arrows proceeding directly from a to b or vice versa. This kind of path becomes blocked if s is observed. We have already seen this kind of path in the relay race example. b) a and b are connected by a common cause s. For example, suppose s is a variable indicating whether or not there is a hurricane and a and b measure the wind speed at two different nearby weather monitoring outposts. If we observe very high winds at station a, we might expect to also see high winds at b. This kind of path can be blocked by observing s. If we already know there is a hurricane, we expect to see high winds at b, regardless of what is observed at a. A lower than expected wind at a (for a hurricane) would not change our expectation of winds at b (knowing there is a hurricane). However, if s is not observed, then a and b are dependent, i.e., the path is active. c) a and b are both parents of s. This is called a V-structure or the collider case, and it causes a and b to be related by the explaining away effect. In this case, the path is actually active when s is observed. For example, suppose s is a variable indicating that your colleague is not at work. The variable a represents her being sick, while b represents her being on vacation. If you observe that she is not at work, you can presume she is probably sick or on vacation, but it's not especially likely that both have happened at the same time. If you find out that she is on vacation, this fact is sufficient to explain her absence, and you can infer that she is probably not also sick. d) The explaining away effect happens even if any descendant of s is observed! For example, suppose that c is a variable representing whether you have received a report from your colleague. If you notice that you have not received the report, this increases your estimate of the probability that she is not at work today, which in turn makes it more likely that she is either sick or on vacation. The only way to block a path through a V-structure is to observe none of the descendants of the shared child.
Figure 13.9: From this graph, we can read out several d-separation properties. Examples include:
• a and b are d-separated given the empty set.
• a and e are d-separated given c.
• d and e are d-separated given c.
We can also see that some variables are no longer d-separated when we observe some variables:
• a and b are not d-separated given c.
• a and b are not d-separated given d.
it does not imply any independences. TODO: figure of the complete graph.

When we represent a probability distribution with a graph, we want to choose a graph that implies as many independences as possible, without implying any independences that do not actually exist. From this point of view, some distributions can be represented more efficiently using directed models, while other distributions can be represented more efficiently using undirected models. In other words, directed models can encode some independences that undirected models cannot encode, and vice versa.

Directed models are able to use one specific kind of substructure that undirected models cannot represent perfectly. This substructure is called an immorality. The structure occurs when two random variables a and b are both parents of a third random variable c, and there is no edge directly connecting a and b in either direction. (The name "immorality" may seem strange; it was coined in the graphical models literature as a joke about unmarried parents.) To convert a directed model with graph D into an undirected model, we need to create a new graph U. For every pair of variables x and y, we add an undirected edge connecting x and y to U if there is a directed edge (in either direction) connecting x and y in D or if x and y are both parents in D of a third variable z. The resulting U is known as a moralized graph. See Fig. 13.10 for examples of converting directed models to undirected models via moralization.

Likewise, undirected models can include substructures that no directed model can represent perfectly. Specifically, a directed graph D cannot capture all of the conditional independences implied by an undirected graph U if U contains a loop of length greater than three, unless that loop also contains a chord. A loop is a sequence of variables connected by undirected edges, with the last variable in the sequence connected back to the first variable in the sequence. A chord is a connection between any two non-consecutive variables in this sequence. If U has loops of length four or greater and does not have chords for these loops, we must add the chords before we can convert it to a directed model. Adding these chords discards some of the independence information that was encoded in U. The graph formed by adding chords to U is known as a chordal or triangulated graph, because all the loops can now be described in terms of smaller, triangular loops. To build a directed graph D from the chordal graph, we need to also assign directions to the edges. When doing so, we must not create a directed cycle in D, or the result does not define a valid directed probabilistic model. One way to assign directions to the edges in D is to impose an ordering on the random variables, then point each edge from the node that comes earlier in the ordering to the node that comes later in the ordering.

TODO: point to fig (IG HERE). TODO: started this above, need to scrap some.
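As a concrete illustration of the moralization procedure described above, here is a small Python sketch (illustrative code, not from the text) that builds the moralized undirected graph U from a directed graph D given as a parent dictionary:

```python
from itertools import combinations

def moralize(parents):
    """Build the moralized undirected graph U from a directed graph D.

    `parents` maps each variable to the set of its parents in D.
    Returns a set of frozenset edges for U.
    """
    edges = set()
    for child, pa in parents.items():
        # Keep every directed edge as an undirected edge.
        for p in pa:
            edges.add(frozenset((p, child)))
        # "Marry" every pair of parents of the same child.
        for p, q in combinations(sorted(pa), 2):
            edges.add(frozenset((p, q)))
    return edges

# The single-immorality example from Fig. 13.10: a -> c <- b.
# Moralization adds the edge a-b in addition to a-c and b-c.
print(moralize({"a": set(), "b": set(), "c": {"a", "b"}}))
```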
Figure 13.10: Examples of converting directed models to undirected models by constructing moralized graphs. Left) This simple chain can be converted to a moralized graph merely by replacing its directed edges with undirected edges. The resulting undirected model implies exactly the same set of independences and conditional independences. Center) This graph is the simplest directed model that cannot be converted to an undirected model without losing some independences. This graph consists entirely of a single immorality. Because a and b are parents of c, they are connected by an active path when c is observed. To capture this dependence, the undirected model must include a clique encompassing all three variables. This clique fails to encode the fact that a⊥b. Right) In general, moralization may add many edges to the graph, thus losing many implied independences. For example, this sparse coding graph requires adding moralizing edges between every pair of latent variables, thus introducing a quadratic number of new direct dependences.
Some BNs encode independences that MNs can't encode, and vice versa.
• Example of a BN that an MN can't encode: A and B are parents of C. A is d-separated from B given the empty set. The Markov net requires a clique over A, B, and C in order to capture the active path from A to B when C is observed. This clique means that the graph cannot imply that A is separated from B given the empty set.
• Example of an MN that a BN can't encode: A, B, C, D connected in a loop. A BN cannot have both A d-separated from D given B, C and B d-separated from C given A, D.

In many cases, we may want to convert an undirected model to a directed model, or vice versa. To do so, we choose the graph in the new format that implies as many independences as possible, while not implying any independences that were not implied by the original graph. To convert a directed model D to an undirected model U, we ... TODO: conversion between directed and undirected models.
13.2.7 Marginalizing Variables out of a Graph
TODO: marginalizing variables out of a graph
13.2.8 Factor Graphs
Factor graphs are another way of drawing undirected models that resolve an ambiguity in the graphical representation of standard undirected model syntax. In an undirected model, the scope of every φ function must be a subset of some clique in the graph. However, it is not necessary that there exist any φ whose scope contains the entirety of every clique. Factor graphs explicitly represent the scope of each φ function. Specifically, a factor graph is a graphical representation of an undirected model that consists of a bipartite undirected graph. Some of the nodes are drawn as circles. These nodes correspond to random variables as in a standard undirected model. The rest of the nodes are drawn as squares. These nodes correspond to the factors φ of the unnormalized probability distribution. Variables and factors may be connected with undirected edges. A variable and a factor are connected in the graph if and only if the variable is one of the arguments to the factor in the unnormalized probability distribution. No factor may be connected to another factor in the graph, nor can a variable be connected to a variable. See Fig. 13.11 for an example of how factor graphs can resolve ambiguity in the interpretation of undirected networks.
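To make the bipartite structure concrete, here is a short hedged Python sketch of a factor graph like the one in panel (c) of Fig. 13.11: three pairwise factors over variables a, b, c, stored as an explicit variable-factor adjacency (the factor tables are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Variable nodes (circles) and factor nodes (squares) of a bipartite factor graph.
variables = ["a", "b", "c"]
factors = {
    "f1": (("a", "b"), rng.random((2, 2))),   # f1 touches a and b
    "f2": (("b", "c"), rng.random((2, 2))),   # f2 touches b and c
    "f3": (("a", "c"), rng.random((2, 2))),   # f3 touches a and c
}

def unnormalized_prob(assignment):
    # The unnormalized probability is the product of all factor values.
    p = 1.0
    for scope, table in factors.values():
        idx = tuple(assignment[v] for v in scope)
        p *= table[idx]
    return p

print(unnormalized_prob({"a": 0, "b": 1, "c": 1}))
```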
13.3
TODO: note that we have already shown that some things are cheaper in the sections where we introduce the modeling syntax.
Figure 13.11: An example of how a factor graph can resolve ambiguity in the interpretation of undirected networks. a) An undirected network with a clique involving three variables a, b, and c. b) A factor graph corresponding to the same undirected model. This factor graph has one factor over all three variables. c) Another valid factor graph for the same undirected model. This factor graph has three factors, each over only two variables. Note that representation, inference, and learning are all asymptotically cheaper in (c) compared to (b), even though both require the same undirected graph to represent. TODO: make sure figure respects random variable notation
TODO: revisit each of the three challenges from sec:unstructured.
TODO: hammer point that graphical models convey information by leaving edges out.
TODO: need to show reduced cost of sampling, but first reader needs to know about ancestral and Gibbs sampling.
TODO: benefit of separating representation from learning and inference.
13.4
We consider here two types of random variables: observed or “visible” variables v and latent or “hidden” variables h. The observed variables v correspond to the variables actually provided in the data set during training. h consists of variables that are introduced to the model in order to help it explain the structure in v. Generally the exact semantics of h depend on the model parameters and are created by the learning algorithm. The motivation for this is twofold.
13.4.1 Latent Variables Versus Structure Learning
Often the different elements of v are highly dependent on each other. A good model of v which did not contain any latent variables would need to have very large numbers of parents per node in a Bayesian network or very large cliques in a Markov network. Just representing these higher order interactions is costly: both in a computational sense, because the number of parameters that must be stored
in memory scales exponentially with the number of members in a clique, but also in a statistical sense, because this exponential number of parameters requires a wealth of data to estimate accurately.

There is also the problem of learning which variables need to be in such large cliques. An entire field of machine learning called structure learning is devoted to this problem. For a good reference on structure learning, see (Koller and Friedman, 2009). Most structure learning techniques are a form of greedy search. A structure is proposed, a model with that structure is trained, then given a score. The score rewards high training set accuracy and penalizes model complexity. Candidate structures with a small number of edges added or removed are then proposed as the next step of the search, and the search proceeds to a new structure that is expected to increase the score.

Using latent variables instead of adaptive structure avoids the need to perform discrete searches and multiple rounds of training. A fixed structure over visible and hidden variables can use direct interactions between visible and hidden units to impose indirect interactions between visible units. Using simple parameter learning techniques we can learn a model with a fixed structure that imputes the right structure on the marginal p(v).
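The greedy search loop described above can be summarized with the following rough Python sketch. The scoring function here is a stand-in placeholder (a real structure learner would train a model for each candidate structure and score it with a penalized likelihood), so treat this as an outline of the control flow rather than a working structure learner:

```python
from itertools import combinations

def greedy_structure_search(variables, score, max_steps=100):
    """Hill-climbing over edge sets.  `score(edges)` is a user-supplied function
    that (conceptually) trains a model with that structure and returns a score."""
    edges = set()                      # start from the empty graph
    candidates = {frozenset(e) for e in combinations(variables, 2)}
    best_score = score(edges)
    for _ in range(max_steps):
        best_move, best_move_score = None, best_score
        for e in candidates:
            proposal = edges ^ {e}     # propose adding or removing a single edge
            s = score(proposal)
            if s > best_move_score:
                best_move, best_move_score = proposal, s
        if best_move is None:          # no single-edge change improves the score
            break
        edges, best_score = best_move, best_move_score
    return edges, best_score

# Toy placeholder score: reward matching a target structure, penalize edge count.
target = {frozenset(("a", "b")), frozenset(("b", "c"))}
toy_score = lambda edges: -len(edges ^ target) - 0.01 * len(edges)
print(greedy_structure_search(["a", "b", "c", "d"], toy_score))
```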
13.4.2 Latent Variables for Feature Learning
Another advantage of using latent variables is that they often develop useful semantics. As discussed in section 3.10.5, the mixture of Gaussians model learns a latent variable that corresponds to which category of examples the input was drawn from. This means that the latent variable in a mixture of Gaussians model can be used to do classification. In Chapter 15 we saw how simple probabilistic models like sparse coding learn latent variables that can be used as input features for a classifier, or as coordinates along a manifold. Other models can be used in this same way, but deeper models and models with different kinds of interactions can create even richer descriptions of the input. Most of the approaches mentioned in sec. 13.4.2 accomplish feature learning by learning latent variables. Often, given some model of v and h, it turns out that E[h | v] (TODO: uh-oh, is there a collision between set notation and expectation notation?) or arg max_h p(h, v) is a good feature mapping for v.

TODO: appropriate links to Monte Carlo methods chapter spun off from here.
13.5 Inference and Approximate Inference Over Latent Variables
As soon as we introduce latent variables in a graphical model, this raises the question: how do we choose values of the latent variables h given values of the visible variables x? This is what we call inference, in particular inference over the latent variables. The general question of inference is to guess some variables given others.

TODO: inference has definitely been introduced above...
TODO: mention loopy BP, show how it is very expensive for DBMs.
TODO: briefly explain what variational inference is and reference approximate inference chapter.
13.5.1 Reparametrization Trick
Sometimes, in order to estimate the stochastic gradient of an expected loss over some random variable h, with respect to parameters that influence h, we would like to compute gradients through h, i.e., on the parameters that influenced the probability distribution from which h was sampled. If h is continuous-valued, this is generally possible by using the reparametrization trick, i.e., rewriting

h ∼ p(h | θ)    (13.2)

as

h = f(θ, η)    (13.3)
where η is some independent noise source of the appropriate dimension with density p(η), and f is a continuous (differentiable almost everywhere) function. Basically, the reparametrization trick is the idea that if the random variable to be integrated over is continuous, we can back-propagate through the process that gave rise to it in order to figure out how to change that process.

For example, let us suppose we want to estimate the expected gradient

∂/∂θ ∫ L(h) p(h | θ) dh    (13.4)

where the parameters θ influence the random variable h, which in turn influences our loss L. A very efficient (Kingma and Welling, 2014b; Rezende et al., 2014) way to achieve⁶ this is to perform the reparametrization in Eq. 13.3 and the corresponding change of variable in the integral of Eq. 13.4, integrating over η rather than h:

∂/∂θ ∫ L(f(θ, η)) p(η) dη.    (13.5)
⁶ Compared to approaches that do not back-propagate through the generation of h.
We can now move the derivative inside the integral, getting

g = ∫ [∂L(f(θ, η))/∂θ] p(η) dη.

Finally, we get a stochastic gradient estimator

ĝ = ∂L(f(θ, η))/∂θ
where we sampled η ∼ p(η) and E[ĝ] = g. This trick was used by Bengio (2013); Bengio et al. (2013a) to train a neural network with stochastic hidden units. It was described at the same time by Kingma (2013), but see the further developments in Kingma and Welling (2014b). It was used to train generative stochastic networks (GSNs) (Bengio et al., 2014a,b), described in Section 20.11, which can be viewed as recurrent networks with noise injected both in input and hidden units (with each time step corresponding to one step of a generative Markov chain). The reparametrization trick was also used to estimate the parameter gradient in variational autoencoders (Kingma and Welling, 2014a; Rezende et al., 2014; Kingma et al., 2014), which are described in Section 20.9.3.
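As a minimal numerical illustration of Eqs. 13.3–13.5 (an illustrative sketch, not code from the text), suppose h ∼ N(θ, 1) and the loss is L(h) = h². Writing h = θ + η with η ∼ N(0, 1) gives the single-sample estimator ĝ = ∂L/∂θ = 2(θ + η), whose expectation is the true gradient 2θ:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.5

def loss(h):
    return h ** 2          # L(h) = h^2, so E[L(h)] = theta^2 + 1 when h ~ N(theta, 1)

def reparam_grad_estimate(theta, n_samples=100000):
    eta = rng.standard_normal(n_samples)   # eta ~ p(eta), independent of theta
    h = theta + eta                        # h = f(theta, eta), so h ~ N(theta, 1)
    # dL/dtheta = dL/dh * dh/dtheta = 2 h * 1, averaged over samples of eta
    return np.mean(2.0 * h)

print(reparam_grad_estimate(theta))   # close to the exact gradient 2 * theta = 3.0
```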
13.6 The Deep Learning Approach to Structured Probabilistic Models
Deep learning practitioners generally use the same basic computational tools as other machine learning practitioners who work with structured probabilistic models. However, in the context of deep learning, we usually make different design decisions about how to combine these tools, resulting in overall algorithms and models that have a very different flavor from more traditional graphical models.

The most striking difference between the deep learning style of graphical model design and the traditional style of graphical model design is that the deep learning style heavily emphasizes the use of latent variables. Deep learning models typically have more latent variables than observed variables. Moreover, the practitioner typically does not intend for the latent variables to take on any specific semantics ahead of time; the training algorithm is free to invent the concepts it needs to model a particular dataset. The latent variables are usually not very easy for a human to interpret after the fact, though visualization techniques may allow some rough characterization of what they represent. Complicated non-linear interactions between variables are accomplished via indirect connections that flow through multiple latent variables. By contrast, traditional graphical models usually contain variables that are at least occasionally observed, even if many of the
variables are missing at random from some training examples. Complicated non-linear interactions between variables are modeled by using higher-order terms, with structure learning algorithms used to prune connections and control model capacity. When latent variables are used, they are often designed with some specific semantics in mind: the topic of a document, the intelligence of a student, the disease causing a patient's symptoms, etc. These models are often much more interpretable by human practitioners and often have more theoretical guarantees, yet are less able to scale to complex problems and are not reusable in as many different contexts as deep models.

Another obvious difference is the kind of graph structure typically used in the deep learning approach. This is tightly linked with the choice of inference algorithm. Traditional approaches to graphical models typically aim to maintain the tractability of exact inference. When this constraint is too limiting, a popular approximate inference algorithm is loopy belief propagation. Both of these approaches often work well with very sparsely connected graphs. By comparison, very few interesting deep models admit exact inference, and loopy belief propagation is almost never used for deep learning. Most deep models are designed to make Gibbs sampling or variational inference algorithms, rather than loopy belief propagation, efficient. Another consideration is that deep learning models contain a very large number of latent variables, making efficient numerical code essential. As a result of these design constraints, most deep learning models are organized into regular repeating patterns of units grouped into layers, but neighboring layers may be fully connected to each other. When sparse connections are used, they usually follow a regular pattern, such as the block connections used in convolutional models.

Finally, the deep learning approach to graphical modeling is characterized by a marked tolerance of the unknown. Rather than simplifying the model until all quantities we might want can be computed exactly, we increase the power of the model until it is just barely possible to train or use. We often use models whose marginal distributions cannot be computed, and are satisfied simply to draw approximate samples from these models. We often train models with an intractable objective function that we cannot even approximate in a reasonable amount of time, but we are still able to approximately train the model if we can efficiently obtain an estimate of the gradient of such a function. The deep learning approach is often to figure out what the minimum amount of information we absolutely need is, and then to figure out how to get a reasonable approximation of that information as quickly as possible.
Figure 13.12: An example RBM drawn as a Markov network
13.6.1 Example: The Restricted Boltzmann Machine
TODO: rework this section. Add pointer to Chapter 20.2. TODO: what do we want to exemplify here?

The restricted Boltzmann machine (RBM) (Smolensky, 1986) or harmonium is an example of a model that (TODO: what do we want to exemplify here?). It is an energy-based model with binary visible and hidden units. Its energy function is

E(v, h) = −b⊤v − c⊤h − v⊤Wh,

where b, c, and W are unconstrained, real-valued, learnable parameters. The model is depicted graphically in Fig. 13.12. As this figure makes clear, an important aspect of this model is that there are no direct interactions between any two visible units or between any two hidden units (hence the "restricted"; a general Boltzmann machine may have arbitrary connections).

The restrictions on the RBM structure yield the nice properties

p(h | v) = Π_i p(h_i | v)   and   p(v | h) = Π_i p(v_i | h).

The individual conditionals are simple to compute as well, for example

p(h_i = 1 | v) = σ(v⊤W_{:,i} + c_i).
Together these properties allow for efficient block Gibbs sampling, alternating between sampling all of h simultaneously and sampling all of v simultaneously.
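A hedged NumPy sketch of this block Gibbs procedure follows (illustrative code with randomly initialized parameters, not an implementation from the text); it alternates between sampling all hidden units given the visible units and all visible units given the hidden units, using the conditionals given above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_v, n_h = 6, 4
W = 0.1 * rng.standard_normal((n_v, n_h))   # weights
b = np.zeros(n_v)                           # visible biases
c = np.zeros(n_h)                           # hidden biases

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def sample_h_given_v(v):
    # p(h_i = 1 | v) = sigma(v^T W_{:,i} + c_i); all hidden units sampled in parallel
    return (rng.random(n_h) < sigmoid(v @ W + c)).astype(float)

def sample_v_given_h(h):
    # p(v_j = 1 | h) = sigma(W_{j,:} h + b_j); all visible units sampled in parallel
    return (rng.random(n_v) < sigmoid(W @ h + b)).astype(float)

def block_gibbs(v0, n_steps=1000):
    v = v0
    for _ in range(n_steps):
        h = sample_h_given_v(v)
        v = sample_v_given_h(h)
    return v, h

v, h = block_gibbs(rng.integers(0, 2, n_v).astype(float))
print(v, h)
```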
Since the energy function itself is just a linear function of the parameters, it is easy to take the needed derivatives. For example,

∂E(v, h)/∂W_{i,j} = −v_i h_j.

These two properties, efficient Gibbs sampling and efficient derivatives, make it possible to train the RBM with stochastic approximations to ∇_θ log Z.
13.6.2 The Computational Challenge with High-Dimensional Distributions
TODO: this whole section should probably just be cut; IG thinks YB has written the same thing in 2-3 other places (ml.tex for sure, and maybe also manifolds.tex and prob.tex, possibly others IG hasn't read yet). YB doesn't seem to have read the intro part of this chapter which discusses these things in more detail; double check to make sure there's not anything left out above. If this section is kept, it needs cleanup, i.e. a instead of A, etc. If this section is cut, need to search for refs to it and move them to one of the other versions of it.

High-dimensional random variables actually bring two challenges: a statistical challenge and a computational challenge. The statistical challenge was introduced in Section 5.13 and regards generalization: the number of configurations we may want to distinguish can grow exponentially with the number of dimensions of interest, and this quickly becomes much larger than the number of examples one can possibly have (or use with bounded computational resources).

The computational challenge associated with high-dimensional distributions arises because many algorithms for learning or using a trained model (especially those based on estimating an explicit probability function) involve intractable computations that grow exponentially with the number of dimensions. With probabilistic models, this computational challenge arises because of intractable sums (summing over an exponential number of configurations) or intractable maximizations (finding the best out of an intractable number of configurations), discussed mostly in the third part of this book.

• Intractable inference: inference is discussed mostly in Chapter 19. It regards the question of guessing the probable values of some variables A, given other variables B, with respect to a model that captures the joint distribution between A, B and C. In order to even compute such conditional probabilities one needs to sum over the values of the variables C, as well as compute a normalization constant which sums over the values of A and C.
• Intractable normalization constants (the partition function): the partition function is discussed mostly in Chapter 18. Normalizing constants of probability functions come up in inference (above) as well as in learning. Many probabilistic models involve such a constant. Unfortunately, the parameters (which we want to tune) influence that constant, and computing the gradient of the partition function with respect to the parameters is generally as intractable as computing the partition function itself. Markov chain Monte Carlo (MCMC) methods (Chapter 14) are often used to deal with the partition function (computing it or its gradient) but they may also suffer from the curse of dimensionality, when the number of modes of the distribution of interest is very large, and these modes are well separated (Section 14.2).

One way to confront these intractable computations is to approximate them, and many approaches have been proposed, discussed in the chapters listed above. Another interesting way would be to avoid these intractable computations altogether by design, and methods that do not require such computations are thus very appealing. Several generative models based on auto-encoders have been proposed in recent years, with that motivation, and are discussed at the end of Chapter 20.
Chapter 14
Monte Carlo Methods

TODO: plan organization of chapter (spun off from graphical models chapter).
14.1 Markov Chain Monte Carlo Methods
Drawing a sample x from the probability distribution p(x) defined by a structured model is an important operation. The following techniques are described in (Koller and Friedman, 2009).

Sampling from an energy-based model is not straightforward. Suppose we have an EBM defining a distribution p(a, b). In order to sample a, we must draw it from p(a | b), and in order to sample b, we must draw it from p(b | a). It seems to be an intractable chicken-and-egg problem. Directed models avoid this because their graph G is directed and acyclic. In ancestral sampling one simply samples each of the variables in topological order, conditioning on each variable's parents, which are guaranteed to have already been sampled. This defines an efficient, single-pass method of obtaining a sample.

In an EBM, it turns out that we can get around this chicken-and-egg problem by sampling using a Markov chain. A Markov chain is defined by a state x and a transition distribution T(x′ | x). Running the Markov chain means repeatedly updating the state x to a value x′ sampled from T(x′ | x). Under certain conditions, a Markov chain is eventually guaranteed to draw x from an equilibrium distribution π(x′), defined by the condition

∀x′, π(x′) = Σ_x T(x′ | x) π(x).
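As an illustrative sketch of ancestral sampling (not code from the text), the relay race model of Fig. 13.2 can be sampled in a single pass by drawing t0, then t1 given t0, then t2 given t1; the Gaussian conditionals below are made-up placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def ancestral_sample():
    # Sample variables in topological order: t0, then t1 | t0, then t2 | t1.
    t0 = rng.normal(loc=5.0, scale=0.5)            # Alice's finishing time
    t1 = rng.normal(loc=t0 + 5.0, scale=0.5)       # Bob starts when Alice finishes
    t2 = rng.normal(loc=t1 + 5.0, scale=0.5)       # Carol starts when Bob finishes
    return t0, t1, t2

print(ancestral_sample())
```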
TODO: this vector / matrix view needs a whole lot more exposition; only literally a vector / matrix when the state is discrete; unpack into multiple sentences;
the parenthetical is hard to parse; is the term "stochastic matrix" defined anywhere? make sure it's in the index at least; whoever finishes writing this section should also finish making the math notation consistent; terms in this section need to be in the index.

We can think of π as a vector (with the probability for each possible value x in the element indexed by x, π(x)) and T as a corresponding stochastic matrix (with row index x′ and column index x), i.e., with non-negative entries that sum to 1 over elements of a column. Then, the above equation becomes

T π = π,

an eigenvector equation that says that π is an eigenvector of T with eigenvalue 1. It can be shown (Perron-Frobenius theorem) that this is the largest possible eigenvalue, and the only one with value 1 under mild conditions (for example T(x′ | x) > 0). We can also see this equation as a fixed point equation for the update of the distribution associated with each step of the Markov chain. If we start a chain by picking x_0 ∼ p_0, then we get a distribution p_1 = T p_0 after one step, and p_t = T p_{t−1} = T^t p_0 after t steps. If this recursion converges (the chain has a so-called stationary distribution), then it converges to a fixed point which is precisely p_t = π for t → ∞, and the dynamical systems view meets and agrees with the eigenvector view.

This condition guarantees that repeated applications of the transition sampling procedure don't change the distribution over the state of the Markov chain. Running the Markov chain until it reaches its equilibrium distribution is called "burning in" the Markov chain. Unfortunately, there is no theory to predict how many steps the Markov chain must run before reaching its equilibrium distribution¹, nor any way to tell for sure that this event has happened. Also, even though successive samples come from the same distribution, they are highly correlated with each other, so to obtain multiple samples one should run the Markov chain for many steps between collecting each sample. Markov chains tend to get stuck in a single mode of π(x) for several steps. The speed with which a Markov chain moves from mode to mode is called its mixing rate. Since burning in a Markov chain and getting it to mix well may take several sampling steps, sampling correctly from an EBM is still a somewhat costly procedure.

TODO: mention Metropolis-Hastings.

Of course, all of this depends on ensuring π(x) = p(x). Fortunately, this is easy so long as p(x) is defined by an EBM.
¹ Although in principle the ratio of the two leading eigenvalues of the transition operator gives us some clue, and the largest eigenvalue is 1.
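A small NumPy sketch (illustrative only) of the eigenvector and fixed-point views above: for a discrete chain, repeatedly applying a column-stochastic transition matrix T to any starting distribution converges to the stationary distribution π, which is also the eigenvector of T with eigenvalue 1:

```python
import numpy as np

# Column-stochastic transition matrix: T[x_new, x_old] = T(x' | x); columns sum to 1.
T = np.array([[0.90, 0.20, 0.10],
              [0.05, 0.70, 0.30],
              [0.05, 0.10, 0.60]])

p = np.array([1.0, 0.0, 0.0])     # arbitrary initial distribution p_0
for _ in range(1000):             # p_t = T p_{t-1}
    p = T @ p

# Compare with the eigenvector of T for eigenvalue 1.
vals, vecs = np.linalg.eig(T)
pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
pi = pi / pi.sum()

print(p)    # converged distribution
print(pi)   # stationary distribution from the eigendecomposition (should match)
```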
Figure 14.1: Paths followed by Gibbs sampling for three distributions, with the Markov chain initialized at the mode in each case. Left) A multivariate normal distribution with two independent variables. Gibbs sampling mixes well because the variables are independent. Center) A multivariate normal distribution with highly correlated variables. The correlation between variables makes it difficult for the Markov chain to mix. Because each variable must be updated conditioned on the other, the correlation reduces the rate at which the Markov chain can move away from the starting point. Right) A mixture of Gaussians with widely separated modes that are not axis-aligned. Gibbs sampling mixes very slowly because it is difficult to change modes while altering only one variable at a time.
The simplest method is to use Gibbs sampling, in which sampling from T(x′ | x) is accomplished by selecting one variable x_i and sampling it from p conditioned on its neighbors in G. It is also possible to sample several variables at the same time so long as they are conditionally independent given all of their neighbors.

TODO: discussion of mixing; example with 2 binary variables that prefer to both have the same state; IG's graphic from lecture on adversarial nets.
TODO: refer to this figure in the text.
TODO: refer to this figure in the text.
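As a sketch of the kind of example mentioned in the note above (illustrative code, not from the text), consider an EBM over two binary variables that prefer to take the same state, with E(a, b) = −w·1[a = b]; Gibbs sampling alternates between resampling a given b and b given a, and only rarely switches between the two preferred modes (0, 0) and (1, 1):

```python
import numpy as np

rng = np.random.default_rng(0)
w = 2.0   # strength of the preference for a == b

def energy(a, b):
    return -w * float(a == b)          # low energy (high probability) when a == b

def gibbs_chain(n_steps=10000):
    a, b = 0, 1
    samples = []
    for _ in range(n_steps):
        # Resample a from p(a | b) ∝ exp(-E(a, b)), then b from p(b | a).
        p_a1 = np.exp(-energy(1, b)) / (np.exp(-energy(0, b)) + np.exp(-energy(1, b)))
        a = int(rng.random() < p_a1)
        p_b1 = np.exp(-energy(a, 1)) / (np.exp(-energy(a, 0)) + np.exp(-energy(a, 1)))
        b = int(rng.random() < p_b1)
        samples.append((a, b))
    return samples

samples = gibbs_chain()
# Most samples should have a == b; the chain changes modes only occasionally.
print(sum(a == b for a, b in samples) / len(samples))
```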
14.1.1 Markov Chain Theory
TODO: state Perron's theorem; define detailed balance.
14.1.2 Importance Sampling
TODO write this section
14.2 The Difficulty of Mixing Between Well-Separated Modes
Figure 14.2: An illustration of the slow mixing problem in deep probabilistic models. Each panel should be read left to right, top to bottom. Left) Consecutive samples from Gibbs sampling applied to a deep Boltzmann machine trained on the MNIST dataset. Consecutive samples are similar to each other. Because the Gibbs sampling is performed in a deep graphical model, this similarity is based more on semantic rather than raw visual features, but it is still difficult for the Gibbs chain to transition from one mode of the distribution to another, for example by changing the digit identity. Right) Consecutive ancestral samples from a generative adversarial network. Because ancestral sampling generates each sample independently from the others, there is no mixing problem.
Chapter 15
Linear Factor Models and Auto-Encoders

Linear factor models are generative unsupervised learning models in which we imagine that some unobserved factors h explain the observed variables x through a linear transformation. Auto-encoders are unsupervised learning methods that learn a representation of the data, typically obtained by a non-linear parametric transformation of the data, i.e., from x to h, typically with a feedforward neural network, but not necessarily. They also learn a transformation going backwards from the representation to the data, from h to x, like the linear factor models. Linear factor models therefore only specify a parametric decoder, whereas auto-encoders also specify a parametric encoder. Some linear factor models, like PCA, actually correspond to an auto-encoder (a linear one), but for others the encoder is implicitly defined via an inference mechanism that searches for an h that could have generated the observed x.

The idea of auto-encoders has been part of the historical landscape of neural networks for decades (LeCun, 1987; Bourlard and Kamp, 1988; Hinton and Zemel, 1994) but has really picked up speed in recent years. They remained somewhat marginal for many years, in part due to what was an incomplete understanding of the mathematical interpretation and geometrical underpinnings of auto-encoders, which are developed further in Chapters 17 and 20.11.

An auto-encoder is simply a neural network that tries to copy its input to its output. The architecture of an auto-encoder is typically decomposed into the following parts, illustrated in Figure 15.1:

• an input, x
• an encoder function f
• a "code" or internal representation h = f(x)
Figure 15.1: General schema of an auto-encoder, mapping an input x to an output (called reconstruction) r through an internal representation or code h. The auto-encoder has two components: the encoder f (mapping x to h) and the decoder g (mapping h to r).
• a decoder function g
• an output, also called "reconstruction," r = g(h) = g(f(x))
• a loss function L computing a scalar L(r, x) measuring how good of a reconstruction r is of the given input x. The objective is to minimize the expected value of L over the training set of examples {x}.
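The following NumPy sketch shows these parts for a tiny one-hidden-layer auto-encoder with squared reconstruction error (an illustrative toy with arbitrary sizes and randomly initialized weights, not an implementation from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n_input, n_code = 8, 3
W_enc = 0.1 * rng.standard_normal((n_code, n_input))
W_dec = 0.1 * rng.standard_normal((n_input, n_code))
b_enc = np.zeros(n_code)
b_dec = np.zeros(n_input)

def f(x):                       # encoder: x -> h
    return np.tanh(W_enc @ x + b_enc)

def g(h):                       # decoder: h -> r
    return W_dec @ h + b_dec

def L(r, x):                    # squared reconstruction error
    return np.sum((r - x) ** 2)

x = rng.standard_normal(n_input)
h = f(x)                        # code
r = g(h)                        # reconstruction
print(L(r, x))
```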
15.1 Regularized Auto-Encoders
Predicting the input may sound useless: what could prevent the auto-encoder from simply copying its input into its output? In the 20th century, this was achieved by constraining the architecture of the auto-encoder to avoid this, by forcing the dimension of the code h to be smaller than the dimension of the input x. Figure 15.2 illustrates the two typical cases of auto-encoders: undercomplete vs. overcomplete, i.e., with the dimension of the representation h respectively smaller vs. larger than the input x. Whereas early work with auto-encoders, just like PCA, uses the undercompleteness, i.e. a bottleneck in the sequence of layers, to avoid learning the identity function, more recent work allows overcomplete
Figure 15.2: Left: undercomplete representation (dimension of code h is less than dimension of input x). Right: overcomplete representation. Overcomplete auto-encoders require some other form of regularization (instead of the constraint on the dimension of h) to avoid the trivial solution where r = x for all x.
representations. What we have learned in recent years is that it is possible to make the auto-encoder meaningfully capture the structure of the input distribution even if the representation is overcomplete, with other forms of constraint or regularization. In fact, once you realize that auto-encoders can capture the input distribution (indirectly, not as an explicit probability function), you also realize that it should need more capacity as one increases the complexity of the distribution to be captured (and the amount of data available): it should not be limited by the input dimension. This is a problem in particular with the shallow auto-encoders, which have a single hidden layer (for the code). Indeed, that hidden layer size controls both the dimensionality reduction constraint (the code size at the bottleneck) and the capacity (which allows one to learn a more complex distribution).

Besides the bottleneck constraint, alternative constraints or regularization methods have been explored and can guarantee that the auto-encoder does something useful and not just learn some trivial identity-like function:

• Sparsity of the representation or of its derivative: even if the intermediate representation has a very high dimensionality, the effective local dimensionality (number of degrees of freedom that capture a coordinate
system among the probable x's) could be much smaller if most of the elements h_i of h are zero (or any other constant, such that ||∂h_i/∂x|| is close to zero). When ||∂h_i/∂x|| is close to zero, h_i does not participate in encoding local changes in x. There is a geometrical interpretation of this situation in terms of manifold learning that is discussed in more depth in Chapter 17. The discussion in Chapter 16 also explains how an auto-encoder naturally tends towards learning a coordinate system for the actual factors of variation in the data. At least four types of "auto-encoders" clearly fall in this category of sparse representation:

– Sparse coding (Olshausen and Field, 1996) has been heavily studied as an unsupervised feature learning and feature inference mechanism. It is a linear factor model rather than an auto-encoder, because it has no explicit parametric encoder, and instead uses an iterative inference to compute the code. Sparse coding looks for representations that are both sparse and explain the input through the decoder. Instead of the code being a parametric function of the input, it is instead considered a free variable that is obtained through an optimization, i.e., a particular form of inference:

h* = f(x) = arg min_h L(g(h), x) + λΩ(h)    (15.1)

where L is the reconstruction loss, f the (non-parametric) encoder, g the (parametric) decoder, Ω(h) is a sparsity regularizer, and in practice the minimization can be approximate (a small code sketch of this optimization appears after this list). Sparse coding has a manifold or geometric interpretation that is discussed in Section 15.8. It also has an interpretation as a directed graphical model, described in more details in Section 19.3. To achieve sparsity, the objective function to optimize includes a term that is minimized when the representation has many zero or near-zero values, such as the L1 penalty ||h||_1 = Σ_i |h_i|.
– An interesting variation of sparse coding combines the freedom to choose the representation through optimization and a parametric encoder. It is called predictive sparse decomposition (PSD) (Kavukcuoglu et al., 2008a) and is briefly described in Section 15.8.2.

– At the other end of the spectrum are simply sparse auto-encoders, which combine with the standard auto-encoder schema a sparsity penalty which encourages the output of the encoder to be sparse. These are described in Section 15.8.1. Besides the L1 penalty, other sparsity penalties that have been explored include the Student-t penalty (Olshausen and Field, 1996; Bergstra, 2011) (TODO: should the t be in math mode, perhaps?)
Σ_i log(1 + α²h_i²)

(i.e. where αh_i has a Student-t prior density) and the KL-divergence penalty (Lee et al., 2008; Goodfellow et al., 2009; Larochelle and Bengio, 2008a)

−Σ_i (t log h_i + (1 − t) log(1 − h_i)),
with a target sparsity level t, for h_i ∈ (0, 1), e.g. through a sigmoid non-linearity.
– Contractive autoencoders (Rifai et al., 2011b), covered in Section 15.10, explicitly penalize ||∂h/∂x||²_F, i.e., the sum of the squared norms of the vectors ∂h_i(x)/∂x (each indicating how much each hidden unit h_i responds to changes in x and what direction of change in x that unit is most sensitive to, around a particular x). With such a regularization penalty, the auto-encoder is called contractive¹ because the mapping from input x to representation h is encouraged to be contractive, i.e., to have small derivatives in all directions. Note that a sparsity regularization indirectly leads to a contractive mapping as well, when the non-linearity used happens to have a zero derivative at h_i = 0 (which is the case for the sigmoid non-linearity).

• Robustness to injected noise or missing information: if noise is injected in inputs or hidden units, or if some inputs are missing, while the neural network is asked to reconstruct the clean and complete input, then it cannot simply learn the identity function. It has to capture the structure of the data distribution in order to optimally perform this reconstruction. Such auto-encoders are called denoising auto-encoders and are discussed in more detail in Section 15.9.
15.2 Denoising Auto-encoders
There is a tight connection between the denoising auto-encoders and the contractive auto-encoders: it can be shown (Alain and Bengio, 2013) that in the limit of small Gaussian injected input noise, the denoising reconstruction error is equivalent to a contractive penalty on the reconstruction function that maps x to r = g(f(x)). In other words, since both x and
¹ A function f(x) is contractive if ||f(x) − f(y)|| < ||x − y|| for nearby x and y, or equivalently if its derivative ||f′(x)|| < 1.
x + ε (where ε is some small noise vector) must yield the same target output x, the reconstruction function is encouraged to be insensitive to changes in all directions ε. The only thing that prevents reconstruction r from simply being a constant (completely insensitive to the input x), is that one also has to reconstruct correctly for different training examples x. However, the auto-encoder can learn to be approximately constant around training examples x while producing a different answer for different training examples. As discussed in Section 17.4, if the examples are near a low-dimensional manifold, this encourages the representation to vary only on the manifold and be locally constant in directions orthogonal to the manifold, i.e., the representation locally captures a (not necessarily Euclidean, not necessarily orthogonal) coordinate system for the manifold. In addition to the denoising auto-encoder, the variational auto-encoder (Section 20.9.3) and the generative stochastic networks (Section 20.11) also involve the injection of noise, but typically in the representation-space itself, thus introducing the notion of h as a latent variable.

• Pressure of a Prior on the Representation: an interesting way to generalize the notion of regularization applied to the representation is to introduce in the cost function for the auto-encoder a log-prior term −log P(h) which captures the assumption that we would like to find a representation that has a simple distribution (if P(h) has a simple form, such as a factorized distribution²), or at least one that is simpler than the original data distribution. Among all the encoding functions f, we would like to pick one that

1. can be inverted (easily), and this is achieved by minimizing some reconstruction loss, and
2. yields representations h whose distribution is "simpler", i.e., can be captured with less capacity than the original training distribution itself.

The sparse variants described above clearly fall in that framework. The variational auto-encoder (Section 20.9.3) provides a clean mathematical framework for justifying the above pressure of a top-level prior when the objective is to model the data generating distribution. From the point of view of regularization (Chapter 7), adding the −log P(h) term to the objective function (e.g. for encouraging sparsity) or adding a contractive penalty do not fit the traditional view of a prior on the parameters. Instead,
all the sparse priors we have described correspond to a factorized distribution 409
CHAPTER 15. LINEAR FACTOR MODELS AND AUTO-ENCODERS
the prior on the latent variables acts like a data-dependent prior, in the sense that it depends on the particular values h that are going to be sampled (usually from a posterior or an encoder), based on the input example x. Of course, indirectly, this is also a regularization on the parameters, but one that depends on the particular data distribution.
15.3 Representational Power, Layer Size and Depth
Nothing in the above description of auto-encoders restricts the encoder or decoder to be shallow, but in the literature on the subject, most trained auto-encoders have had a single hidden layer which is also the representation layer or code (as argued in this book, this is probably not a good choice: we would like to independently control the constraints on the representation, e.g. the dimension and sparsity of the code, and the capacity of the encoder). For one, we know by the usual universal approximator abilities of single hidden-layer neural networks that a sufficiently large hidden layer can represent any function to a given accuracy. This observation justifies overcomplete auto-encoders: in order to represent a rich enough distribution, one probably needs many hidden units in the intermediate representation layer. We also know that Principal Components Analysis (PCA) corresponds to an undercomplete auto-encoder with no intermediate non-linearity, and that PCA can only capture a set of directions of variation that are the same everywhere in space. This notion is discussed in more detail in Chapter 17 in the context of manifold learning. For two, it has also been reported many times that training a deep neural network, and in particular a deep auto-encoder (i.e. with a deep encoder and a deep decoder), is more difficult than training a shallow one. This was actually a motivation for the initial work on the greedy layerwise unsupervised pre-training procedure, described below in Section 16.1, by which we only need to train a series of shallow auto-encoders in order to initialize a deep auto-encoder. It was shown early on (Hinton and Salakhutdinov, 2006) that, if trained properly, such deep auto-encoders could yield much better compression than corresponding shallow or linear auto-encoders (which are basically doing the same thing as PCA, see Section 15.6 below). As discussed in Section 16.7, deeper architectures can be in some cases exponentially more efficient (both computationally and statistically) than shallow ones. However, because we can usefully pre-train a deep net by training and stacking shallow ones, it remains interesting to consider single-layer (or at least shallow and easy to train) auto-encoders, as has been done in most of the literature discussed in this chapter.
15.4 Reconstruction Distribution
The above "parts" (encoder function f, decoder function g, reconstruction loss L) make sense when the loss L is simply the squared reconstruction error, but there are many cases where this is not appropriate, e.g., when x is a vector of discrete variables or when P(x | h) is not well approximated by a Gaussian distribution (see the link between squared error and normal density in Sections 5.8 and 6.3.2). Just like in the case of other types of neural networks (starting with the feedforward neural networks, Section 6.3.2), it is convenient to define the loss L as a negative log-likelihood over some target random variables. This probabilistic interpretation is particularly important for the discussion in Sections 20.9.3, 20.10 and 20.11 about generative extensions of auto-encoders and stochastic recurrent networks, where the output of the auto-encoder is interpreted as a probability distribution P(x | h) for reconstructing x, given hidden units h. This distribution captures not just the expected reconstruction but also the uncertainty about the original x (which gave rise to h, either deterministically or stochastically, given h). In the simplest and most ordinary cases, this distribution factorizes, i.e., P(x | h) = ∏_i P(x_i | h). This covers the usual cases of x_i | h being Gaussian (for unbounded real values) and x_i | h having a Bernoulli distribution (for binary values x_i), but one can readily generalize this to other distributions, such as mixtures (see Sections 3.10.5 and 6.3.2). Thus we can generalize the notion of decoding function g(h) to a decoding distribution P(x | h). Similarly, we can generalize the notion of encoding function f(x) to an encoding distribution Q(h | x), as illustrated in Figure 15.3. We use this to capture the fact that noise is injected at the level of the representation h, now considered like a latent variable. This generalization is crucial in the development of the variational auto-encoder (Section 20.9.3) and the generalized stochastic networks (Section 20.11). We also find a stochastic encoder and a stochastic decoder in the RBM, described in Section 20.2. In that case, the encoding distribution Q(h | x) and P(x | h) "match", in the sense that Q(h | x) = P(h | x), i.e., there is a unique joint distribution which has both Q(h | x) and P(x | h) as conditionals. This is not true in general for two independently parametrized conditionals like Q(h | x) and P(x | h), although the work on generative stochastic networks (Alain et al., 2015) shows that learning will tend to make them compatible asymptotically (with enough capacity and examples).
Figure 15.3: Basic scheme of a stochastic auto-encoder, in which both the encoder and the decoder are not simple functions but instead involve some noise injection, meaning that their output can be seen as sampled from a distribution, Q(h | x) for the encoder and P(x | h) for the decoder. RBMs are a special case where P = Q (in the sense of a unique joint distribution corresponding to both conditionals), but in general these two distributions are not necessarily conditional distributions compatible with a unique joint distribution P(x, h).
15.5 Linear Factor Models
Now that we have introduced the notion of a probabilistic decoder, let us focus on a very special case where the latent variable h generates x via a linear transformation plus noise, i.e., classical linear factor models, which do not necessarily have a corresponding parametric encoder. The idea of discovering explanatory factors that have a simple joint distribution among themselves is old, e.g., see Factor Analysis (see below), and was first explored in the context where the relationship between factors and data is linear, i.e., we assume that the data was generated as follows. First, sample the real-valued factors,

h ∼ P(h),     (15.2)

and then sample the real-valued observable variables given the factors:

x = W h + b + noise     (15.3)

where the noise is typically Gaussian and diagonal (independent across dimensions). This is illustrated in Figure 15.4.
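To make this generative process concrete, here is a minimal NumPy sketch (not from the text; the dimensions, parameter values and the standard Gaussian prior over h are illustrative assumptions) that samples from Eqs. 15.2 and 15.3 and checks the covariance induced on x:

import numpy as np

rng = np.random.default_rng(0)
n_factors, n_observed, n_samples = 3, 5, 10000

# Arbitrary parameters of the linear factor model x = W h + b + noise.
W = rng.normal(size=(n_observed, n_factors))
b = rng.normal(size=n_observed)
sigma = 0.1 * np.ones(n_observed)            # per-dimension noise std (diagonal noise)

h = rng.normal(size=(n_samples, n_factors))              # h ~ P(h) = N(0, I)
noise = rng.normal(size=(n_samples, n_observed)) * sigma
x = h @ W.T + b + noise                                  # x = W h + b + noise

# With a Gaussian prior and Gaussian noise, x is Gaussian with
# covariance W W^T + diag(sigma^2); check this empirically.
empirical_cov = np.cov(x, rowvar=False)
model_cov = W @ W.T + np.diag(sigma ** 2)
print(np.max(np.abs(empirical_cov - model_cov)))         # small for large n_samples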
Figure 15.4: Basic scheme of a linear factors model, in which we assume that an observed data vector x is obtained by a linear combination of latent factors h, plus some noise: h ∼ P(h), then x = W h + b + noise. Different models, such as probabilistic PCA, factor analysis or ICA, make different choices about the form of the noise and of the prior P(h).
15.6 Probabilistic PCA and Factor Analysis
Probabilistic PCA (Principal Components Analysis), factor analysis and other linear factor models are special cases of the above equations (15.2 and 15.3) and only differ in the choices made for the prior (over the latent variables, not the parameters) and the noise distribution. In factor analysis (Bartholomew, 1987; Basilevsky, 1994), the latent variable prior is just the unit variance Gaussian h ∼ N(0, I), while the observed variables x_i are assumed to be conditionally independent given h, i.e., the noise is assumed to come from a diagonal covariance Gaussian distribution, with covariance matrix ψ = diag(σ²), where σ² = (σ₁², σ₂², ...) is a vector of per-variable variances. The role of the latent variables is thus to capture the dependencies between the different observed variables x_i. Indeed, it can easily be shown that x is just a Gaussian-distributed (multivariate normal) random variable, with x ∼ N(b, W W^⊤ + ψ), where we see that the weights W induce a dependency between two variables x_i and x_j through a kind of auto-encoder path, whereby x_i influences ĥ_k = W_k x via w_{ki} (for every k) and ĥ_k influences x̂_j via w_{kj}.
In order to cast PCA in a probabilistic framework, we can make a slight modification to the factor analysis model, making the conditional variances σ_i equal to each other. In that case the covariance of x is just W W^⊤ + σ²I, where σ² is now a scalar, i.e., x ∼ N(b, W W^⊤ + σ²I), or equivalently x = W h + b + σz, where z ∼ N(0, I) is white noise. Tipping and Bishop (1999) then show an iterative EM algorithm for estimating the parameters W and σ². What the probabilistic PCA model is basically saying is that the covariance is mostly captured by the latent variables h, up to some small residual reconstruction error σ². As shown by Tipping and Bishop (1999), probabilistic PCA becomes PCA as σ → 0. In that case, the conditional expected value of h given x becomes an orthogonal projection onto the space spanned by the d columns of W, like in PCA. See Section 17.1 for a discussion of the "inference" mechanism associated with PCA (probabilistic or not), i.e., recovering the expected value of the latent factors h_i given the observed input x. That section also explains the very insightful geometric and manifold interpretation of PCA. However, as σ → 0, the density model becomes very sharp around these d dimensions spanned by the columns of W, as discussed in Section 17.1, which would not make it a very faithful model of the data in general (not just because the data may live on a higher-dimensional manifold, but more importantly because the real data manifold may not be a flat hyperplane - see Chapter 17 for more).
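As an illustration of this limit, the sketch below evaluates the standard probabilistic PCA posterior mean E[h | x] = (W^⊤W + σ²I)^{-1} W^⊤(x − b) (the usual expression following Tipping and Bishop, 1999) for decreasing σ² and compares it with the least-squares projection coefficients onto the columns of W; the particular matrices and values are arbitrary choices made only for the demonstration.

import numpy as np

rng = np.random.default_rng(1)
d, k = 6, 2                                   # observed and latent dimensions
W = rng.normal(size=(d, k))
b = rng.normal(size=d)
x = rng.normal(size=d)                        # an arbitrary observation

def ppca_posterior_mean(x, W, b, sigma2):
    # E[h | x] = (W^T W + sigma2 * I)^{-1} W^T (x - b)
    M = W.T @ W + sigma2 * np.eye(W.shape[1])
    return np.linalg.solve(M, W.T @ (x - b))

# As sigma2 -> 0, the posterior mean approaches the coefficients of the
# orthogonal (least-squares) projection of x - b onto the column space of W.
for sigma2 in [1.0, 1e-2, 1e-6]:
    print(sigma2, ppca_posterior_mean(x, W, b, sigma2))
print("lstsq", np.linalg.lstsq(W, x - b, rcond=None)[0])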
15.6.1 ICA
TODO: do we really want to put every linear factor model in the auto-encoder chapter? if latent variable models are auto-encoders, what deep probabilistic model would not be an auto-encoder?
Independent Component Analysis (ICA) is among the oldest representation learning algorithms (Herault and Ans, 1984; Jutten and Herault, 1991; Comon, 1994; Hyvärinen, 1999; Hyvärinen et al., 2001). It is an approach to modeling linear factors that seeks non-Gaussian projections of the data. Like probabilistic PCA and factor analysis, it also fits the linear factor model of Eqs. 15.2 and 15.3. What is particular about ICA is that unlike PCA and factor analysis it does not assume that the latent variable prior is Gaussian. It only assumes that it is factorized, i.e.,

P(h) = ∏_i P(h_i).     (15.4)
Since there is no parametric assumption behind the prior, we are really in front of a so-called semi-parametric model, with parts of the model being parametric (P(x | h)) and parts being non-specified or non-parametric (P(h)). In fact, this typically yields non-Gaussian priors: if the priors were Gaussian, then one could not distinguish between the factors h and a rotation of h. Indeed, note that if h = Uz with U an orthonormal (rotation) square matrix, i.e., z = U^⊤ h, then, although h might have a N(0, I) distribution, the z also have unit covariance, i.e., they are uncorrelated: Var[z] = E[z z^⊤] = E[U^⊤ h h^⊤ U] = U^⊤ Var[h] U = U^⊤ U = I. In other words, imposing independence among Gaussian factors does not allow one to disentangle them, and we could just as well recover any linear rotation of these factors. It means that, given the observed x, even though we might assume the right generative model, PCA cannot recover the original generative factors. However, if we assume that the latent variables are non-Gaussian, then we can recover them, and this is what ICA is trying to achieve. In fact, under these generative model assumptions, the true underlying factors can be recovered (Comon, 1994). Many ICA algorithms look for projections of the data s = V x such that they are maximally non-Gaussian. An intuitive explanation for these approaches is that although the true latent variables h may be non-Gaussian, almost any linear combination of them will look more Gaussian, because of the central limit theorem. Since linear combinations of the x_i's are also linear combinations of the h_j's, to recover the h_j's we just need to find the linear combinations that are maximally non-Gaussian (while keeping these different projections orthogonal to each other).
There is an interesting connection between ICA and sparsity, since the dominant form of non-Gaussianity in real data is due to sparsity, i.e., concentration of probability at or near 0. Non-Gaussian distributions typically have more mass around zero, although one can also get non-Gaussianity by increasing skewness (asymmetry) or kurtosis. Just as PCA can be generalized to the non-linear auto-encoders described later in this chapter, ICA can be generalized to a non-linear generative model, e.g., x = f(h) + noise. See Hyvärinen and Pajunen (1999) for the initial work on non-linear ICA and its successful use with ensemble learning by Roberts and Everson (2001); Lappalainen et al. (2000).
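This rotation argument can be checked numerically. In the sketch below (an illustration only; the 45-degree rotation and the Laplace choice for the sparse factors are assumptions made for the demo), rotating Gaussian factors leaves their marginal excess kurtosis at zero, while rotating independent Laplace factors mixes them and pushes their excess kurtosis towards the Gaussian value of zero, which is exactly the effect that ICA methods exploit in reverse.

import numpy as np

rng = np.random.default_rng(2)
n = 200000

def excess_kurtosis(a):
    # Columnwise excess kurtosis; 0 for a Gaussian.
    a = (a - a.mean(0)) / a.std(0)
    return (a ** 4).mean(0) - 3.0

# A 45-degree rotation, which mixes the two factors maximally.
theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

h_gauss = rng.normal(size=(n, 2))             # Gaussian factors
h_laplace = rng.laplace(size=(n, 2))          # sparse, non-Gaussian factors

for name, h in [("gaussian", h_gauss), ("laplace", h_laplace)]:
    z = h @ U.T                               # rotated factors z = U h
    print(name,
          "kurtosis before:", np.round(excess_kurtosis(h), 2),
          "after rotation:", np.round(excess_kurtosis(z), 2))
# The Gaussian case is unchanged by the rotation (so the factors cannot be
# identified), while the Laplace marginals lose kurtosis, i.e., look more
# Gaussian after mixing.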
15.6.2 Sparse Coding as a Generative Model
One particularly interesting form of non-Gaussianity arises with distributions that are sparse. These typically have not just a peak at 0 but also a fat tail (a tail whose probability goes to 0, as values increase in magnitude, at a rate slower than the Gaussian, i.e., less than quadratic in the log-domain). Like the other linear factor models (Eq. 15.3), sparse coding corresponds to a linear factor model, but one with a "sparse" latent variable h, i.e., P(h) puts high probability at or around 0. Unlike with ICA (previous section), the latent variable prior is parametric. For example the factorized Laplace density prior is

P(h) = ∏_i P(h_i) = ∏_i (λ/2) exp(−λ|h_i|)     (15.5)

and the factorized Student-t prior is

P(h) = ∏_i P(h_i) ∝ ∏_i 1 / (1 + h_i²/ν)^((ν+1)/2).     (15.6)

Both of these densities have a strong preference for near-zero values but, unlike the Gaussian, accommodate large values. In the standard sparse coding models, the reconstruction noise is assumed to be Gaussian, so that the corresponding reconstruction error is the squared error. Regarding sparsity, note that the actual value h_i = 0 has zero measure under both densities, meaning that the posterior distribution P(h | x) will not generate values h = 0. However, sparse coding is normally considered under a maximum a posteriori (MAP) inference framework, in which the inferred values of h are those that maximize the posterior, and these tend to often be zero if the prior is sufficiently concentrated around 0. The inferred values are those defined in Eq. 15.1, reproduced here:

h = f(x) = argmin_h L(g(h), x) + λΩ(h)

where L(g(h), x) is interpreted as −log P(x | g(h)) and Ω(h) as −log P(h). This MAP inference view of sparse coding and an interesting probabilistic interpretation of sparse coding are further discussed in Section 19.3.
To relate the generative model of sparse coding to ICA, note how the prior imposes not just sparsity but also independence of the latent variables h_i under P(h), which may help to separate different explanatory factors, unlike PCA, factor analysis or probabilistic PCA, because these rely on a Gaussian prior, which yields a factorized prior under any rotation of the factors (multiplication by an orthonormal matrix), as demonstrated in Section 15.6.1.
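To make this MAP inference concrete for the Laplace prior (squared reconstruction error plus an L1 penalty), here is a small NumPy sketch using the iterative shrinkage-thresholding algorithm (ISTA), one standard solver for this problem; the linear decoder g(h) = W h (no bias), the dictionary, the penalty weight and the sparsity level are arbitrary illustrative choices, not values from the text.

import numpy as np

def ista_sparse_code(x, W, lam, n_steps=200):
    """MAP inference: h = argmin_h ||x - W h||^2 + lam * ||h||_1, solved by ISTA."""
    # Lipschitz constant of the gradient of the smooth term ||x - W h||^2.
    L = 2.0 * np.linalg.eigvalsh(W.T @ W).max()
    h = np.zeros(W.shape[1])
    for _ in range(n_steps):
        grad = 2.0 * W.T @ (W @ h - x)                          # gradient of the smooth term
        u = h - grad / L                                        # gradient step
        h = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft-thresholding (prox of L1)
    return h

rng = np.random.default_rng(3)
W = rng.normal(size=(20, 50))                     # overcomplete dictionary
h_true = np.zeros(50)
h_true[rng.choice(50, size=3, replace=False)] = 3.0 * rng.normal(size=3)
x = W @ h_true + 0.01 * rng.normal(size=20)

h_map = ista_sparse_code(x, W, lam=0.5)
print("non-zeros in MAP code:", np.count_nonzero(np.abs(h_map) > 1e-8))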
See Section 17.2 about the manifold interpretation of sparse coding. TODO: relate to and point to Spike-and-slab sparse coding (Goodfellow et al., 2012) (section?)
15.7 Reconstruction Error as Log-Likelihood
Although traditional auto-encoders (like traditional neural networks) were introduced with an associated training loss, that training loss can generally be given a probabilistic interpretation as a conditional log-likelihood of the original input x, given the representation h, just as for feedforward neural networks. We have already covered negative log-likelihood as a loss function for feedforward neural networks in Section 6.3.2. Like prediction error for regular feedforward neural networks, reconstruction error for auto-encoders does not have to be squared error. When we view the loss as a negative log-likelihood, we interpret the reconstruction error as L = −log P(x | h), where h is the representation, which may generally be obtained through an encoder taking x as input.
Figure 15.5: The computational graph of an auto-encoder, which is trained to maximize the probability assigned by the decoder g to the data point x, given the output of the encoder h = f(x). The training objective is thus L = −log P(x | g(f(x))), which ends up being squared reconstruction error if we choose a Gaussian reconstruction distribution with mean g(f(x)), and cross-entropy if we choose a factorized Bernoulli reconstruction distribution with means g(f(x)).
An advantage of this view is that it immediately tells us what kind of loss function one should use depending on the nature of the input. If the input is real-valued and unbounded, then squared error is a reasonable choice of reconstruction error, and corresponds to P(x | h) being Normal. If the input is a vector of bits, then cross-entropy is a more reasonable choice, and corresponds to P(x | h) = ∏_i P(x_i | h) with x_i | h being Bernoulli-distributed. We then view the decoder g(h) as computing the parameters of the reconstruction distribution, i.e., P(x | h) = P(x | g(h)). Another advantage of this view is that we can think about the training of the decoder as estimating the conditional distribution P(x | h), which comes in handy in the probabilistic interpretation of denoising auto-encoders, allowing us to talk about the distribution P(x) explicitly or implicitly represented by the auto-encoder (see Sections 15.9, 20.9.3 and 20.10 for more details). In the same spirit, we can rethink the notion of encoder from a simple function to a conditional distribution Q(h | x), with a special case being when Q(h | x) is a Dirac at some particular value. Equivalently, thinking about the encoder as a distribution corresponds to injecting noise inside the auto-encoder. This view is developed further in Sections 20.9.3 and 20.11.
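For concreteness, the following sketch computes the two reconstruction negative log-likelihoods just mentioned: a Gaussian one (squared error up to scale and an additive constant) and a factorized Bernoulli one (cross-entropy). The toy vectors stand in for x and for decoder outputs g(f(x)) and are purely illustrative.

import numpy as np

def gaussian_nll(x, x_hat, sigma2=1.0):
    # -log N(x; x_hat, sigma2 * I): squared error up to scale and a constant.
    d = x.shape[-1]
    return 0.5 * np.sum((x - x_hat) ** 2) / sigma2 + 0.5 * d * np.log(2 * np.pi * sigma2)

def bernoulli_nll(x, p):
    # -log prod_i P(x_i | h) with x_i ~ Bernoulli(p_i): the cross-entropy loss.
    eps = 1e-12
    return -np.sum(x * np.log(p + eps) + (1 - x) * np.log(1 - p + eps))

x_real = np.array([0.5, -1.2, 3.0])
x_real_hat = np.array([0.4, -1.0, 2.5])        # decoder mean g(f(x)) for real-valued x
print("Gaussian NLL:", gaussian_nll(x_real, x_real_hat))

x_bits = np.array([1.0, 0.0, 1.0, 1.0])
p_bits = np.array([0.9, 0.2, 0.7, 0.6])        # decoder Bernoulli means g(f(x)) for binary x
print("Bernoulli NLL (cross-entropy):", bernoulli_nll(x_bits, p_bits))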
15.8 Sparse Representations
Sparse auto-encoders are auto-encoders which learn a sparse representation, i.e., one whose elements are often either zero or close to zero. Sparse coding was introduced in Section 15.6.2 as a linear factor model in which the prior P(h) on the representation h = f(x) encourages values at or near 0. In Section 15.8.1, we see how ordinary auto-encoders can be prevented from learning a useless identity transformation by using a sparsity penalty rather than a bottleneck. The main difference between a sparse auto-encoder and sparse coding is that sparse coding has no explicit parametric encoder, whereas sparse auto-encoders have one. The "encoder" of sparse coding is the algorithm that performs the approximate inference, i.e., looks for

h*(x) = argmax_h log P(h | x) = argmin_h ||x − (b + W h)||²/σ² − log P(h)     (15.7)
where σ² is a reconstruction variance parameter (which should equal the average squared reconstruction error, but can be lumped into the regularizer λ which controls the strength of the sparsity prior, defined in Eq. 15.8, for example), and P(h) is a "sparse" prior that puts more probability mass around h = 0, such as the Laplacian prior, with factorized marginals

P(h_i) = (λ/2) exp(−λ|h_i|)     (15.8)

or the Student-t prior, with factorized marginals

P(h_i) ∝ 1 / (1 + h_i²/ν)^((ν+1)/2).     (15.9)
The advantages of such a non-parametric encoder and the sparse coding approach over sparse auto-encoders are that
1. it can in principle minimize the combination of reconstruction error and log-prior better than any parametric encoder, and
2. it performs what is called explaining away (see Figure 13.8), i.e., it allows the model to "choose" some "explanations" (hidden factors) and inhibit the others.
The disadvantages are that
1. computing time for encoding a given input x, i.e., performing inference (computing the representation h that goes with the given x), can be substantially larger than with a parametric encoder (because an optimization must be performed for each example x), and
2. the resulting encoder function could be non-smooth and possibly too non-linear (with two nearby x's being associated with very different h's), potentially making it more difficult for the downstream layers to generalize properly.
In Section 15.8.2, we describe PSD (Predictive Sparse Decomposition), which combines a non-parametric encoder (as in sparse coding, with the representation obtained via an optimization) and a parametric encoder (like in the sparse auto-encoder). Section 15.9 introduces the Denoising Auto-Encoder (DAE), which puts pressure on the representation by requiring it to extract information about the underlying distribution and where it concentrates, so as to be able to denoise a corrupted input. Section 15.10 describes the Contractive Auto-Encoder (CAE), which optimizes an explicit regularization penalty that aims at making the representation as insensitive as possible to the input, while keeping the information sufficient to reconstruct the training examples.
15.8.1 Sparse Auto-Encoders
A sparse auto-encoder is simply an auto-encoder whose training criterion involves a sparsity penalty Ω(h) in addition to the reconstruction error:

L = −log P(x | g(h)) + Ω(h)     (15.10)
where g(h) is the decoder output and typically we have h = f(x), the encoder output. We can think of the penalty Ω(h) simply as a regularizer or as a log-prior on the representations h. For example, the sparsity penalty corresponding to the Laplace prior ((λ/2) exp(−λ|h_i|)) is the absolute value sparsity penalty (see also Eq. 15.8 above):

Ω(h) = λ Σ_i |h_i|

−log P(h) = Σ_i ( λ|h_i| − log(λ/2) ) = const + Ω(h)     (15.11)
where the constant term depends only on λ and not on h (we typically ignore it in the training criterion because we consider λ as a hyperparameter rather than a parameter). Similarly (as per Eq. 15.9), the sparsity penalty corresponding to the Student-t prior (Olshausen and Field, 1997) is

Ω(h) = Σ_i ((ν+1)/2) log(1 + h_i²/ν)     (15.12)
where ν is considered to be a hyperparameter.
The early work on sparse auto-encoders (Ranzato et al., 2007a, 2008) considered various forms of sparsity and proposed a connection between sparsity regularization and the partition function gradient in energy-based models (see Section TODO). The idea is that a regularizer such as sparsity makes it difficult for an auto-encoder to achieve zero reconstruction error everywhere. If we consider reconstruction error as a proxy for energy (unnormalized log-probability of the data), then minimizing the training set reconstruction error forces the energy to be low on training examples, while the regularizer prevents it from being low everywhere. The same role is played by the gradient of the partition function in energy-based models such as the RBM (Section TODO).
However, the sparsity penalty of sparse auto-encoders does not need to have a probabilistic interpretation. For example, Goodfellow et al. (2009) successfully used the following sparsity penalty, which does not try to bring h_i all the way down to 0, but only towards some low target value such as ρ = 0.05:

Ω(h) = −Σ_i [ ρ log h_i + (1 − ρ) log(1 − h_i) ]     (15.13)
where 0 < h_i < 1, usually with h_i = sigmoid(a_i). This is just the cross-entropy between the Bernoulli distribution with probability p = h_i and the target Bernoulli distribution with probability p = ρ.
One way to achieve actual zeros in h for sparse (and denoising) auto-encoders was introduced in Glorot et al. (2011c). The idea is to use a half-rectifier (a.k.a. simply "rectifier") or ReLU (Rectified Linear Unit, introduced in Glorot et al. (2011b) for deep supervised networks and earlier in Nair and Hinton (2010a) in the context of RBMs) as the output non-linearity of the encoder. With a prior that actually pushes the representations to zero (like the absolute value penalty), one can thus indirectly control the average number of zeros in the representation. ReLUs were first successfully used for deep feedforward networks in Glorot et al. (2011a), achieving for the first time the ability to train fairly deep supervised networks without the need for unsupervised pre-training, and this turned out to be an important component in the 2012 object recognition breakthrough with deep convolutional networks (Krizhevsky et al., 2012b).
Interestingly, the "regularizer" used in sparse auto-encoders does not conform to the classical interpretation of regularizers as priors on the parameters. That classical interpretation of the regularizer comes from the MAP (Maximum A Posteriori) point estimation (see Section 5.7.1) of parameters, associated with the Bayesian view of parameters as random variables and considering the joint distribution of data x and parameters θ (see Section 5.9):

argmax_θ P(θ | x) = argmax_θ ( log P(x | θ) + log P(θ) )
where the first term on the right is the usual data log-likelihood term and the second term, the log-prior over parameters, incorporates the preference over particular values of θ. With regularized auto-encoders such as sparse auto-encoders and contractive auto-encoders, instead, the regularizer corresponds to a log-prior over the representation, or over latent variables. In the case of sparse auto-encoders, predictive sparse decomposition and contractive auto-encoders, the regularizer specifies a preference over functions of the data, rather than over parameters. This makes such a regularizer data-dependent, unlike the classical parameter log-prior. Specifically, in the case of the sparse auto-encoder, it says that we prefer an encoder whose output produces values closer to 0. Indirectly (when we marginalize over the training distribution), this is also indicating a preference over parameters, of course.
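As a minimal sketch of Eq. 15.10, assuming a one-hidden-layer sigmoid encoder, a linear decoder, squared reconstruction error and the absolute value penalty of Eq. 15.11 (these architectural and hyperparameter choices are illustrative assumptions, not prescriptions from the text), one gradient step looks as follows.

import numpy as np

rng = np.random.default_rng(4)
n_in, n_hidden, lam, lr = 8, 16, 0.1, 0.01

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b2 = np.zeros(n_in)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sparse_ae_step(x):
    """One SGD step on ||x - g(f(x))||^2 + lam * sum_i |h_i|."""
    global W1, b1, W2, b2
    # Forward pass.
    a = W1 @ x + b1
    h = sigmoid(a)                      # encoder output f(x)
    x_hat = W2 @ h + b2                 # linear decoder g(h)
    loss = np.sum((x - x_hat) ** 2) + lam * np.sum(np.abs(h))
    # Backward pass (manual gradients for this small network).
    d_xhat = 2.0 * (x_hat - x)
    dW2 = np.outer(d_xhat, h); db2 = d_xhat
    d_h = W2.T @ d_xhat + lam * np.sign(h)
    d_a = d_h * h * (1.0 - h)           # sigmoid derivative
    dW1 = np.outer(d_a, x); db1 = d_a
    # Parameter update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return loss

x = rng.normal(size=n_in)
for t in range(5):
    print("loss:", round(sparse_ae_step(x), 4))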
15.8.2 Predictive Sparse Decomposition
TODO: we have too many forward refs to this section. There are 150 lines about PSD in this section and at least 20 lines of forward references to this section
in this chapter, some of which are just 100 lines away.
Predictive sparse decomposition (PSD) is a variant that combines sparse coding and a parametric encoder (Kavukcuoglu et al., 2008b), i.e., it has both a parametric encoder and iterative inference. It has been applied to unsupervised feature learning for object recognition in images and video (Kavukcuoglu et al., 2009, 2010b; Jarrett et al., 2009a; Farabet et al., 2011), as well as for audio (Henaff et al., 2011). The representation is considered to be a free variable (possibly a latent variable if we choose a probabilistic interpretation) and the training criterion combines a sparse coding criterion with a term that encourages the optimized sparse representation h (after inference) to be close to the output of the encoder f(x):

L = min_h [ ||x − g(h)||² + λ||h||₁ + γ||h − f(x)||² ]     (15.14)
where f is the encoder and g is the decoder. Like in sparse coding, for each example x an iterative optimization is performed in order to obtain a representation h. However, because the iterations can be initialized from the output of the encoder, i.e., with h = f(x), only a few steps (e.g. 10) are necessary to obtain good results. Simple gradient descent on h has been used by the authors. After h is settled, both g and f are updated towards minimizing the above criterion. The first two terms are the same as in L1 sparse coding, while the third one encourages f to predict the outcome of the sparse coding optimization, making it a better choice for the initialization of the iterative optimization. Hence f can be used as a parametric approximation to the non-parametric encoder implicitly defined by sparse coding. It is one of the first instances of learned approximate inference (see also Sec. 19.6). Note that this is different from separately doing sparse coding (i.e., training g) and then training an approximate inference mechanism f, since both the encoder and decoder are trained together to be "compatible" with each other. Hence the decoder will be learned in such a way that inference will tend to find solutions that can be well approximated by the approximate inference. TODO: this is probably too much forward reference, when we bring these things in we can remind people that they resemble PSD, but it doesn't really help the reader to say that the thing we are describing now is similar to things they haven't seen yet
A similar example is the variational auto-encoder, in which the encoder acts as approximate inference for the decoder, and both are trained jointly (Section 20.9.3). See also Section 20.9.4 for a probabilistic interpretation of PSD in terms of a variational lower bound on the log-likelihood.
In practical applications of PSD, the iterative optimization is only used during training, and f is used to compute the learned features. This makes computation fast at recognition time and also makes it easy to use the trained features f as initialization (unsupervised pre-training) for the lower layers of a deep net. Like other unsupervised feature learning schemes, PSD can be stacked greedily, e.g.,
training a second PSD on top of the features extracted by the first one, etc.
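A minimal sketch of the PSD criterion and its warm-started inference is given below, assuming a tanh parametric encoder and a linear decoder (both assumptions made only for the illustration); the parameter updates that would follow in training are indicated in a comment.

import numpy as np

rng = np.random.default_rng(5)
n_in, n_code, lam, gamma = 10, 25, 0.1, 1.0

We = rng.normal(scale=0.1, size=(n_code, n_in))   # parametric encoder f(x) = tanh(We x)
Wd = rng.normal(scale=0.1, size=(n_in, n_code))   # linear decoder g(h) = Wd h

def f(x):
    return np.tanh(We @ x)

def psd_infer(x, n_steps=10, step=0.05):
    """Optimize h for the PSD criterion, initialized at the encoder output."""
    h = f(x)                                       # warm start from the encoder
    for _ in range(n_steps):
        grad = (2.0 * Wd.T @ (Wd @ h - x)          # reconstruction term
                + lam * np.sign(h)                 # L1 sparsity term (subgradient)
                + 2.0 * gamma * (h - f(x)))        # stay close to the encoder output
        h = h - step * grad
    return h

x = rng.normal(size=n_in)
h = psd_infer(x)
loss = (np.sum((x - Wd @ h) ** 2) + lam * np.sum(np.abs(h))
        + gamma * np.sum((h - f(x)) ** 2))
print("PSD criterion after inference:", round(loss, 4))
# In training, We and Wd would then be updated by gradient descent on the same
# criterion with h held fixed, as described above.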
15.9 Denoising Auto-Encoders
The Denoising Auto-Encoder (DAE) was first proposed (Vincent et al., 2008, 2010) as a means of forcing an auto-encoder to learn to capture the data distribution without an explicit constraint on either the dimension or the sparsity of the learned representation. It was motivated by the idea that in order to fully capture a complex distribution, an auto-encoder needs to have at least as many hidden units as needed by the complexity of that distribution. Hence its dimensionality should not be restricted to the input dimension. The principle of the denoising auto-encoder is deceptively simple and illustrated in Figure 15.6: the encoder sees as input a corrupted version of the input, but the decoder tries to reconstruct the clean uncorrupted input.
Figure 15.6: The computational graph of a denoising auto-encoder, which is trained to reconstruct the clean data point x from its corrupted version x̃, i.e., to minimize the loss L = −log P(x | g(f(x̃))), where x̃ is a corrupted version of the data example x, obtained through a given corruption process C(x̃ | x).
Mathematically, and following the notations used in this chapter, this can be formalized as follows. We introduce a corruption process C(x̃ | x) which represents a conditional distribution over corrupted samples x̃, given a data sample x. The auto-encoder then learns a reconstruction distribution P(x | x̃), estimated from training pairs (x, x̃) as follows:
1. Sample a training example x from the data generating distribution (the training set).
2. Sample a corrupted version x̃ from the conditional distribution C(x̃ | x).
3. Use (x, x̃) as a training example for estimating the auto-encoder reconstruction distribution P(x | x̃) = P(x | g(h)), with h the output of the encoder f(x̃) and g(h) the output of the decoder.
Typically we can simply perform gradient-based approximate minimization (such as minibatch gradient descent) on the negative log-likelihood −log P(x | h), i.e., the denoising reconstruction error, using back-propagation to compute gradients, just like for regular feedforward neural networks (the only difference being the corruption of the input and the choice of target output).
We can view this training objective as performing stochastic gradient descent on the denoising reconstruction error, but where the "noise" now has two sources:
1. the choice of training sample x from the data set, and
2. the random corruption applied to x to obtain x̃.
We can therefore consider that the DAE is performing stochastic gradient descent on the following expectation:

−E_{x∼Q(x)} E_{x̃∼C(x̃|x)} log P(x | g(f(x̃)))

where Q(x) is the training distribution.
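A minimal sketch of one denoising training step follows, assuming Gaussian corruption, a one-hidden-layer sigmoid encoder, a linear decoder and squared error against the clean input; these are illustrative choices consistent with, but not prescribed by, the discussion above.

import numpy as np

rng = np.random.default_rng(6)
n_in, n_hidden, sigma, lr = 8, 32, 0.3, 0.01

W1 = rng.normal(scale=0.1, size=(n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_in, n_hidden)); b2 = np.zeros(n_in)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def dae_step(x):
    """One step of denoising training: corrupt x, reconstruct the clean x."""
    global W1, b1, W2, b2
    x_tilde = x + sigma * rng.normal(size=x.shape)   # sample from C(x_tilde | x)
    h = sigmoid(W1 @ x_tilde + b1)                   # encode the corrupted input
    x_hat = W2 @ h + b2                              # decode
    loss = np.sum((x - x_hat) ** 2)                  # error against the clean x
    # Manual gradients for this small network.
    d_xhat = 2.0 * (x_hat - x)
    dW2 = np.outer(d_xhat, h); db2 = d_xhat
    d_a = (W2.T @ d_xhat) * h * (1.0 - h)
    dW1 = np.outer(d_a, x_tilde); db1 = d_a
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2
    return loss

x = rng.normal(size=n_in)
for t in range(5):
    print("denoising loss:", round(dae_step(x), 4))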
Figure 15.7: A denoising auto-encoder is trained to reconstruct the clean data point x from its corrupted version x̃, with g(f(x̃)) ≈ E[x | x̃]. In the figure, the corruption process C(x̃ | x) is illustrated by a grey circle of equiprobable corruptions (and a grey arrow for the corruption process) acting on examples x (red crosses) lying near a low-dimensional manifold near which probability concentrates. When the denoising auto-encoder is trained to minimize the average of squared errors ||g(f(x̃)) − x||², the reconstruction g(f(x̃)) estimates E[x | x̃], which approximately points orthogonally towards the manifold, since it estimates the center of mass of the clean points x which could have given rise to x̃. The auto-encoder thus learns a vector field g(f(x)) − x (the green arrows), and it turns out that this vector field estimates the gradient field ∂ log Q(x)/∂x (up to a multiplicative factor that is the average root mean square reconstruction error), where Q is the unknown data generating distribution.
15.9.1 Learning a Vector Field that Estimates a Gradient Field
As illustrated in Figure 15.7, a very important property of DAEs is that their training criterion makes the auto-encoder learn a vector field (g(f(x)) − x) that estimates the gradient field (or score) ∂ log Q(x)/∂x, as per Eq. 15.15. A first result in this direction was proven by Vincent (2011a), showing that minimizing squared reconstruction error in a denoising auto-encoder with Gaussian noise was related to score matching (Hyvärinen, 2005a), making the denoising criterion a regularized form of score matching called denoising score matching (Kingma and LeCun, 2010a). Score matching is an alternative to maximum likelihood and provides a consistent estimator. It is discussed further in Section 18.4. The denoising version
is discussed in Section 18.5. The connection between denoising auto-encoders and score matching was first made (Vincent, 2011a) in the case where the denoising auto-encoder has a particular parametrization (one hidden layer, sigmoid activation functions on hidden units, linear reconstruction), in which case the denoising criterion actually corresponds to a regularized form of score matching on a Gaussian RBM (with binomial hidden units and Gaussian visible units). The connection between ordinary auto-encoders and Gaussian RBMs had previously been made by Bengio and Delalleau (2009), which showed that contrastive divergence training of RBMs was related to an associated auto-encoder gradient, and later by Swersky (2010), which showed that non-denoising reconstruction error corresponded to score matching plus a regularizer. The fact that the denoising criterion yields an estimator of the score for general encoder/decoder parametrizations has been proven (Alain and Bengio, 2012, 2013) in the case where the corruption and the reconstruction distributions are Gaussian (and of course x is continuous-valued), i.e., with the squared denoising error ||g(f(x̃)) − x||² and corruption C(x̃ | x) = N(x̃; µ = x, Σ = σ²I) with noise variance σ².
Figure 15.8: Vector field learned by a denoising auto-encoder around a 1-D curved manifold near which the data (orange circles) concentrates in a 2-D space. Each arrow is proportional to the reconstruction minus input vector of the auto-encoder and points towards higher probability according to the implicitly estimated probability distribution. Note that the vector field has zeros at both peaks of the estimated density function (on the data manifolds) and at troughs (local minima) of that density function, e.g., on the curve that separates different arms of the spiral or in the middle of it.
More precisely, the main theorem states that (g(f(x)) − x)/σ² is a consistent estimator of ∂ log Q(x)/∂x, where Q(x) is the data generating distribution:

(g(f(x)) − x)/σ² → ∂ log Q(x)/∂x,     (15.15)

so long as f and g have sufficient capacity to represent the true score (and assuming that the expected training criterion can be minimized, as usual when proving consistency associated with a training objective).
Note that in general, there is no guarantee that the reconstruction g(f(x)) minus the input x corresponds to the gradient of something (the estimated score should be the gradient of the estimated log-density with respect to the input
x). That is why the early results (Vincent, 2011a) are specialized to particular parametrizations where g(f (x)) − x is the derivative of something. See a more general treatment by Kamyshanska and Memisevic (2015). Although it was intuitively appealing that in order to denoise correctly one must capture the training distribution, the above consistency result makes it mathematically very clear in what sense the DAE is capturing the input distribution: it is estimating the gradient of its energy function (i.e., of its log-density), i.e., learning to point towards more probable (lower energy) configurations. Figure 15.8 (see details of experiment in Alain and Bengio (2013)) illustrates this. Note how the norm of reconstruction error (i.e. the norm of the vectors shown in the figure) is related to but different from the energy (unnormalized log-density) associated with the estimated model. The energy should be low only where the probability is high. The reconstruction error (norm of the estimated score vector) is low where probability is near a peak of probability (or a trough of energy), but it can also be low at maxima of energy (minima of probability). Section 20.10 continues the discussion of the relationship between denoising auto-encoders and probabilistic modeling by showing how one can generate from the distribution implicitly estimated by a denoising auto-encoder. Whereas (Alain and Bengio, 2013) generalized the score estimation result of Vincent (2011a) to arbitrary parametrizations, the result from Bengio et al. (2013b), discussed in Section 20.10, provides a probabilistic – and in fact generative – interpretation to every denoising auto-encoder.
15.10 Contractive Auto-Encoders
The Contractive Auto-Encoder or CAE (Rifai et al., 2011a,c) introduces an explicit regularizer on the code h = f(x), encouraging the derivatives of f to be as small as possible:

Ω(h) = ||∂f(x)/∂x||²_F     (15.16)

which is the squared Frobenius norm (sum of squared elements) of the Jacobian matrix of partial derivatives associated with the encoder function. Whereas the denoising auto-encoder learns to contract the reconstruction function (the composition of the encoder and decoder), the CAE learns to specifically contract the encoder. See Figure 17.13 for a view of how contraction near the data points makes the auto-encoder capture the manifold structure.
If it weren't for the opposing force of reconstruction error, which attempts to make the code h keep all the information necessary to reconstruct training examples, the CAE penalty would yield a code h that is constant and does not
depend on the input x. The compromise between these two forces yields an auto-encoder whose derivatives ∂f(x)/∂x are tiny in most directions, except those that are needed to reconstruct training examples, i.e., the directions that are tangent to the manifold near which data concentrate. Indeed, in order to distinguish (and thus reconstruct correctly) two nearby examples on the manifold, one must assign them a different code, i.e., f(x) must vary as x moves from one to the other, i.e., in the direction of a tangent to the manifold.
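For the common special case of a single-layer sigmoid encoder h = sigmoid(Wx + b), the Jacobian has the simple closed form diag(h ⊙ (1 − h)) W, so the penalty of Eq. 15.16 can be computed exactly, as in the following sketch (dimensions and parameter values are arbitrary illustrations):

import numpy as np

rng = np.random.default_rng(7)
n_in, n_hidden = 10, 20
W = rng.normal(scale=0.5, size=(n_hidden, n_in))
b = np.zeros(n_hidden)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def contractive_penalty(x, W, b):
    """Squared Frobenius norm of dh/dx for h = sigmoid(W x + b)."""
    h = sigmoid(W @ x + b)
    s = h * (1.0 - h)                    # elementwise sigmoid derivative
    J = s[:, None] * W                   # Jacobian dh/dx = diag(s) W
    return np.sum(J ** 2)                # equivalently: np.sum((s ** 2) * np.sum(W ** 2, axis=1))

x = rng.normal(size=n_in)
print("Omega(h) =", round(contractive_penalty(x, W, b), 4))
# In a CAE this penalty is added, with a weight, to the reconstruction error and
# both terms are minimized together by gradient descent.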
Figure 15.9: Average (over test examples) of the singular value spectrum of the Jacobian matrix ∂f(x)/∂x for the encoder f learned by a regular auto-encoder (AE) versus a contractive auto-encoder (CAE). This illustrates how the contractive regularizer yields a smaller set of directions in input space (those corresponding to large singular values of the Jacobian) which provoke a response in the representation h, while the representation remains almost insensitive to most directions of change in the input.
What is interesting is that this penalty forces the representation to be invariant more strongly in directions orthogonal to the manifold. This can be seen clearly by comparing the singular value spectrum of the Jacobian ∂f(x)/∂x for different auto-encoders, as shown in Figure 15.9. We see that the CAE manages to concentrate the sensitivity of the representation in fewer dimensions than a regular (or sparse) auto-encoder. Figure 17.3 illustrates tangent vectors obtained by a CAE on the MNIST digits dataset, showing that the leading tangent vectors correspond to small deformations such as translation. More impressively, Figure 15.10 shows tangent vectors learned on 32×32 color (RGB) CIFAR-10 images by a CAE,
compared to the tangent vectors estimated by a non-distributed representation learner (a mixture of local PCAs).
Figure 15.10: Illustration of tangent vectors (bottom) of the manifold estimated by a contractive auto-encoder (CAE), at some input point (left, CIFAR-10 image of a dog). See also Fig. 17.3. Each image on the right corresponds to a tangent vector, either estimated by a local PCA (equivalent to a Gaussian mixture), top, or by a CAE (bottom). The tangent vectors are estimated by the leading singular vectors of the Jacobian matrix ∂h/∂x of the input-to-code mapping. Although both local PCA and the CAE can capture local tangents that differ at different points, the local PCA does not have enough training data to meaningfully capture good tangent directions, whereas the CAE does (because it exploits parameter sharing across different locations that share a subset of active hidden units). The CAE tangent directions typically correspond to moving or changing parts of the object (such as the head or legs), which corresponds to plausible changes in the input image.
One practical issue with the CAE regularization criterion is that although it is cheap to compute in the case of a single hidden layer auto-encoder, it becomes much more expensive in the case of deeper auto-encoders. The strategy followed by Rifai et al. (2011a) is to separately pre-train each single-layer auto-encoder stacked to form a deeper auto-encoder. However, a deeper encoder could be advantageous in spite of the computational overhead, as argued by Schulz and Behnke (2012).
Another practical issue is that the contraction penalty on the encoder f could yield useless results if the decoder g were to exactly compensate (e.g. by being scaled up by exactly the same amount as f is scaled down). In Rifai et al. (2011a), this is prevented by tying the weights of f and g, both being of the form of an affine transformation followed by a non-linearity (e.g. sigmoid), with the weights of g being the transpose of the weights of f.
Chapter 16
Representation Learning
What is a good representation? Many answers are possible, and this remains a question to be further explored in future research. What we propose as an answer in this book is that, in general, a good representation is one that makes further learning tasks easy. In an unsupervised learning setting, this could mean that the joint distribution of the different elements of the representation (e.g., elements of the representation vector h) is one that is easy to model (e.g., in the extreme, these elements are marginally independent of each other). But that would not be enough: a representation that throws away all information (e.g., h = 0 for all inputs x) is very easy to model but is also useless. Hence we want to learn a representation that keeps the information (or at least all the relevant information, in the supervised case) and makes it easy to learn functions of interest from this representation.
In Chapter 1, we have introduced the notion of representation, the idea that some representations are more helpful (e.g. to classify objects from images or phonemes from speech) than others. As argued there, this suggests learning representations in order to "select" the best ones in a systematic way, i.e., by optimizing a function that maps raw data to its representation, instead of - or in addition to - handcrafting them. This motivation for learning input features is discussed in Section 6.6, and is one of the major side-effects of training a feedforward deep network (treated in Chapter 6), typically via supervised learning, i.e., when one has access to (input, target) pairs (typically obtained by labeling inputs with some target answer that we wish the computer would produce), available for some task of interest. In the case of supervised learning of deep nets, we learn a representation with the objective of selecting one that is best suited to the task of predicting targets given inputs. Whereas supervised learning has been the workhorse of recent industrial successes of deep learning, the authors of this book believe that it is likely that a key
16.1 Greedy Layerwise Unsupervised Pre-Training
Unsupervised learning played a key historical role in the revival of deep neural networks, making it possible for the first time to train a deep supervised network. We call this procedure unsupervised pre-training, or more precisely, greedy layer-wise unsupervised pre-training, and it is the topic of this section. This recipe relies on a one-layer representation learning algorithm such as those introduced in this part of the book, i.e., the auto-encoders (Chapter 15) and the RBM (Section 20.2). Each layer is pre-trained by unsupervised learning, taking the output of the previous layer and producing as output a new representation of the data, whose distribution (or its relation to other variables such as categories to predict) is hopefully simpler.
Greedy layerwise unsupervised pre-training was introduced in Hinton et al. (2006); Hinton and Salakhutdinov (2006); Bengio et al. (2007a); Ranzato et al. (2007a). These papers are generally credited with founding the renewed interest in learning deep models, as they provided a means of initializing subsequent supervised training and often led to notable performance gains when compared to models trained without unsupervised pre-training, at least for the small kinds of datasets (like the 60,000 examples of MNIST) used in these experiments.
It is called layerwise because it proceeds one layer at a time, training the k-th layer while keeping the previous ones fixed. It is called unsupervised because each layer is trained with an unsupervised representation learning algorithm. It
is called greedy because the different layers are not jointly trained with respect to a global training objective, which could make the procedure sub-optimal. In particular, the lower layers (which are trained first) are not adapted after the upper layers are introduced. However it is also called pre-training, because it is supposed to be only a first step before a joint training algorithm is applied to fine-tune all the layers together with respect to a criterion of interest. In the context of a supervised learning task, it can be viewed as a regularizer (see Chapter 7) and a sophisticated form of parameter initialization.
When we refer to pre-training we will be referring to a specific protocol with two main phases of training: the pre-training phase and the fine-tuning phase. No matter what kind of unsupervised learning algorithm or what model type you employ, in the vast majority of cases the overall training scheme is nearly the same. While the choice of unsupervised learning algorithm will obviously impact the details, in the abstract most applications of unsupervised pre-training follow this basic protocol. As outlined in Algorithm 16.1, in the pre-training phase, the layers of the model are trained, in order, in an unsupervised way on their input, beginning with the bottom layer, i.e. the one in direct contact with the input data. Next, the second lowest layer is trained taking the activations of the first layer hidden units as input for unsupervised training. Pre-training proceeds in this fashion, from bottom to top, with each layer training on the "output" or activations of the hidden units of the layer below. After the last layer is pre-trained, a supervised layer is put on top, and all the layers are jointly trained with respect to the overall supervised training criterion. In other words, the pre-training was only used to initialize a deep supervised neural network (which could be a convolutional neural network (Ranzato et al., 2007a)). This is illustrated in Figure 16.1.
However, greedy layerwise unsupervised pre-training can also be used as initialization for other unsupervised learning algorithms, such as deep auto-encoders (Hinton and Salakhutdinov, 2006), deep belief networks (Hinton et al., 2006) (Section 20.4), or deep Boltzmann machines (Salakhutdinov and Hinton, 2009a) (Section 20.5). As discussed in Section 8.6.4, it is also possible to have greedy layerwise supervised pre-training, to help optimize deep supervised networks. This builds on the premise that training a shallow network is easier than training a deep one, which seems to have been validated in several contexts (Erhan et al., 2010).
16.1.1 Why Does Unsupervised Pre-Training Work?
What has been observed on several datasets starting in 2006 (Hinton et al., 2006; Bengio et al., 2007a; Ranzato et al., 2007a) is that greedy layer-wise unsupervised pre-training can yield substantial improvements in test error for classification
Algorithm 16.1 Greedy layer-wise unsupervised pre-training protocol.
Given the following: an unsupervised feature learner L, which takes a training set D of examples and returns an encoder or feature function f = L(D). The raw input data is X, with one row per example, and f^(1)(X) is the dataset used by the second-level unsupervised feature learner. In the case where fine-tuning is performed, we use a learner T which takes an initial function f, input examples X (and in the supervised fine-tuning case, associated targets Y), and returns a tuned function. The number of stages is M.
D ← X
f ← identity function
for k = 1, ..., M do
    f^(k) ← L(D)
    f ← f^(k) ∘ f
    D ← f^(k)(D)
end for
if fine-tuning then
    f ← T(f, X, Y)
end if
Return f
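A compact Python rendering of this protocol is sketched below; the per-layer learner is a stand-in (a PCA-style projection, chosen only so the example runs) for whichever unsupervised feature learner L one would actually use, and the fine-tuning step T is only indicated in a comment.

import numpy as np

def pca_layer_learner(D, k=5):
    """Stand-in unsupervised learner L: returns an encoder projecting onto the
    top-k principal components of its input D (one row per example)."""
    mean = D.mean(axis=0)
    _, _, Vt = np.linalg.svd(D - mean, full_matrices=False)
    components = Vt[:k]
    return lambda X: (X - mean) @ components.T

def greedy_layerwise_pretrain(X, learner, n_layers):
    """Greedy layer-wise unsupervised pre-training (cf. Algorithm 16.1)."""
    encoders = []
    D = X                                   # D starts as the raw input data
    for _ in range(n_layers):
        f_k = learner(D)                    # train layer k unsupervised on D
        encoders.append(f_k)
        D = f_k(D)                          # the next layer sees this layer's output
    def f(X_new):                           # composed encoder f_M(...f_1(x))
        for f_k in encoders:
            X_new = f_k(X_new)
        return X_new
    return f

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 30))
f = greedy_layerwise_pretrain(X, pca_layer_learner, n_layers=3)
print("representation shape:", f(X).shape)
# Supervised (or unsupervised) fine-tuning T(f, X, Y) would then adjust all
# layers jointly, as described in the protocol above.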
Figure 16.1: Illustration of the greedy layer-wise unsupervised pre-training scheme, in the case of a network with 3 hidden layers. The protocol proceeds in 4 phases (one per hidden layer, plus the final supervised fine-tuning phase), from left to right. For the unsupervised steps, each layer (darker grey) is trained to learn a better representation of the output of the previously trained layer (initially, the raw input). These representations learned by unsupervised learning form the initialization of a deep supervised net, which is then trained (fine-tuned) as usual (last phase, right), with all parameters being free to change (darker grey).
tasks. Later work suggested that the improvements were less marked (or not even visible) when very large labeled datasets are available, although the boundary between the two behaviors remains to be clarified, i.e., it may not just be an issue of the number of labeled examples but also of how this relates to the complexity of the function to be learned.
A question that thus naturally arises is the following: why and when does unsupervised pre-training work? Although previous studies have mostly focused on the case when the final task is supervised (with supervised fine-tuning), it is also interesting to keep in mind that one gets improvements in terms of both training and test performance in the case of unsupervised fine-tuning, e.g., when training deep auto-encoders (Hinton and Salakhutdinov, 2006). This "why does it work" question is at the center of the paper by Erhan et al. (2010), and their experiments focused on the supervised fine-tuning case. They considered different machine learning hypotheses to explain the results observed, and attempted to confirm those via experiments. We summarize some of this investigation here.
First of all, they studied the trajectories of neural networks during supervised fine-tuning, and evaluated how different they were depending on initial conditions, i.e., due to random initialization or due to performing unsupervised pre-training or not. The main result is illustrated and discussed in Figures 16.2 and 16.3. Note that it would not make sense to plot the evolution of the parameters of these networks directly, because the same input-to-output function can be represented by different parameter values (e.g., by relabeling the hidden units). Instead, this work plots the trajectories in function space, by considering the output of a network (the class probability predictions) for a given set of test examples as a proxy for the function computed. By concatenating all these outputs (over say 1000 examples) and doing dimensionality reduction on these vectors, we obtain the kinds of plots illustrated in the figures. The main conclusions of these kinds of plots are the following:
1. Each training trajectory goes to a different place, i.e., different trajectories do not converge to the same place. These "places" might be in the vicinity of a local minimum, or, as we understand it better now (Dauphin et al., 2014), they are more likely to be an "apparent local minimum" in the region of flat derivatives near a saddle point. This suggests that the number of these apparent local minima is huge, which is also in agreement with theory (Dauphin et al., 2014; Choromanska et al., 2014).
2. Depending on whether we initialize with unsupervised pre-training or not, very different functions (in function space) are obtained, covering regions that do not overlap. Hence there is a qualitative effect due to unsupervised pre-training.
Figure 16.2: Illustrations of the trajectories of different neural networks in function space (not parameter space, to avoid the issue of many-to-one mapping from parameter vector to function), with different random initializations and with or without unsupervised pretraining. Each plus or diamond point corresponds to a different neural network, at a particular time during its training trajectory, with the function it computes projected to 2-D by t-SNE (van der Maaten and Hinton, 2008a) (this figure) or by Isomap (Tenenbaum et al., 2000) (Figure 16.3). TODO: should the t be in math mode? Color indicates the number of training epochs. What we see is that no two networks converge to the same function (so a large number of apparent local minima seems to exist), and that networks initialized with pre-training learn very different functions, in a region of function space that does not overlap at all with those learned by networks without pre-training. Such curves were introduced by Erhan et al. (2010) and are reproduced here with permission.
Figure 16.3: See Figure 16.2's caption. This figure only differs in the use of Isomap (Tenenbaum et al., 2000) rather than t-SNE (van der Maaten and Hinton, 2008b) for dimensionality reduction. Note that Isomap tries to preserve global relative distances (and hence volumes), whereas t-SNE only cares about preserving local geometry and neighborhood relationships. We see with the Isomap dimensionality reduction that the volume in function space occupied by the networks with pre-training is much smaller (in fact that volume gets reduced rather than increased during training), suggesting that the set of solutions enjoys smaller variance, which would be consistent with the observed improvements in generalization error. Such curves were introduced by Erhan et al. (2010) and are reproduced here with permission.
Figure 16.4: Histograms presenting the test errors obtained on MNIST using denoising auto-encoder models trained with or without pre-training (400 different initializations each). Left: 1 hidden layer. Right: 4 hidden layers. We see that the advantage brought by pre-training increases with depth, both in terms of mean error and in terms of the variance of the error (w.r.t. random initialization). Figure reproduced with permission from Erhan et al. (2010).
pre-training.

3. With unsupervised pre-training, the region of space covered by the solutions associated with different initializations shrinks as we consider more training iterations, whereas it grows without unsupervised pre-training. This is only apparent in the visualization of Figure 16.3, which attempts to preserve volume. A larger region is bad for generalization (because not all these functions can be the right one together), yielding higher variance. This is consistent with the better generalization observed with unsupervised pre-training.

Another interesting effect is that the advantage of pre-training seems to increase with depth, as illustrated in Figure 16.4, with both the mean and the variance of the error decreasing more for deeper networks. An important question is whether the advantage brought by pre-training can be seen as a form of regularizer (which could help test error but hurt training error) or simply as a way to find a better minimizer of training error (e.g., by initializing near a better minimum of training error). The experiments suggest that pre-training actually acts as a regularizer, i.e., hurting training error at least in some cases (with deeper networks). So if it also helps optimization, it is only because it initializes closer to a good solution from the point of view of generalization, not necessarily from the point of view of the training set.
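As a concrete illustration of the methodology behind Figures 16.2 and 16.3, the following sketch (not from Erhan et al.; it assumes scikit-learn and models exposing a generic `predict_proba` method) shows how each saved network checkpoint can be summarized by the concatenation of its class-probability predictions on a fixed test set, so that every checkpoint becomes a single point that t-SNE can embed in 2-D:

```python
import numpy as np
from sklearn.manifold import TSNE

def function_space_points(checkpoints, x_test):
    """Summarize each saved model by its class-probability predictions on a fixed
    test set, used as a proxy for the function the network computes."""
    points = []
    for model in checkpoints:
        probs = model.predict_proba(x_test)   # shape (n_examples, n_classes)
        points.append(probs.reshape(-1))      # one long vector per checkpoint
    return np.stack(points)

# Hypothetical usage: `checkpoints` collects models saved at several epochs, for several
# random seeds, with and without unsupervised pre-training; `x_test` holds ~1000 examples.
# embedding = TSNE(n_components=2).fit_transform(function_space_points(checkpoints, x_test))
# Each row of `embedding` is one point in plots like Figure 16.2, colored by epoch.
```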
How could unsupervised pre-training act as a regularizer? Simply by imposing an extra constraint: the learned representations should not only be consistent with better predicting outputs y but they should also be consistent with better capturing the variations in the input x, i.e., modeling P (x). This is associated implicitly with a prior, i.e., that P (y | x) and P (x) share structure, i.e., that learning about P (x) can help us generalize better on P (y | x). Obviously this need not be the case in general, e.g., if y is an effect of x. However, if y is a cause of x, then we would expect this a priori assumption to be correct, as discussed at greater length in Section 16.4 in the context of semi-supervised learning. A disadvantage of unsupervised pre-training is that it is difficult to choose the capacity hyperparameters (such as when to stop training) for the pre-training phases. An expensive option is to try many different values of these hyperparameters and choose the one which gives the best supervised learning error after fine-tuning. Another potential disadvantage is that unsupervised pre-training may require larger representations than would be strictly necessary for the task at hand, since presumably y is only one of the factors that explain x. Today, as many deep learning researchers and practitioners have moved to working with very large labeled datasets, unsupervised pre-training has become less popular in favor of other forms of regularization such as dropout, to be discussed in Section 7.11. Nevertheless, unsupervised pre-training remains an important tool in the deep learning toolbox and should particularly be considered when the number of labeled examples is low, such as in the semi-supervised, domain adaptation and transfer learning settings, discussed next.
16.2 Transfer Learning and Domain Adaptation
Transfer learning and domain adaptation refer to the situation where what has been learned in one setting (i.e., distribution P1) is exploited to improve generalization in another setting (say distribution P2). In the case of transfer learning, we consider that the task is different but many of the factors that explain the variations in P1 are relevant to the variations that need to be captured for learning P2. This is typically understood in a supervised learning context, where the input is the same but the target may be of a different nature, e.g., learn about visual categories that are different in the first and the second setting. If there is a lot more data in the first setting (sampled from P1), then that may help to learn representations that are useful to quickly generalize when examples of P2 are drawn. For example, many visual categories share low-level notions of edges and visual shapes, the effects of geometric changes, changes in lighting, etc. In general, transfer learning, multi-task learning (Section 7.12), and domain adaptation can be achieved via representation learning when there
exist features that would be useful for the different settings or tasks, i.e., there are shared underlying factors. This is illustrated in Figure 7.6, with shared lower layers and task-dependent upper layers. However, sometimes, what is shared among the different tasks is not the semantics of the input but the semantics of the output, or maybe the input needs to be treated differently (e.g., consider user adaptation or speaker adaptation). In that case, it makes more sense to share the upper layers (near the output) of the neural network, and have a task-specific pre-processing, as illustrated in Figure 16.5.
Figure 16.5: Example of architecture for multi-task or transfer learning when the output variable Y has the same semantics for all tasks while the input variable X has a different meaning (and possibly even a different dimension) for each task (or, for example, each user), called X1, X2 and X3 for three tasks in the figure. The lower levels (up to the selection switch) are task-specific, while the upper levels are shared. The lower levels learn to translate their task-specific input into a generic set of features.
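The architecture of Figure 16.5 can be sketched in code as follows (a minimal, hypothetical PyTorch sketch, not taken from the text; layer sizes and the number of tasks are illustrative): each task gets its own lower layers that map its input Xi into a shared feature space, and the upper layers producing Y are shared across tasks.

```python
import torch
import torch.nn as nn

class SharedTopMultiTaskNet(nn.Module):
    """Task-specific lower layers (one encoder per input type) feeding shared upper layers."""
    def __init__(self, input_dims, shared_dim=128, n_outputs=10):
        super().__init__()
        # One encoder per task: translates the task-specific input X_i into generic features.
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, shared_dim), nn.ReLU()) for d in input_dims]
        )
        # Upper layers shared by all tasks, since Y has the same semantics everywhere.
        self.shared_top = nn.Sequential(
            nn.Linear(shared_dim, shared_dim), nn.ReLU(), nn.Linear(shared_dim, n_outputs)
        )

    def forward(self, x, task_id):
        h = self.encoders[task_id](x)   # plays the role of the "selection switch"
        return self.shared_top(h)

# Hypothetical usage with three tasks whose inputs have different dimensions:
net = SharedTopMultiTaskNet(input_dims=[32, 64, 17])
y1 = net(torch.randn(8, 32), task_id=0)
y3 = net(torch.randn(8, 17), task_id=2)
```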
In the related case of domain adaptation, we consider that the task (and the optimal input-to-output mapping) is the same but the input distribution is slightly different. For example, if we predict sentiment (positive or negative judgement) associated with textual comments posted on the web, the first setting may refer to
consumer comments about books, videos and music, while the second setting may refer to televisions or other products. One can imagine that there is an underlying function that tells whether any statement is positive, neutral or negative, but of course the vocabulary, style, accent, may vary from one domain to another, making it more difficult to generalize across domains. Simple unsupervised pre-training (with denoising auto-encoders) has been found to be very successful for sentiment analysis with domain adaptation (Glorot et al., 2011c). A related problem is that of concept drift, which we can view as a form of transfer learning due to gradual changes in the data distribution over time. Both concept drift and transfer learning can be viewed as particular forms of multi-task learning (Section 7.12). Whereas multi-task learning is typically considered in the context of supervised learning, the more general notion of transfer learning is applicable to unsupervised learning and reinforcement learning as well. Figure 7.6 illustrates an architecture in which different tasks share underlying features or factors, taken from a larger pool that explain the variations in the input. In all of these cases, the objective is to take advantage of data from a first setting to extract information that may be useful when learning or even when directly making predictions in the second setting. One of the potential advantages of representation learning for such generalization challenges, and especially of deep representation learning, is that it may considerably help to generalize by extracting and disentangling a set of explanatory factors from data of the first setting, some of which may be relevant to the second setting. In the case of object recognition from an image, many of the factors of variation that explain visual categories in natural images remain the same when we move from one set of categories to another. This discussion raises a very interesting and important question which is one of the core questions of this book: what is a good representation? Is it possible to learn representations that disentangle the underlying factors of variation? This theme is further explored at the end of this chapter (Section 16.4 and beyond). We claim that learning the most abstract features helps to maximize our chances of success in transfer learning, domain adaptation, or concept drift. More abstract features are more general and more likely to be close to the underlying causal factors, i.e., to be relevant over many domains, many categories, and many time periods. A good example of the success of unsupervised deep learning for transfer learning is its success in two competitions that took place in 2011, with results presented at ICML 2011 (and IJCNN 2011) in one case (Mesnil et al., 2011) (the Transfer Learning Challenge, http://www.causality.inf.ethz.ch/unsupervised-learning.php) and at NIPS 2011 (Goodfellow et al., 2011) in the other case (the Transfer Learning Challenge that was held as part of the NIPS’2011 workshop on Challenges in Learning Hierarchical Models).
Figure 16.6: Results obtained on the Sylvester validation set (Transfer Learning Challenge). From left to right and top to bottom, respectively 0, 1, 2, 3, and 4 pre-trained layers. Horizontal axis is logarithm of number of labeled training examples on transfer setting (test task). Vertical axis is Area Under the Curve, which reflects classification accuracy. With deeper representations (learned unsupervised), the learning curves considerably improve, requiring fewer labeled examples to achieve the best generalization.
Figure 16.7: Figure illustrating how zero-data or zero-shot learning is possible. The trick is that the new context or task on which no example is given but on which we want a prediction is represented (with an input t), e.g., with a set of features, i.e., a distributed representation, and that representation is used by the predictor ft(x). If t was a one-hot vector for each task, then it would not be possible to generalize to a new task, but with a distributed representation the learner can benefit from the meaning of the individual task features (as they influence the relationship between inputs x and targets y, say), learned on other tasks for which examples are available.
images to words for which no labeled images were previously shown to the learner. A similar phenomenon happens in machine translation (Klementiev et al., 2012; Mikolov et al., 2013; Gouws et al., 2014): we have words in one language, and the relationships between words can be learned from unilingual corpora; on the other hand, we have translated sentences which relate words in one language with words in the other. Even though we may not have labeled examples translating word A in language X to word B in language Y, we can generalize and guess a translation for word A because we have learned a distributed representation for words in language X, a distributed representation for words in language Y, and created a link (possibly two-way) relating the two spaces, via translation examples. Note that this transfer will be most successful if all three ingredients (the two representations and the relations between them) are learned jointly.
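A minimal sketch of one way to exploit this idea (an illustration in the spirit of the linear translation-matrix approach of Mikolov et al. (2013); the arrays below are placeholders standing in for pre-trained monolingual word embeddings): a small set of paired translations fixes a linear map between the two embedding spaces, after which a word never seen in the bilingual dictionary can still be translated by nearest neighbor in the target space.

```python
import numpy as np

rng = np.random.default_rng(0)
emb_x = rng.normal(size=(5000, 300))   # placeholder embeddings for words of language X
emb_y = rng.normal(size=(4000, 300))   # placeholder embeddings for words of language Y

# A small bilingual dictionary: x_idx[i] is known to translate into y_idx[i].
x_idx = np.arange(1000)
y_idx = np.arange(1000)

# Fit a linear map W minimizing ||emb_x[x_idx] @ W - emb_y[y_idx]||^2 (least squares).
W, *_ = np.linalg.lstsq(emb_x[x_idx], emb_y[y_idx], rcond=None)

def translate(word_index):
    """Guess a translation for a language-X word absent from the bilingual dictionary."""
    projected = emb_x[word_index] @ W                      # map into the Y embedding space
    sims = emb_y @ projected / (
        np.linalg.norm(emb_y, axis=1) * np.linalg.norm(projected) + 1e-9
    )
    return int(np.argmax(sims))                            # nearest word in language Y
```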
Figure 16.8: Transfer learning between two domains corresponds to zero-shot learning. A first set of data (dashed arrows) can be used to relate examples in one domain (top left, X) and fix a relationship between their representations, a second set of data (dotted arrows) can be used to similarly relate examples and their representation in the other domain (bottom right, Y ), while a third dataset (full large arrows) anchors the two representations together, with examples consisting of pairs (x, y) taken from the two domains. In this way, one can for example associate an image to a word, even if no images of that word were ever presented, simply because word-representations (top) and image-representations (bottom) have been learned jointly with a two-way relationship between them.
This is illustrated in Figure 16.8, where we see that zero-shot learning is a particular form of transfer learning. The same principle explains how one can perform multi-modal learning, capturing a representation in one modality, a representation in the other, and the relationship (in general a joint distribution) between pairs (x, y) consisting of one observation x in one modality and another observation y in the other modality (Srivastava and Salakhutdinov, 2012). By learning all three sets of parameters (from x to its representation, from y to its
representation, and the relationship between the two representations), concepts in one map are anchored in the other, and vice-versa, allowing one to meaningfully generalize to new pairs.
16.3 Semi-Supervised Learning
As discussed in Section 16.1.1 on the advantages of unsupervised pre-training, unsupervised learning can have a regularization effect in the context of supervised learning. This fits in the more general category of combining unlabeled examples, distributed according to the unknown P (x), with labeled examples (x, y), with the objective of estimating P (y | x). Exploiting unlabeled examples to improve performance on a labeled set is the driving idea behind semi-supervised learning (Chapelle et al., 2006). For example, one can use unsupervised learning to map X into a representation (also called embedding) such that two examples x1 and x2 that belong to the same cluster (or are reachable through a short path going through neighboring examples in the training set) end up having nearby embeddings. One can then use supervised learning (e.g., a linear classifier) in that new space and achieve better generalization in many cases (Belkin and Niyogi, 2002; Chapelle et al., 2003). A long-standing variant of this approach is the application of Principal Components Analysis as a pre-processing step before applying a classifier (on the projected data). In these models, the data is first transformed into a new representation using unsupervised learning, and a supervised classifier is stacked on top, learning to map the data in this new representation into class predictions. Instead of having separate unsupervised and supervised components in the model, one can consider models in which P (x) (or P (x, y)) and P (y | x) share parameters (or whose parameters are connected in some way), and one can trade off the supervised criterion − log P (y | x) with the unsupervised or generative one (− log P (x) or − log P (x, y)). It can then be seen that the generative criterion corresponds to a particular form of prior (Lasserre et al., 2006), namely that the structure of P (x) is connected to the structure of P (y | x) in a way that is captured by the shared parametrization. By controlling how much of the generative criterion is included in the total criterion, one can find a better trade-off than with a purely generative or a purely discriminative training criterion (Lasserre et al., 2006; Larochelle and Bengio, 2008b). In the context of deep architectures, a very interesting application of these ideas involves adding an unsupervised embedding criterion at each layer (or only one intermediate layer) to a traditional supervised criterion (Weston et al., 2008). This has been shown to be a powerful semi-supervised learning strategy, and is an alternative to the unsupervised pre-training approach described earlier in this chapter, which also combines unsupervised learning with supervised learning.
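The trade-off between the two criteria can be made concrete with a small sketch (an illustration only, with assumptions made for simplicity: a shared linear encoder, a softmax classifier for the supervised term, and a squared reconstruction error standing in for − log P (x) under a fixed-variance Gaussian decoder):

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(scale=0.1, size=(50, 20))   # shared encoder: x (50-d) -> h (20-d)
W_dec = rng.normal(scale=0.1, size=(20, 50))   # decoder used by the generative term
W_cls = rng.normal(scale=0.1, size=(20, 3))    # classifier used by the supervised term

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def total_loss(x_labeled, y_labeled, x_unlabeled, lam=0.5):
    # Supervised term: -log P(y | x), computed through the shared representation h = x W_enc.
    h = x_labeled @ W_enc
    p = softmax(h @ W_cls)
    supervised = -np.mean(np.log(p[np.arange(len(y_labeled)), y_labeled] + 1e-12))
    # Generative term on all examples: squared reconstruction error, a stand-in for -log P(x).
    x_all = np.concatenate([x_labeled, x_unlabeled])
    recon = (x_all @ W_enc) @ W_dec
    generative = np.mean((x_all - recon) ** 2)
    # lam controls how much of the generative criterion enters the total criterion.
    return supervised + lam * generative
```

Varying lam between 0 (purely discriminative) and large values (essentially generative) traces out the trade-off described above.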
In the context of scarcity of labeled data (and abundance of unlabeled data), deep architectures have shown promise as well. Salakhutdinov and Hinton (2008) describe a method for learning the covariance matrix of a Gaussian Process, in which the usage of unlabeled examples for modeling P (x) improves P (y | x) quite significantly. Note that such a result is to be expected: with few labeled samples, modeling P (x) usually helps, as argued below (Section 16.4). These results show that even in the context of abundant labeled data, unsupervised pre-training can have a pronounced positive effect on generalization: a somewhat surprising conclusion.
16.4 Semi-Supervised Learning and Disentangling Underlying Causal Factors
What we put forward as a hypothesis, going a bit further, is that an ideal representation is one that disentangles the underlying causal factors of variation that generated the observed data. Note that this may be different from “easy to model”, but we further assume that for most problems of interest, these two properties coincide: once we “understand” the underlying explanations for what we observe, it generally becomes easy to predict one thing from others. A very basic question is whether unsupervised learning on input variables x can yield representations that are useful when later trying to learn to predict some target variable y, given x. More generally, when does semi-supervised learning work? See also Section 16.3 for an earlier discussion. It turns out that the answer to this question depends strongly on the underlying relationship between x and y. Put differently, the question is whether P (y | x), seen as a function of x, has anything to do with P (x). If not, then unsupervised learning of P (x) can be of no help to learn P (y | x). Consider for example the case where P (x) is uniformly distributed and E[y | x] is some function of interest. Clearly, observing x alone gives us no information about P (y | x). As a better case, consider the situation where x arises from a mixture, with one mixture component per value of y, as illustrated in Figure 16.9. If the mixture components are well separated, then modeling P (x) tells us precisely where each component is, and a single labeled example per component will then be enough to perfectly learn P (y | x). But more generally, what could make P (y | x) and P (x) tied together? If y is closely associated with one of the causal factors of x, then, as first argued by Janzing et al. (2012), P (x) and P (y | x) will be strongly tied, and unsupervised representation learning that tries to disentangle the underlying factors of variation is likely to be useful as a semi-supervised learning strategy. Consider the assumption that y is one of the causal factors of x, and let h
Figure 16.9: Example of a density over x that is a mixture over three components. The component identity is an underlying explanatory factor, y. Because the mixture components (e.g., natural object classes in image data) are statistically salient, just modeling P (x) in an unsupervised way with no labeled example already reveals the factor y.
represent all those factors. Then the true generative process can be conceived as structured according to this directed graphical model, with h as the parent of x:
$$P(h, x) = P(x \mid h)\,P(h).$$
As a consequence, the data has marginal probability
$$P(x) = \int P(x \mid h)\,p(h)\,dh$$
or, in the discrete case (like in the mixture example above):
$$P(x) = \sum_h P(x \mid h)\,P(h).$$
From this straightforward observation, we conclude that the best possible model of x (from a generalization point of view) is the one that uncovers the above “true” structure, with h as a latent variable that explains the observed variations in x. The “ideal” representation learning discussed above should thus recover these latent factors. If y is one of them (or closely related to one of them), then it will be very easy to learn to predict y from such a representation. We also see that the conditional distribution of y given x is tied by Bayes rule to the components in the above equation:
$$P(y \mid x) = \frac{P(x \mid y)\,P(y)}{P(x)}.$$
Thus the marginal P (x) is intimately tied to the conditional P (y | x) and knowledge of the structure of the former should be helpful to learn the latter, i.e., semi-supervised learning works. Furthermore, not knowing which of the factors in h will be the one of interest, say y = hi, an unsupervised learner should learn a representation that disentangles all the generative factors hj from each other, then making it easy to predict y from h. In addition, as pointed out by Janzing et al. (2012), if the true generative process has x as an effect and y as a cause, then modeling P (x | y) is robust to changes in P (y). If the cause-effect relationship were reversed, this would not be true, since by Bayes rule, P (x | y) would be sensitive to changes in P (y). Very often, when we consider changes in distribution due to different domains, temporal non-stationarity, or changes in the nature of the task, the causal mechanisms remain invariant (“the laws of the universe are constant”) whereas what changes are the marginal distribution over the underlying causes (or which factors are linked to our particular task). Hence, better generalization and robustness to all kinds of changes can be expected via learning a generative model that attempts to recover the causal factors h and P (x | h).
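To make the mixture example of Figure 16.9 concrete, here is a small sketch (an illustration with synthetic data, using scikit-learn's GaussianMixture): the mixture is fit without any labels, and a single labeled example per component is then enough to name each component and hence to predict y for every x.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic data: three well-separated components; y is the component identity.
means = np.array([[-5.0], [0.0], [5.0]])
x = np.concatenate([m + rng.normal(scale=0.5, size=(200, 1)) for m in means])

# Unsupervised step: model P(x) as a mixture, without using any labels.
gmm = GaussianMixture(n_components=3, random_state=0).fit(x)

# Semi-supervised step: one labeled example per class suffices to name the components.
x_labeled = means
y_labeled = np.array([0, 1, 2])
component_to_class = {int(c): int(y) for c, y in zip(gmm.predict(x_labeled), y_labeled)}

def predict_class(x_new):
    """Predict y for new inputs by mapping their most probable component to its class."""
    return np.array([component_to_class[int(c)] for c in gmm.predict(x_new)])
```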
16.5 Assumption of Underlying Factors and Distributed Representation
A very basic notion that comes out of the above discussion and of the notion of “disentangled factors” is the very idea that there are underlying factors that generate the observed data. It is a core assumption behind most neural network and deep learning research, more precisely relying on the notion of distributed representation. What we call a distributed representation is one which can express an exponentially large number of concepts by composing the activations of many features. An example of distributed representation is a vector of n binary features, which can take 2^n configurations, each potentially corresponding to a different region in input space. This can be compared with a symbolic representation, where the input is associated with a single symbol or category. If there are n symbols in the dictionary, one can imagine n feature detectors, each corresponding to the detection of the presence of the associated category. In that case only n different configurations of the representation-space are possible, carving n different regions in input space. Such a symbolic representation is also called a one-hot representation, since it can be captured by a binary vector with n bits that are mutually exclusive (only one of them can be active). These ideas are developed further in the next section. Examples of learning algorithms based on non-distributed representations include:
• Clustering methods, including the k-means algorithm: only one cluster “wins” the competition.

• k-nearest neighbors algorithms: only one template or prototype example is associated with a given input.

• Decision trees: only one leaf (and the nodes on the path from root to leaf) is activated when an input is given.

• Gaussian mixtures and mixtures of experts: the templates (cluster centers) or experts are now associated with a degree of activation, which makes the posterior probability of components (or experts) given the input look more like a distributed representation. However, as discussed in the next section, these models still suffer from a poor statistical scaling behavior compared to those based on distributed representations (such as products of experts and RBMs).

• Kernel machines with a Gaussian kernel (or other similarly local kernel): although the degree of activation of each “support vector” or template example is now continuous-valued, the same issue arises as with Gaussian mixtures.

• Language or translation models based on N-grams: the set of contexts (sequences of symbols) is partitioned according to a tree structure of suffixes (e.g., a leaf may correspond to the last two words being w1 and w2), and separate parameters are estimated for each leaf of the tree (with some sharing being possible of parameters associated with internal nodes, between the leaves of the sub-tree rooted at the same internal node).
Figure 16.10: Illustration of how a learning algorithm based on a non-distributed representation breaks up the input space into regions, with a separate set of parameters for each region. For example, a clustering algorithm or a 1-nearest-neighbor algorithm associates one template (colored X) to each region. This is also true of decision trees, mixture models, and kernel machines with a local (e.g., Gaussian) kernel. In the latter algorithms, the output is not piecewise constant but instead interpolates between neighboring regions, but the relationship between the number of parameters (or examples) and the number of regions they can define remains linear. The advantage is that a different answer (e.g., density function, predicted output, etc.) can be independently chosen for each region. The disadvantage is that there is no generalization to new regions, except by extending the answer for which there is data, exploiting solely a smoothness prior. This makes it difficult to learn a complicated function, with more ups and downs than the available number of examples. Contrast this with a distributed representation, Figure 16.11.
An important related concept that distinguishes a distributed representation from a symbolic one is that generalization arises due to shared attributes between different concepts. As pure symbols, “cat” and “dog” are as far from each other as any other two symbols. However, if one associates them with a meaningful distributed representation, then many of the things that can be said about cats can generalize to dogs and vice-versa. This is what allows neural language models to generalize so well (Section 12.4). Distributed representations induce a rich similarity space, in which semantically close concepts (or inputs) are close in distance, a property that is absent from purely symbolic representations. Of
course, one would get a distributed representation if one associated multiple symbolic attributes with each symbol.
Figure 16.11: Illustration of how a learning algorithm based on a distributed representation breaks up the input space into regions, with exponentially more regions than parameters. Instead of a single partition (as in the non-distributed case, Figure 16.10), we have many partitions, one per “feature”, and all their possible intersections. In the example of the figure, there are 3 binary features C1, C2, and C3, each corresponding to partitioning the input space in two regions according to a hyperplane, i.e., each is a linear classifier. Each possible intersection of these half-planes forms a region, i.e., each region corresponds to a configuration of the bits specifying whether each feature is 0 or 1, i.e., on which side of each hyperplane the input falls. If the input space is large enough, the number of regions grows exponentially with the number of features, i.e., of parameters. However, the way these regions carve the input space still depends on few parameters: this huge number of regions is not placed independently of each other. We can thus represent a function that looks complicated but actually has structure. Basically, the assumption is that one can learn about each feature without having to see the examples for all the configurations of all the other features, i.e., these features correspond to underlying factors explaining the data.
Note that a sparse representation is a distributed representation where the number of attributes that are active together is small compared to the total number of attributes. For example, in the case of binary representations, one might
have only k ≪ n of the n bits that are non-zero. The power of the representation grows exponentially with the number of active attributes, e.g., O(n^k) in the above example of binary vectors. At the extreme, a symbolic representation is a very sparse representation where only one attribute at a time can be active.
16.6 Exponential Gain in Representational Efficiency from Distributed Representations
When and why can there be a statistical advantage from using a distributed representation as part of a learning algorithm? Figures 16.10 and 16.11 explain that advantage in intuitive terms. The argument is that a function that “looks complicated” can be compactly represented using a small number of parameters, if some “structure” is uncovered by the learner. Traditional “non-distributed” learning algorithms generalize only due to the smoothness assumption, which states that if u ≈ v, then the target function f to be learned has the property that f(u) ≈ f(v), in general. There are many ways of formalizing such an assumption, but the end result is that if we have an example (x, y) for which we know that f(x) ≈ y, then we choose an estimator f̂ that approximately satisfies these constraints while changing as little as possible. This assumption is clearly very useful, but it suffers from the curse of dimensionality: in order to learn a target function that takes many different values (e.g., many ups and downs) in a large number of regions (e.g., exponentially many regions: in a d-dimensional space with at least 2 different values to distinguish per dimension, we might want f to differ in 2^d different regions, requiring O(2^d) training examples), we may need a number of examples that is at least as large as the number of distinguishable regions. One can think of each of these regions as a category or symbol: by having a separate degree of freedom for each symbol (or region), we can learn an arbitrary mapping from symbol to value. However, this does not allow us to generalize to new symbols, new regions. If we are lucky, there may be some regularity in the target function, besides being smooth. For example, the same pattern of variation may repeat itself many times (e.g., as in a periodic function or a checkerboard). If we only use the smoothness prior, we will need additional examples for each repetition of that pattern. However, as discussed by Montufar et al. (2014), a deep architecture could represent and discover such a repetition pattern and generalize to new instances of it. Thus a small number of parameters (and therefore, a small number of examples) could suffice to represent a function that looks complicated (in the sense that it would be expensive to represent with a non-distributed architecture). Figure 16.11 shows a simple example, where we have n binary features
in a d-dimensional space, and where each binary feature corresponds to a linear classifier that splits the input space in two parts. The exponentially large number of intersections of n of the corresponding half-spaces corresponds to as many distinguishable regions that a distributed representation learner could capture. How many regions are generated by an arrangement of n hyperplanes in R^d? This corresponds to the number of regions that a shallow neural network (one hidden layer) can distinguish (Pascanu et al., 2014b), which is
$$\sum_{j=0}^{d} \binom{n}{j} = O(n^d),$$
following a more general result from Zaslavsky (1975), known as Zaslavsky’s theorem, one of the central results from the theory of hyperplane arrangements. Therefore, we see a growth that is exponential in the input size and polynomial in the number of hidden units. Although a distributed representation (e.g., a shallow neural net) can represent a richer function with a smaller number of parameters, there is no free lunch: to construct an arbitrary partition (say with 2^d different regions) one will need a correspondingly large number of hidden units, i.e., of parameters and of examples. The use of a distributed representation therefore also corresponds to a prior, which comes on top of the smoothness prior. To return to the hyperplanes example of Figure 16.11, we see that we are able to get this generalization because we can learn about the location of each hyperplane with only O(d) examples: we do not need to see examples corresponding to all O(n^d) regions. Let us consider a concrete example. Imagine that the input is the image of a person, and that we have a classifier that detects whether the person is a child or not, another that detects if that person is a male or a female, another that detects whether that person wears glasses or not, etc. Keep in mind that these features are discovered automatically, not fixed a priori. We can learn about the male vs female distinction, or about the glasses vs no-glasses case, without having to consider all of the configurations of the n features. This form of statistical separability is what allows one to generalize to new configurations of a person’s features that have never been seen during training. It corresponds to the prior discussed above regarding the existence of multiple underlying explanatory factors. This prior is very plausible for most of the data distributions on which human intelligence would be useful, but it may not apply to every possible distribution. However, this apparently innocuous assumption buys us a lot, statistically speaking, because it allows the learner to discover structure with a reasonably small number of examples that would otherwise require exponentially more training data. Another interesting result illustrating the statistical effect of a distributed representation versus a non-distributed one is the mathematical analysis (Montufar
and Morton, 2014) of products of mixtures (which include the RBM as a special case) versus mixture of products (such as the mixture of Gaussians). The analysis shows that a mixture of products can require an exponentially larger number of parameters in order to represent the probability distributions arising out of a product of mixtures.
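The region count given above is easy to evaluate numerically; the short sketch below (an illustration, not from the text) compares the maximum number of regions that n hyperplanes can carve out of R^d with the 2^n regions one would have to handle separately under a purely local, non-distributed approach:

```python
from math import comb

def regions(n_hyperplanes, d):
    """Maximum number of regions carved out of R^d by n hyperplanes in general position
    (Zaslavsky's theorem): sum of C(n, j) for j = 0..d, which is O(n^d) for fixed d."""
    return sum(comb(n_hyperplanes, j) for j in range(min(d, n_hyperplanes) + 1))

for n, d in [(10, 2), (10, 3), (100, 3)]:
    print(f"n={n}, d={d}: {regions(n, d)} regions (vs 2^n = {2 ** n})")
```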
16.7 Exponential Gain in Representational Efficiency from Depth
In the above example with the input being an image of a person, it would not be reasonable to expect factors such as gender, age, and the presence of glasses to be detected simply from a linear classifier, i.e., a shallow neural network. The kinds of factors that can be chosen almost independently in order to generate data are more likely to be very high-level and related in highly non-linear ways to the input. This demands deep distributed representations, where the higher level features (seen as functions of the input) or factors (seen as generative causes) are obtained through the composition of many non-linearities. It turns out that organizing computation through the composition of many non-linearities and a hierarchy of reused features can give another exponential boost to statistical efficiency. Although 2-layer networks (e.g., with saturating non-linearities, boolean gates, sum/products, or RBF units) can generally be shown to be universal approximators (in the sense that, with enough hidden units, they can approximate a large class of functions, e.g., continuous functions, up to some given tolerance level), the required number of hidden units may be very large. The main results on the expressive power of deep architectures state that there are families of functions that can be represented efficiently with a deep architecture (say depth k) but would require an exponential number of components (with respect to the input size) with insufficient depth (depth 2 or depth k − 1). More precisely, a feedforward neural network with a single hidden layer is a universal approximator (of Borel measurable functions) (Hornik et al., 1989; Cybenko, 1989). Other works have investigated universal approximation of probability distributions by deep belief networks (Le Roux and Bengio, 2010; Montúfar and Ay, 2011), as well as their approximation properties (Montúfar, 2014; Krause et al., 2013). Regarding the advantage of depth, early theoretical results have focused on circuit operations (neural net unit computations) that are substantially different from those being used in real state-of-the-art deep learning applications, such as logic gates (Håstad, 1986) and linear threshold units with non-negative weights (Håstad and Goldmann, 1991). More recently, Delalleau and Bengio
Figure 16.12: A sum-product network (Poon and Domingos, 2011) composes summing units and product units, so that each node computes a polynomial. Consider the product node computing x2x3: its value is reused in its two immediate children, and indirectly incorporated in its grand-children. In particular, in the top node shown, the product x2x3 would arise 4 times if that node’s polynomial was expanded as a sum of products. That number could double for each additional layer. In general a deep sum-product network can represent polynomials with a number of min-terms that is exponential in depth, and some families of polynomials are represented efficiently with a deep sum-product network but are not efficiently representable with a simple sum of products, i.e., a 2-layer network (Delalleau and Bengio, 2011).
(2011) showed that a shallow network requires exponentially many more sum-product hidden units (where a single sum-product hidden layer summarizes a layer of product units followed by a layer of sum units) than a deep sum-product network (Poon and Domingos, 2011) in order to compute certain families of polynomials. Figure 16.12 illustrates a sum-product network for representing polynomials, and how a deeper network can be exponentially more efficient because the same computation can be reused exponentially (in depth) many times. Note however that Martens and Medabalimi (2014) showed that sum-product networks may have limitations in their expressive power, in the sense that there are distributions that can easily be represented by other generative models but that cannot be efficiently represented under the decomposability and completeness conditions associated with the probabilistic interpretation of sum-product networks (Poon and Domingos, 2011). Closer to the kinds of deep networks actually used in practice, Pascanu et al. (2014a) and Montufar et al. (2014) showed that piecewise linear networks (e.g., obtained from rectifier non-linearities or maxout units) could represent functions
Figure 16.13: An absolute value rectification unit has the same output for every pair of mirror points in its input. The mirror axis of symmetry is given by the hyperplane defined by the weights and bias of the unit. If one considers a function computed on top of that unit (the green decision surface), it will be formed of a mirror image of a simpler pattern, across that axis of symmetry. The middle image shows how it can be obtained by folding the space around that axis of symmetry, and the right image shows how another repeating pattern can be folded on top of it (by another downstream unit) to obtain another symmetry (which is now repeated four times, with two hidden layers). This is an intuitive explanation of the exponential advantage of deeper rectifier networks formally shown in Pascanu et al. (2014a); Montufar et al. (2014).
with exponentially more piecewise-linear regions, as a function of depth, compared to shallow neural networks. Figure 16.13 illustrates how a network with absolute value rectification creates mirror images of the function computed on top of some hidden unit, with respect to the input of that hidden unit. Each hidden unit specifies where to fold the input space in order to create mirror responses (on both sides of the absolute value non-linearity). By composing these folding operations, we obtain an exponentially large number of piecewise linear regions which can capture all kinds of regular (e.g., repeating) patterns. More precisely, the main theorem in Montufar et al. (2014) states that the number of linear regions carved out by a deep rectifier network with d inputs, depth L, and n units per hidden layer, is
$$O\left( \binom{n}{d}^{d(L-1)} n^{d} \right),$$
i.e., exponential in the depth L. In the case of maxout networks with k filters per unit, the number of linear regions is
$$O\left( k^{(L-1)+d} \right).$$
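One can get an empirical feel for this growth with a rough sketch (an illustration, not from the text): count the distinct on/off patterns of the ReLU units reached by random probe points, since each distinct activation pattern corresponds to a different linear region of the network.

```python
import numpy as np

rng = np.random.default_rng(0)

def count_activation_patterns(depth, width, d=2, n_probes=100000):
    """Count distinct ReLU on/off patterns over random probe inputs; this lower-bounds
    the number of linear regions the network carves out of the input space."""
    layers = []
    fan_in = d
    for _ in range(depth):
        layers.append((rng.normal(size=(fan_in, width)), rng.normal(size=width)))
        fan_in = width
    x = rng.uniform(-1.0, 1.0, size=(n_probes, d))
    h, patterns = x, []
    for W, b in layers:
        pre = h @ W + b
        patterns.append(pre > 0)          # which units are active in this layer
        h = np.maximum(pre, 0)
    codes = np.concatenate(patterns, axis=1)
    return len(np.unique(codes, axis=0))

for depth in (1, 2, 3):
    print(depth, count_activation_patterns(depth, width=8))
```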
16.8 Priors Regarding The Underlying Factors
To close this chapter, we come back to the original question: what is a good representation? We proposed that an ideal representation is one that disentangles
the underlying causal factors of variation that generated the data, especially those factors that we care about in our applications. It seems clear that if we have direct clues about these factors (like if a factor y = hi, a label, is observed at the same time as an input x), then this can help the learner separate these observed factors from the others. This is already what supervised learning does. But in general, we may have a lot more unlabeled data than labeled data: can we use other clues, other hints about the underlying factors, in order to disentangle them more easily? What we propose here is that indeed we can provide all kinds of broad priors which are as many hints that can help the learner discover, identify and disentangle these factors. The list of such priors is clearly not exhaustive, but it is a starting point, and yet most learning algorithms in the machine learning literature only exploit a small subset of these priors. With absolutely no priors, we know that it is not possible to generalize: this is the essence of the no-free-lunch theorem for machine learning. In the space of all functions, which is huge, with any finite training set, there is no general-purpose learning recipe that would dominate all other learning algorithms. Whereas some assumptions are required, when our goal is to build AI or understand human intelligence, it is tempting to focus our attention on the most general and broad priors, that are relevant for most of the tasks that humans are able to successfully learn. This list was introduced in section 3.1 of Bengio et al. (2013c).

• Smoothness: we want to learn functions f s.t. x ≈ y generally implies f(x) ≈ f(y). This is the most basic prior and is present in most machine learning, but it is insufficient to get around the curse of dimensionality, as discussed above and in Bengio et al. (2013c).

• Multiple explanatory factors: the data generating distribution is generated by different underlying factors, and for the most part what one learns about one factor generalizes in many configurations of the other factors. This assumption is behind the idea of distributed representations, discussed in Section 16.5 above.

• Depth, or a hierarchical organization of explanatory factors: the concepts that are useful at describing the world around us can be defined in terms of other concepts, in a hierarchy, with more abstract concepts higher in the hierarchy, being defined in terms of less abstract ones. This is the assumption exploited by having deep representations.

• Causal factors: the input variables x are consequences, effects, while the explanatory factors are causes, and not vice-versa. As discussed above, this enables the semi-supervised learning assumption, i.e., that P (x) is tied
to P (y | x), making it possible to improve the learning of P (y | x) via the learning of P (x). More precisely, this entails that representations that are useful for P (x) are useful when learning P (y | x), allowing sharing of statistical strength between the unsupervised and supervised learning tasks.

• Shared factors across tasks: in the context where we have many tasks, corresponding to different yi’s sharing the same input x or where each task is associated with a subset or a function fi(x) of a global input x, the assumption is that each yi is associated with a different subset from a common pool of relevant factors h. Because these subsets overlap, learning all the P (yi | x) via a shared intermediate representation P (h | x) allows sharing of statistical strength between the tasks.

• Manifolds: probability mass concentrates, and the regions in which it concentrates are locally connected and occupy a tiny volume. In the continuous case, these regions can be approximated by low-dimensional manifolds with a much smaller dimensionality than the original space where the data lives. This is the manifold hypothesis and is covered in Chapter 17, especially with algorithms related to auto-encoders.

• Natural clustering: different values of categorical variables such as object classes (it is often the case that the y of interest is a category) are associated with separate manifolds. More precisely, the local variations on the manifold tend to preserve the value of a category, and a linear interpolation between examples of different classes in general involves going through a low density region, i.e., P (x | y = i) for different i tend to be well separated and not overlap much. For example, this is exploited explicitly in the Manifold Tangent Classifier discussed in Section 17.5. This hypothesis is consistent with the idea that humans have named categories and classes because of such statistical structure (discovered by their brain and propagated by their culture), and machine learning tasks often involve predicting such categorical variables.

• Temporal and spatial coherence: this is similar to the cluster assumption but concerns sequences or tuples of observations; consecutive or spatially nearby observations tend to be associated with the same value of relevant categorical concepts, or result in a small move on the surface of the high-density manifold. More generally, different factors change at different temporal and spatial scales, and many categorical concepts of interest change slowly. When attempting to capture such categorical variables, this prior can be enforced by making the associated representations slowly
changing, i.e., penalizing changes in values over time or space. This prior was introduced in Becker and Hinton (1992).

• Sparsity: for any given observation x, only a small fraction of the possible factors are relevant. In terms of representation, this could be represented by features that are often zero (as initially proposed by Olshausen and Field (1996)), or by the fact that most of the extracted features are insensitive to small variations of x. This can be achieved with certain forms of priors on latent variables (peaked at 0), or by using a non-linearity whose value is often flat at 0 (i.e., 0 and with a 0 derivative), or simply by penalizing the magnitude of the Jacobian matrix (of derivatives) of the function mapping input to representation. This is discussed in Section 15.8.

• Simplicity of Factor Dependencies: in good high-level representations, the factors are related to each other through simple dependencies. The simplest possible is marginal independence, $P(h) = \prod_i P(h_i)$, but linear dependencies or those captured by a shallow auto-encoder are also reasonable assumptions. This can be seen in many laws of physics, and is assumed when plugging a linear predictor or a factorized prior on top of a learned representation.
Chapter 17
The Manifold Perspective on Representation Learning

Manifold learning is an approach to machine learning that capitalizes on the manifold hypothesis (Cayton, 2005; Narayanan and Mitter, 2010): the data generating distribution is assumed to concentrate near regions of low dimensionality. The notion of manifold in mathematics refers to continuous spaces that locally resemble Euclidean space, and the term we should be using is really submanifold, which corresponds to a subset which has a manifold structure. The use of the term manifold in machine learning is much looser than its use in mathematics, though:

• the data may not be strictly on the manifold, but only near it,

• the dimensionality may not be the same everywhere,

• the notion actually referred to in machine learning naturally extends to discrete spaces.

Indeed, although the very notions of a manifold or submanifold are defined for continuous spaces, the more general notion of probability concentration applies equally well to discrete data. It is a kind of informal prior assumption about the data generating distribution that seems particularly well-suited for AI tasks such as those involving images, video, speech, music, text, etc. In all of these cases the natural data has the property that randomly choosing configurations of the observed variables according to a factored distribution (e.g., uniformly) is very unlikely to generate the kind of observations we want to model. What is the probability of generating a natural looking image by choosing pixel intensities independently of each other? What is the probability of generating a meaningful natural language paragraph by independently choosing each character in a
Figure 17.1: Top: data sampled from a distribution in a high-dimensional space (only 2 dimensions shown for illustration) that is actually concentrated near a one-dimensional manifold, which here is like a twisted string. Bottom: the underlying manifold that the learner should infer.
string? Doing a thought experiment should give a clear answer: an exponentially tiny probability. This is because the probability distribution of interest concentrates in a tiny volume of the total space of configurations. That means that, to the first degree, the problem of characterizing the data generating distribution can be reduced to a binary classification problem: is this configuration probable or not? Is this a grammatically and semantically plausible sentence in English? Is this a natural-looking image? Answering these questions tells us much more about the nature of natural language or text than the additional information one would have by being able to assign a precise probability to each possible sequence of characters or set of pixels. Hence, simply characterizing where probability concentrates is of fundamental importance, and this is what manifold learning algorithms attempt to do. Because it is a where question, it is more about geometry than about probability distributions, although we find both views useful
when designing learning algorithms for AI tasks.
Figure 17.2: A two-dimensional manifold near which training examples are concentrated, along with a tangent plane and its associated tangent directions, forming a basis that specifies the directions of small moves one can make to stay on the manifold.
Figure 17.3: Illustration of tangent vectors of the manifold estimated by a contractive auto-encoder (CAE), at some input point (top left, image of a zero). Each image on the top right corresponds to a tangent vector. They are obtained by picking the dominant singular vectors (with largest singular value) of the Jacobian ∂f(x)/∂x (see Section 15.10). Taking the original image plus a small quantity of any of these tangent vectors yields another plausible image, as illustrated in the bottom. The leading tangent vectors seem to correspond to small deformations, such as translation, or shifting ink around locally in the original image. Reproduced with permission from the authors of Rifai et al. (2011a).
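A sketch of how such tangent directions can be extracted from any trained encoder f (an illustration only: a finite-difference Jacobian is used here for simplicity, whereas the CAE of Rifai et al. computes it analytically):

```python
import numpy as np

def leading_tangent_directions(encoder, x, n_dirs=4, eps=1e-4):
    """Estimate the Jacobian of `encoder` at x by finite differences, then return the
    right singular vectors with largest singular values: the input-space directions to
    which the learned representation is most sensitive, i.e., estimated tangent directions."""
    h0 = encoder(x)
    jac = np.zeros((h0.size, x.size))
    for i in range(x.size):
        x_pert = x.copy()
        x_pert[i] += eps
        jac[:, i] = (encoder(x_pert) - h0) / eps
    _, _, vt = np.linalg.svd(jac, full_matrices=False)
    return vt[:n_dirs]       # each row is a tangent direction in input space

# Hypothetical usage with any encoder mapping a flattened image to a code vector:
# tangents = leading_tangent_directions(my_encoder, image.reshape(-1))
```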
In addition to the property of probability concentration, there is another one
that characterizes the manifold hypothesis: when a configuration is probable it is generally surrounded (at least in some directions) by other probable configurations. If a configuration of pixels looks like a natural image, then there are tiny changes one can make to the image (like translating everything by 0.1 pixel to the left) which yield another natural-looking image. The number of independent ways (each characterized by a number indicating how much or whether we do it) by which a probable configuration can be locally transformed into another probable configuration indicates the local dimension of the manifold. Whereas maximum likelihood procedures tend to concentrate probability mass on the training examples (which can each become a local maximum of probability when the model overfits), the manifold hypothesis suggests that good solutions instead concentrate probability along ridges of high probability (or their high-dimensional generalization) that connect nearby examples to each other. This is illustrated in Figure 17.1. What is most commonly learned to characterize a manifold is a representation of the data points on (or near, i.e., projected on) the manifold. Such a representation for a particular example is also called its embedding. It is typically given by a low-dimensional vector, with fewer dimensions than the “ambient” space of which the manifold is a low-dimensional subset. Some algorithms (non-parametric manifold learning algorithms, discussed below) directly learn an embedding for each training example, while others learn a more general mapping, sometimes called an encoder, or representation function, that maps any point in the ambient space (the input space) to its embedding. Another important characterization of a manifold is the set of its tangent planes. At a point x on a d-dimensional manifold, the tangent plane is given by d basis vectors that span the local directions of variation allowed on the manifold. As illustrated in Figure 17.2, these local directions specify how one can change x infinitesimally while staying on the manifold. Manifold learning has mostly focused on unsupervised learning procedures that attempt to capture these manifolds. Most of the initial machine learning research on learning non-linear manifolds has focused on non-parametric methods based on the nearest-neighbor graph. This graph has one node per training example and edges connecting near neighbors. Basically, these methods (Schölkopf et al., 1998; Roweis and Saul, 2000; Tenenbaum et al., 2000; Brand, 2003; Belkin and Niyogi, 2003; Donoho and Grimes, 2003; Weinberger and Saul, 2004; Hinton and Roweis, 2003; van der Maaten and Hinton, 2008a) associate each of these nodes with a tangent plane that spans the directions of variations associated with the difference vectors between the example and its neighbors, as illustrated in Figure 17.4. A global coordinate system can then be obtained through an optimization or
Figure 17.4: Non-parametric manifold learning procedures build a nearest neighbor graph whose nodes are training examples and arcs connect nearest neighbors. Various procedures can thus obtain the tangent plane associated with a neighborhood of the graph, and a coordinate system that associates each training example with a real-valued vector position, or embedding. It is possible to generalize such a representation to new examples by a form of interpolation. So long as the number of examples is large enough to cover the curvature and twists of the manifold, these approaches work well. Images from the QMUL Multiview Face Dataset (Gong et al., 2000).
solving a linear system. Figure 17.5 illustrates how a manifold can be tiled by a large number of locally linear Gaussian-like patches (or “pancakes”, because the Gaussians are flat in the tangent directions).
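Such nearest-neighbor-graph methods are available off the shelf; for instance, the usage sketch below (with a random placeholder array standing in for a set of flattened face images) assumes scikit-learn's Isomap:

```python
import numpy as np
from sklearn.manifold import Isomap

# Placeholder data for the sketch: an array of shape (n_examples, n_pixels).
faces = np.random.rand(500, 1024)

# Build the nearest-neighbor graph and solve for a 2-D embedding of every training example.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(faces)
print(embedding.shape)   # (500, 2): one low-dimensional coordinate vector per example
```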
Figure 17.5: If the tangent plane at each location is known, then they can be tiled to form a global coordinate system or a density function. In the figure, each local patch can be thought of as a local Euclidean coordinate system or as a locally flat Gaussian, or “pancake”, with a very small variance in the directions orthogonal to the pancake and a very large variance in the directions defining the coordinate system on the pancake. The average of all these Gaussians would provide an estimated density function, as in the Manifold Parzen algorithm (Vincent and Bengio, 2003) or its non-local neural-net based variant (Bengio et al., 2006b).
Figure 17.6: When the data are images, the tangent vectors can also be visualized like images. Here we show the tangent vector associated with translation: it corresponds to the difference between an image and a slightly translated version. This basically extracts
However, there is a fundamental difficulty with such non-parametric neighborhood-based approaches to manifold learning, raised in Bengio and Monperrus (2005): if the manifolds are not very smooth (they have many ups and downs and twists), one may need a very large number of training examples to cover each one of these variations, with no chance to generalize to unseen variations. Indeed, these methods can only generalize the shape of the manifold by interpolating between neighboring examples. Unfortunately, the manifolds of interest in AI have many ups and downs and twists and strong curvature, as illustrated in Figure 17.6. This motivates the use of distributed representations and deep learning for capturing manifold structure, which is the subject of this chapter.
Figure 17.7: Training examples of a face dataset – the QMUL Multiview Face Dataset (Gong et al., 2000) – for which the subjects were asked to move in such a way as to cover the two-dimensional manifold corresponding to two angles of rotation. We would like learning algorithms to be able to discover and disentangle such factors. Figure 17.8 illustrates such a feat.
The hope of many manifold learning algorithms, including those based on deep learning and auto-encoders, is that one learns an explicit or implicit coordinate system for the leading factors of variation that explain most of the structure in the unknown data generating distribution. An example of explicit coordinate system is one where the dimensions of the representation (e.g., the outputs of the encoder, i.e., of the hidden units that compute the “code” associated with the input) are directly the coordinates that map the unknown manifold. Training examples of a face dataset in which the images have been arranged visually on a 2-D manifold are shown in Figure 17.7, with the images laid down so that each of the two axes corresponds to one of the two angles of rotation of the face. However, the objective is to discover such manifolds, and Figure 17.8 illustrates the images generated by a variational auto-encoder (Kingma and Welling, 2014a) when the two-dimensional auto-encoder code (representation) is varied 467
on the 2-D plane. Note how the algorithm actually discovered two independent factors of variation: angle of rotation and emotional expression. Another kind of interesting illustration of manifold learning involves the discovery of distributed representations for words. Neural language models were initiated with the work of Bengio et al. (2001c, 2003b), in which a neural network is trained to predict the next word in a sequence of natural language text, given the previous words, and where each word is represented by a real-valued vector, called embedding or neural word embedding.
Figure 17.8: Two-dimensional representation space (for easier visualization), i.e., a Euclidean coordinate system for Frey faces (left) and MNIST digits (right), learned by a variational auto-encoder (Kingma and Welling, 2014a). Figures reproduced with permission from the authors. The images shown are not examples from the training set but images x actually generated by the model P (x | h), simply by changing the 2-D “code” h (each image corresponds to a different choice of “code” h on a 2-D uniform grid). On the left, one dimension that has been discovered (horizontal) mostly corresponds to a rotation of the face, while the other (vertical) corresponds to the emotional expression. The decoder deterministically maps codes (here two numbers) to images. The encoder maps images to codes (and adds noise, during training).
Figure 17.9 shows such neural word embeddings reduced to two dimensions (originally 50 or 100) using the t-SNE non-linear dimensionality reduction algorithm (van der Maaten and Hinton, 2008a). The figures zoom into different areas of the word-space and illustrate that words that are semantically and syntactically close end up having nearby embeddings.
Figure 17.9: Two-dimensional representation space (for easier visualization), of English words, learned by a neural language model as in Bengio et al. (2001c, 2003b), with t-SNE for the non-linear dimensionality reduction from 100 to 2. Different regions are zoomed to better see the details. At the global level one can identify big clusters corresponding to part-of-speech, while locally one sees mostly semantic similarity explaining the neighborhood structure.
17.1 Manifold Interpretation of PCA and Linear Auto-Encoders
The above view of probabilistic PCA as a thin “pancake” of high probability is related to the manifold interpretation of PCA and linear auto-encoders, in which we are looking for projections of x into a subspace that preserves as much information as possible about x. This is illustrated in Figure 17.10. Let the encoder be

$$h = f(x) = W^\top (x - \mu)$$
computing such a projection, i.e., a low-dimensional representation h. With the auto-encoder view, we have a decoder computing the reconstruction

$$\hat{x} = g(h) = b + V h.$$
Figure 17.10: Flat Gaussian capturing probability concentration near a low-dimensional manifold. The figure shows the upper half of the “pancake” above the “manifold plane” which goes through its middle. The variance in the direction orthogonal to the manifold is very small (upward red arrow) and can be considered like “noise”, where the other variances are large (larger red arrows) and correspond to “signal”, and a coordinate system for the reduced-dimension data.
It turns out that the choices of linear encoder and decoder that minimize reconstruction error E[||x − x̂||²] correspond to V = W, µ = b = E[x], and the rows of W form an orthonormal basis which spans the same subspace as the principal eigenvectors of the covariance matrix C = E[(x − µ)(x − µ)^⊤]. In the case of PCA, the rows of W are these eigenvectors, ordered by the magnitude of the corresponding eigenvalues (which are all real and non-negative). This is illustrated in Figure 17.11.
Figure 17.11: Manifold view of PCA and linear auto-encoders. The data distribution is concentrated near a manifold aligned with the leading eigenvectors (here, this is just v1 ) of the data covariance matrix. The other eigenvectors (here, just v2 ) are orthogonal to the manifold. A data point (in red, x) is encoded into a lower-dimensional representation or code h (here the scalar which indicates the position on the manifold, starting from h = 0). The decoder (transpose of the encoder) maps h to the data space, and corresponds to a point lying exactly on the manifold (green cross), the orthogonal projection of x on the manifold. The optimal encoder and decoder minimize the sum of reconstruction errors (difference vector between x and its reconstruction).
One can also show that eigenvalue λ_i of C corresponds to the variance of x in the direction of eigenvector v_i. If x ∈ R^D and h ∈ R^d with d < D, then the optimal reconstruction error (choosing µ, b, V and W as above) is

$$\min \mathbb{E}\left[\|x - \hat{x}\|^2\right] = \sum_{i=d+1}^{D} \lambda_i.$$

Hence, if the covariance has rank d, the eigenvalues λ_{d+1} to λ_D are 0 and the reconstruction error is 0. Furthermore, one can also show that the above solution can be obtained by maximizing the variances of the elements of h, under orthonormal W, instead of minimizing reconstruction error.
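The correspondence between PCA and the optimal linear auto-encoder can be checked numerically. The sketch below is our own illustration (not from the text): it builds the encoder/decoder from the top d eigenvectors of the covariance and verifies that the mean squared reconstruction error equals the sum of the discarded eigenvalues.

```python
import numpy as np

rng = np.random.RandomState(0)
D, d, n = 5, 2, 10000

# Synthetic data with an arbitrary covariance structure.
A = rng.randn(D, D)
X = rng.randn(n, D) @ A.T
mu = X.mean(axis=0)

# Eigendecomposition of the empirical covariance, sorted by decreasing eigenvalue.
C = np.cov(X, rowvar=False, bias=True)
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Linear auto-encoder: encoder h = W^T (x - mu), decoder x_hat = b + V h,
# with V = W = top-d eigenvectors and b = mu.
W = eigvecs[:, :d]              # (D, d)
H = (X - mu) @ W                # codes
X_hat = mu + H @ W.T            # reconstructions

recon_error = np.mean(np.sum((X - X_hat) ** 2, axis=1))
print("empirical reconstruction error:", recon_error)
print("sum of discarded eigenvalues:  ", eigvals[d:].sum())  # should match closely
```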
17.2 Manifold Interpretation of Sparse Coding
Sparse coding was introduced in Section 15.6.2 as a linear factors generative model. It also has an interesting manifold learning interpretation. The codes h inferred with the above equation do not fill the space in which h lives. Instead, probability mass is concentrated on axis-aligned subspaces: sets of values of h for which most of the axes are set at 0. We can thus decompose h into two pieces of information:

• A binary pattern β which specifies which h_i are non-zero, with N_a = Σ_i β_i the number of “active” (non-zero) dimensions.

• A variable-length real-valued vector α ∈ R^{N_a} which specifies the coordinates for each of the active dimensions.

The pattern β can be viewed as specifying an N_a-dimensional region in input space (the set of x = W h + b where h_i = 0 if β_i = 0). That region is actually a linear manifold, an N_a-dimensional hyperplane. All those hyperplanes go through a “center” x = b. The vector α then specifies a Euclidean coordinate on that hyperplane. Because the prior P(h) is concentrated around 0, the probability mass of P(x) is concentrated on the regions of these hyperplanes near x = b. Depending on the amount of reconstruction error (output variance for P(x | g(h))), there is also probability mass bleeding around these hyperplanes and making them look more like pancakes. Each of these hyperplane-aligned manifolds and the associated distribution is just like the ones we associate to probabilistic PCA and factor analysis. The crucial difference is that instead of one hyperplane, we have 2^d hyperplanes if h ∈ R^d. Due to the sparsity prior, however, most of these flat Gaussians are unlikely: only the ones corresponding to a small N_a (with only a few of the axes being active) are likely. For example, if we were to restrict ourselves to only those values of β for which N_a = k, then one would have $\binom{d}{k}$ Gaussians. With this exponentially large number of Gaussians, the interesting thing to observe is that the sparse coding model only has a number of parameters linear in the number of dimensions of h. This property is shared with other distributed representation learning algorithms described in this chapter, such as the regularized auto-encoders.
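As a small illustration of this decomposition (our own sketch; the weight matrix and code values are arbitrary), the snippet below splits a sparse code h into its activity pattern β and coordinates α, checks that the reconstruction lies on the hyperplane spanned by the active columns of W, and counts the hyperplanes a d-dimensional code can index.

```python
import numpy as np
from math import comb

d = 10
h = np.zeros(d)
h[[2, 7]] = [0.5, -1.3]           # a sparse code with two active dimensions

beta = (h != 0)                    # binary activity pattern, selects a hyperplane
alpha = h[beta]                    # coordinates on that hyperplane
N_a = int(beta.sum())

W = np.random.RandomState(0).randn(20, d)
b = np.zeros(20)
# The reconstruction lies on the N_a-dimensional hyperplane spanned by the
# active columns of W, passing through the "center" x = b.
x_hat = W @ h + b
assert np.allclose(x_hat, W[:, beta] @ alpha + b)

print("active pattern beta:", beta.astype(int))
print("number of hyperplanes overall: 2^d =", 2 ** d)
print("hyperplanes with exactly N_a =", N_a, "active axes:", comb(d, N_a))
```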
17.3 The Entropy Bias from Maximum Likelihood
TODO: how the log-likelihood criterion forces a learner that is not able to generalize perfectly to yield an estimator that is much smoother than the target distribution. Phrase it in terms of entropy, not smoothness.
17.4 Manifold Learning via Regularized Auto-Encoders
Auto-encoders have been described in Section 15. What is their connection to manifold learning? This is what we discuss here. We denote f the encoder function, with h = f(x) the representation of x, and g the decoding function, with x̂ = g(h) the reconstruction of x, although in some cases the encoder is a conditional distribution q(h | x) and the decoder is a conditional distribution P(x | h). What all auto-encoders have in common, when they are prevented from simply learning the identity function for all possible input x, is that training them involves a compromise between two “forces”:

1. Learning a representation h of training examples x such that x can be approximately recovered from h through a decoder. Note that this need not be true for any x, only for those that are probable under the data generating distribution.

2. Some constraint or regularization is imposed, either on the code h or on the composition of the encoder/decoder, so as to make the transformed data somehow simpler or to prevent the auto-encoder from achieving perfect reconstruction everywhere. We can think of these constraints or regularization as a preference for solutions in which the representation is as simple as possible, e.g., factorized or as constant as possible, in as many directions as possible. In the case of the bottleneck auto-encoder a fixed number of representation dimensions is allowed, that is smaller than the dimension of x. In the case of sparse auto-encoders (Section 15.8) the representation elements h_i are pushed towards 0. In the case of denoising auto-encoders (Section 15.9), the encoder/decoder function is encouraged to be contractive (have small derivatives). In the case of the contractive auto-encoder (Section 15.10), the encoder function alone is encouraged to be contractive, while the decoder function is tied (by symmetric weights) to the encoder function. In the case of the variational auto-encoder (Section 20.9.3), a prior log P(h) is imposed on h to make its distribution factorize and concentrate as much as possible. Note how in the limit, for all of these cases, the regularization prefers representations that are insensitive to the input.

Clearly, the second type of force alone would not make any sense (as would any regularizer, in general). How can these two forces (reconstruction error on one hand, and “simplicity” of the representation on the other hand) be reconciled? The solution of the optimization problem is that only the variations that are needed to distinguish training examples need to be represented. If the data generating distribution concentrates near a low-dimensional manifold, this yields
Figure 17.12: A regularized auto-encoder or a bottleneck auto-encoder has to reconcile two forces: reconstruction error (which forces it to keep enough information to distinguish training examples from each other), and a regularizer or constraint that aims at reducing its representational ability, to make it as insensitive as possible to the input in as many directions as possible. The solution is for the learned representation to be sensitive to changes along the manifold (green arrow going to the right, tangent to the manifold) but invariant to changes orthogonal to the manifold (blue arrow going down). This yields a contraction of the representation in the directions orthogonal to the manifold.
representations that implicitly capture a local coordinate for this manifold: only the variations tangent to the manifold around x need to correspond to changes in h = f (x). Hence the encoder learns a mapping from the embedding space x to a representation space, a mapping that is only sensitive to changes along the manifold directions, but that is insensitive to changes orthogonal to the manifold. This idea is illustrated in Figure 17.12. A one-dimensional example is illustrated in Figure 17.13, showing that by making the auto-encoder contractive around the data points (and the reconstruction point towards the nearest data point), we recover the manifold structure (of a set of 0-dimensional manifolds in a 1-dimensional embedding space, in the figure).
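A minimal sketch of this contractive pressure, assuming a one-layer sigmoid encoder h = σ(Wx + b) (our own example, not the exact objective of any particular auto-encoder variant): the penalty is the squared Frobenius norm of the Jacobian ∂h/∂x, which has a simple closed form for this encoder.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def encoder(x, W, b):
    return sigmoid(W @ x + b)

def contractive_penalty(x, W, b):
    """Squared Frobenius norm of the encoder Jacobian dh/dx at x.

    For h = sigmoid(Wx + b), the Jacobian is diag(h * (1 - h)) @ W, so the
    penalty has the closed form sum_i (h_i (1 - h_i))^2 * ||W_i||^2.
    """
    h = encoder(x, W, b)
    return np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))

rng = np.random.RandomState(0)
W = rng.randn(4, 10) * 0.1
b = np.zeros(4)
x = rng.randn(10)

# Training would minimize reconstruction_error(x) + lambda * contractive_penalty(x),
# pushing the representation to be flat except along the data manifold.
print("contractive penalty at x:", contractive_penalty(x, W, b))
```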
17.5 Tangent Distance, Tangent-Prop, and Manifold Tangent Classifier
One of the early attempts to take advantage of the manifold hypothesis is the Tangent Distance algorithm (Simard et al., 1993, 1998). It is a non-parametric nearest-neighbor algorithm in which the metric used is not the generic Euclidean distance but one that is derived from knowledge of the manifolds near which probability concentrates. It is assumed that we are trying to classify examples
Figure 17.13: If the auto-encoder learns to be contractive around the data points, with the reconstruction pointing towards the nearest data points, it captures the manifold structure of the data. This is a 1-dimensional version of Figure 17.12. The denoising auto-encoder explicitly tries to make the derivative of the reconstruction function r(x) small around the data points. The contractive auto-encoder does the same thing for the encoder. Although the derivative of r(x) is asked to be small around the data points, it can be large between the data points (e.g. in the regions between manifolds), and it has to be large there so as to reconcile reconstruction error (r(x) ≈ x for data points x) and contraction (small derivatives of r(x) near data points).
and that examples on the same manifold share the same category. Since the classifier should be invariant to the local factors of variation that correspond to movement on the manifold, it would make sense to use as nearest-neighbor distance between points x₁ and x₂ the distance between the manifolds M₁ and M₂ to which they respectively belong. Although that may be computationally difficult (it would require an optimization, to find the nearest pair of points on M₁ and M₂), a cheap alternative that makes sense locally is to approximate M_i by its tangent plane at x_i and measure the distance between the two tangents, or between a tangent plane and a point. That can be achieved by solving a low-dimensional linear system (in the dimension of the manifolds). Of course, this algorithm requires one to specify the tangent vectors at any point.

In a related spirit, the Tangent-Prop algorithm (Simard et al., 1992) proposes to train a neural net classifier with an extra penalty to make the output f(x) of the neural net locally invariant to known factors of variation. These factors of variation correspond to movement on the manifold near which examples of the same class concentrate. Local invariance is achieved by requiring ∂f(x)/∂x to be orthogonal to the known manifold tangent vectors v_i at x, or equivalently that
the directional derivative of f at x in the directions v_i be small:

$$\text{regularizer} = \lambda \sum_i \left( \frac{\partial f(x)}{\partial x} \cdot v_i \right)^2. \qquad (17.1)$$
Like for tangent distance, the tangent vectors are derived a priori, e.g., from the formal knowledge of the effect of transformations such as translation, rotation, and scaling in images. Tangent-Prop has been used not just for supervised learning (Simard et al., 1992) but also in the context of reinforcement learning (Thrun, 1995).
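A rough sketch of the penalty in Eq. 17.1 (our own illustration; the classifier and the tangent vectors below are hypothetical stand-ins for ones derived from prior knowledge), using finite differences to estimate the directional derivatives so the example stays framework-agnostic:

```python
import numpy as np

def tangent_prop_penalty(f, x, tangents, lam=1.0, eps=1e-4):
    """Approximate lambda * sum_i (d f(x)/dx . v_i)^2 by finite differences.

    f: function mapping an input vector to a scalar output (e.g. one class score).
    tangents: list of tangent vectors v_i at x (assumed known a priori here).
    """
    penalty = 0.0
    for v in tangents:
        # Central finite-difference estimate of the directional derivative.
        directional = (f(x + eps * v) - f(x - eps * v)) / (2 * eps)
        penalty += directional ** 2
    return lam * penalty

# Toy classifier output and hypothetical tangent directions.
w = np.array([1.0, -2.0, 0.5])
f = lambda x: float(np.tanh(w @ x))
x = np.array([0.3, 0.1, -0.2])
tangents = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 1.0]) / np.sqrt(2)]

# This term would be added to the supervised loss during training.
print("tangent-prop penalty:", tangent_prop_penalty(f, x, tangents))
```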
Figure 17.14: Illustration of the main idea of the tangent-prop algorithm (Simard et al., 1992) and manifold tangent classifier (Rifai et al., 2011d), which both regularize the classifier output function f(x) (e.g. estimating conditional class probabilities given the input) so as to make it invariant to the local directions of variation ∂h/∂x (manifold tangent directions). This can be achieved by penalizing the magnitude of the dot product of all the rows of ∂h/∂x (the tangent directions) with all the rows of ∂f/∂x (the directions of sensitivity of each output to the input). In the case of the tangent-prop algorithm, the tangent directions are given a priori, whereas in the case of the manifold tangent classifier, they are learned, with h(x) being the learned representation of the input x. The figure illustrates two manifolds, one per class, and we see that the classifier output increases the most as we move from one manifold to the other, in input space.
A more recent paper introduces the Manifold Tangent Classifier (Rifai et al., 2011d), which eliminates the need to know the tangent vectors a priori, and
instead uses a contractive auto-encoder to estimate them at any point. As we have seen in the previous section and Figure 15.9, auto-encoders in general, and contractive auto-encoders especially well, learn a representation h that is most sensitive to the factors of variation present in the data x, so that the leading singular vectors of ∂h/∂x correspond to the estimated tangent vectors. As illustrated in Figure 15.10, these estimated tangent vectors go beyond the classical invariants that arise out of the geometry of images (such as translation, rotation and scaling) and include factors that must be learned because they are object-specific (such as adding or moving body parts). The algorithm proposed with the manifold tangent classifier is therefore simple: (1) use a regularized auto-encoder such as the contractive auto-encoder to learn the manifold structure by unsupervised learning; (2) use these tangents to regularize a neural net classifier as in Tangent-Prop (Eq. 17.1).
Chapter 18
Confronting the Partition Function

TODO– make sure the book explains asymptotic consistency somewhere, add links to it here

In Section 13.2.2 we saw that many probabilistic models (commonly known as undirected graphical models) are defined by an unnormalized probability distribution p̃(x; θ) or energy function (Section 13.2.4)

$$E(x) = -\log \tilde{p}(x). \qquad (18.1)$$
Because the analytic formulation of the model is via this energy function or unnormalized probability, the complete formulation of the probability function or probability density requires a normalization constant called the partition function Z(θ) such that

$$p(x; \theta) = \frac{1}{Z(\theta)} \tilde{p}(x; \theta)$$

is a valid, normalized probability distribution. The partition function is an integral or sum over the unnormalized probability of all states. This operation is intractable for many interesting models. As we will see in chapter 20, many deep learning models are designed to have a tractable normalizing constant, or are designed to be used in ways that do not involve computing p(x) at all. However, other models directly confront the challenge of intractable partition functions. In this chapter, we describe techniques used for training and evaluating models that have intractable partition functions.
18.1 The Log-Likelihood Gradient of Energy-Based Models
What makes learning by maximum likelihood particularly difficult is that the partition function depends on the parameters, so that the log-likelihood gradient has a term corresponding to the gradient of the partition function:

$$\frac{\partial \log p(x; \theta)}{\partial \theta} = -\frac{\partial E(x)}{\partial \theta} - \frac{\partial \log Z(\theta)}{\partial \theta}. \qquad (18.2)$$
In the case where the energy function is analytically tractable (e.g., RBMs), the difficult part is estimating the gradient of the partition function. Unsurprisingly, since computing Z itself is intractable, we find that computing its gradient is also intractable, but the good news is that it corresponds to an expectation over the model distribution, which can be estimated by Monte Carlo methods. Though the gradient of the log partition function is intractable to evaluate accurately, it is straightforward to analyze algebraically. The derivatives we need for learning are of the form $\frac{\partial}{\partial \theta} \log p(x)$ where θ is one of the parameters of p(x). These derivatives are given simply by

$$\frac{\partial}{\partial \theta} \log p(x) = \frac{\partial}{\partial \theta} \left( \log \tilde{p}(x) - \log Z \right).$$

In this chapter, we are primarily concerned with the estimation of the term on the right:

$$\frac{\partial \log Z}{\partial \theta} = \frac{\frac{\partial}{\partial \theta} Z}{Z} = \frac{\frac{\partial}{\partial \theta} \sum_x \tilde{p}(x)}{Z} = \frac{\sum_x \frac{\partial}{\partial \theta} \tilde{p}(x)}{Z}.$$

For models that guarantee p(x) > 0 for all x, we can substitute $\exp(\log \tilde{p}(x))$ for $\tilde{p}(x)$:

$$= \frac{\sum_x \frac{\partial}{\partial \theta} \exp(\log \tilde{p}(x))}{Z} = \frac{\sum_x \exp(\log \tilde{p}(x)) \frac{\partial}{\partial \theta} \log \tilde{p}(x)}{Z} = \frac{\sum_x \tilde{p}(x) \frac{\partial}{\partial \theta} \log \tilde{p}(x)}{Z} = \sum_x p(x) \frac{\partial}{\partial \theta} \log \tilde{p}(x) = \mathbb{E}_{x \sim p(x)} \frac{\partial}{\partial \theta} \log \tilde{p}(x).$$

This derivation made use of summation over discrete x, but a similar result applies using integration over continuous x. In the continuous version of the derivation, we use Leibniz’s rule for differentiation under the integral sign to obtain the identity

$$\frac{\partial}{\partial \theta} \int \tilde{p}(x) \, dx = \int \frac{\partial}{\partial \theta} \tilde{p}(x) \, dx.$$

This identity is only applicable under certain regularity conditions on p̃ and $\frac{\partial}{\partial \theta} \tilde{p}(x)$.¹ Fortunately, most machine learning models of interest have these properties. This identity

$$\frac{\partial}{\partial \theta} \log Z = \mathbb{E}_{x \sim p(x)} \frac{\partial}{\partial \theta} \log \tilde{p}(x) \qquad (18.3)$$

is the basis for a variety of Monte Carlo methods for approximately maximizing the likelihood of models with intractable partition functions. Putting this result together with Eq. 18.2, we obtain the following well-known decomposition of the gradient in terms of the gradient of the energy function on the observed x and its average over the model distribution:

$$-\frac{\partial}{\partial \theta} \log p(x; \theta) = \frac{\partial E(x)}{\partial \theta} - \mathbb{E}_{x \sim p(x)} \frac{\partial}{\partial \theta} E(x). \qquad (18.4)$$
The first term is called the positive phase contribution to the gradient and it corresponds to pushing the energy down on the “positive” examples and reinforcing the interactions that are observed between random variables when x is observed, while the second term is called the negative phase contribution to the gradient and it corresponds to pushing the energy up everywhere else, with proportionally more push where the model currently puts more probability mass. When a minimum of the negative log-likelihood is found, the two terms must of course cancel each other, and the only thing that prevents the model from putting probability mass in exactly the same way as the training distribution is that it may be regularized or have some constraints, e.g. be parametric.

¹ In measure-theoretic terms, the conditions are: (i) p̃ must be a Lebesgue-integrable function of x for every value of θ; (ii) ∂p̃(x)/∂θ must exist for all θ and almost all x; (iii) there exists an integrable function R(x) that bounds ∂p̃(x)/∂θ (i.e., such that |∂p̃(x)/∂θ| ≤ R(x) for all θ and almost all x).
18.2 Stochastic Maximum Likelihood and Contrastive Divergence
The naive way of implementing equation 18.3 is to compute it by burning in a set of Markov chains from a random initialization every time the gradient is needed. When learning is performed using stochastic gradient descent, this means the chains must be burned in once per gradient step. This approach leads to the training procedure presented in Algorithm 18.1. The high cost of burning in the Markov chains in the inner loop makes this procedure computationally infeasible, but this procedure is the starting point that other more practical algorithms aim to approximate.

Algorithm 18.1 A naive MCMC algorithm for maximizing the log likelihood with an intractable partition function using gradient ascent.

  Set ε, the step size, to a small positive number.
  Set k, the number of Gibbs steps, high enough to allow burn in. Perhaps 100 to train an RBM on a small image patch.
  while not converged do
    Sample a minibatch of m examples {x^(1), . . . , x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ)
    Initialize a set of m samples {x̃^(1), . . . , x̃^(m)} to random values (e.g., from a uniform or normal distribution, or possibly a distribution with marginals matched to the model’s marginals).
    for i = 1 to k do
      for j = 1 to m do
        x̃^(j) ← gibbs_update(x̃^(j))
      end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ)
    θ ← θ + εg
  end while

We can view the MCMC approach to maximum likelihood as trying to achieve balance between two forces, one pushing up on the model distribution where the data occurs, and another pushing down on the model distribution where the model samples occur. Fig. 18.1 illustrates this process. The two forces correspond to maximizing log p̃ and minimizing log Z. In this chapter, we assume the positive phase is tractable and may be performed exactly, but other chapters, especially chapter 19, deal with intractable positive phases. In this chapter, we present several approximations to the negative phase. Each of these approximations can
Figure 18.1: The view of Algorithm 18.1 as having a “positive phase” and “negative phase”. Left) In the positive phase, we sample points from the data distribution, and push up on their unnormalized probability. This means points that are likely in the data get pushed up on more. Right) In the negative phase, we sample points from the model distribution, and push down on their unnormalized probability. This counteracts the positive phase’s tendency to just add a large constant to the unnormalized probability everywhere. When the data distribution and the model distribution are equal, the positive phase has the same chance to push up at a point as the negative phase has to push down. At this point, there is no longer any gradient (in expectation) and training must terminate.
be understood as making the negative phase computationally cheaper but also making it push down in the wrong locations. Because the negative phase involves drawing samples from the model’s distribution, we can think of it as finding points that the model believes in strongly. Because the negative phase acts to reduce the probability of those points, they are generally considered to represent the model’s incorrect beliefs about the world. They are frequently referred to in the literature as “hallucinations” or “fantasy particles.” In fact, the negative phase has been proposed as a possible explanation for dreaming in humans and other animals (Crick and Mitchison, 1983), the idea being that the brain maintains a probabilistic model of the world and follows the gradient of log p˜ while experiencing real events while awake and follows the negative gradient of log p˜ to minimize log Z while sleeping and experiencing events sampled from the current model. This view explains much of the language used to describe algorithms with a positive and negative phase, but it has not been proven to be correct with neuroscientific experiments. In machine learning models, it is usually necessary to use the positive and negative phase simultaneously, rather than in separate time periods of wakefulness and REM sleep. As we will see in chapter 19.6, other machine learning algorithms draw samples from the model distribution for other purposes and such algorithms could also provide an account for the function of dream sleep. 482
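To make the interplay of the two phases concrete, here is a rough sketch, in the spirit of Algorithm 18.1, for a scalar toy model (our own illustration; the Gaussian energy model and the Metropolis transition below are stand-ins for a real model’s p̃ and Gibbs update).

```python
import numpy as np

def naive_mcmc_step(theta, data, grad_log_p_tilde, mcmc_update,
                    n_mcmc_steps=10, step_size=0.1, rng=None):
    """One gradient-ascent step in the style of Algorithm 18.1 (scalar toy case).

    grad_log_p_tilde(x, theta): gradient of log p_tilde(x; theta) w.r.t. theta.
    mcmc_update(x, theta, rng): one transition of a Markov chain whose
        stationary distribution is p(x; theta) (Gibbs sampling in the text).
    """
    rng = rng if rng is not None else np.random.RandomState(0)
    # Positive phase: push up log p_tilde on the data.
    g = np.mean([grad_log_p_tilde(x, theta) for x in data])
    # Negative phase: burn in chains from scratch, then push down log p_tilde
    # on the resulting (approximate) model samples.
    chains = rng.randn(len(data))
    for _ in range(n_mcmc_steps):
        chains = np.array([mcmc_update(x, theta, rng) for x in chains])
    g -= np.mean([grad_log_p_tilde(x, theta) for x in chains])
    return theta + step_size * g

# Toy energy-based model: p_tilde(x; theta) = exp(-theta * x**2 / 2), so the
# maximum likelihood solution is theta = 1 / Var(data).
grad_log_p_tilde = lambda x, theta: -0.5 * x ** 2

def mcmc_update(x, theta, rng, scale=0.5):
    # A Metropolis step targeting p(x; theta); stands in for gibbs_update.
    x_new = x + scale * rng.randn()
    log_accept = -0.5 * theta * (x_new ** 2 - x ** 2)
    return x_new if np.log(rng.rand()) < log_accept else x

rng = np.random.RandomState(0)
data = rng.randn(100)            # roughly unit variance, so theta should approach 1
theta = 0.2
for _ in range(100):
    theta = naive_mcmc_step(theta, data, grad_log_p_tilde, mcmc_update, rng=rng)
print("learned theta:", round(theta, 2), "| 1 / Var(data):", round(1 / np.var(data), 2))
```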
Given this understanding of the role of the positive and negative phase of learning, we can attempt to design a less expensive alternative to Algorithm 18.1. The main cost of the naive MCMC algorithm is the cost of burning in the Markov chains from a random initialization at each step. A natural solution is to initialize the Markov chains from a distribution that is very close to the model distribution, so that the burn in operation does not take as many steps. The contrastive divergence (CD, or CD-k to indicate CD with k Gibbs steps) algorithm initializes the Markov chain at each step with samples from the data distribution (Hinton, 2000). This approach is presented as Algorithm 18.2. Obtaining samples from the data distribution is free, because they are already available in the data set. Initially, the data distribution is not close to the model distribution, so the negative phase is not very accurate. Fortunately, the positive phase can still accurately increase the model’s probability of the data. After the positive phase has had some time to act, the model distribution is closer to the data distribution, and the negative phase starts to become accurate.

Algorithm 18.2 The contrastive divergence algorithm, using gradient ascent as the optimization procedure.

  Set ε, the step size, to a small positive number.
  Set k, the number of Gibbs steps, high enough to allow a Markov chain of p(x; θ) to mix when initialized from p_data. Perhaps 1–20 to train an RBM on a small image patch.
  while not converged do
    Sample a minibatch of m examples {x^(1), . . . , x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ)
    for i = 1 to m do
      x̃^(i) ← x^(i)
    end for
    for i = 1 to k do
      for j = 1 to m do
        x̃^(j) ← gibbs_update(x̃^(j))
      end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ)
    θ ← θ + εg
  end while

Of course, CD is still an approximation to the correct negative phase. The main way that CD qualitatively fails to implement the correct negative phase is that it fails to suppress “spurious modes” — regions of high probability that are far from actual training examples. Fig. 18.2 illustrates why this happens.
Essentially, it is because modes in the model distribution that are far from the data distribution will not be visited by Markov chains initialized at training points, unless k is very large. Carreira-Perpi˜nan and Hinton (2005) showed experimentally that the CD estimator is biased for RBMs and fully visible Boltzmann machines, in that it converges to different points than the maximum likelihood estimator. They argue that because the bias is small, CD could be used as an inexpensive way to initialize a model that could later be fine-tuned via more expensive MCMC methods. Bengio and Delalleau (2009) showed that CD can be interpreted as discarding the smallest terms of the correct MCMC update gradient, which explains the bias. CD is useful for training shallow models like RBMs. These can in turn be stacked to initialize deeper models like DBNs or DBMs. However, CD does not provide much help for training deeper models directly. This is because it is difficult to obtain samples of the hidden units given samples of the visible units. Since the hidden units are not included in the data, initializing from training points cannot solve the problem. Even if we initialize the visible units from the data, we will still need to burn in a Markov chain sampling from the distribution over the hidden units conditioned on those visible samples. Most of the approximate inference techniques described in chapter 19 for approximately marginalizing out the hidden units cannot be used to solve this problem. This is because all of the approximate marginalization methods based on giving a lower bound on p˜ would give a lower bound on log Z. We need to minimize log Z, and minimizing a lower bound is not a useful operation. The CD algorithm can be thought of as penalizing the model for having a Markov chain that changes the input rapidly when the input comes from the data. This means training with CD somewhat resembles autoencoder training. Even though CD is more biased than some of the other training methods, it can be useful for pre-training shallow models that will later be stacked. This is because the earliest models in the stack are encouraged to copy more information up to their latent variables, thereby making it available to the later models. This should be thought of more of as an often-exploitable side effect of CD training rather than a principled design advantage. Sutskever and Tieleman (2010) showed that the CD update direction is not the gradient of any function. This allows for situations where CD could cycle forever, but in practice this is not a serious problem. A different strategy that resolves many of the problems with CD is to initialize the Markov chains at each gradient step with their states from the previous gradient step. This approach was first discovered under the name stochastic maximum likelihood (SML) in the applied mathematics and statistics community (Younes, 1998) and later independently rediscovered under the name persistent contrastive 484
Figure 18.2: An illustration of how the negative phase of contrastive divergence (Algorithm 18.2) can fail to suppress spurious modes. A spurious mode is a mode that is present in the model distribution but absent in the data distribution. Because contrastive divergence initializes its Markov chains from data points and runs the Markov chain for only a few steps, it is unlikely to visit modes in the model that are far from the data points. This means that when sampling from the model, we will sometimes get samples that do not resemble the data. It also means that due to wasting some of its probability mass on these modes, the model will struggle to place high probability mass on the correct modes. Note that this figure uses a somewhat simplified concept of distance–the spurious mode is far from the correct mode along the number line in R. This corresponds to a Markov chain based on making local moves with a single x variable in R. For most deep probabilistic models, the Markov chains are based on Gibbs sampling and can make nonlocal moves of individual variables but cannot move all of the variables simultaneously. For these problems, it is usually better to consider the edit distance between modes, rather than the Euclidean distance. However, edit distance in a high dimensional space is difficult to depict in a 2-D plot.
divergence (PCD, or PCD-k to indicate the use of k Gibbs steps per update) in the deep learning community (Tieleman, 2008). See Algorithm 18.3. The basic idea of this approach is that, so long as the steps taken by the stochastic gradient algorithm are small, then the model from the previous step will be similar to the model from the current step. It follows that the samples from the previous model’s distribution will be very close to being fair samples from the current model’s distribution, so a Markov chain initialized with these samples will not require much time to mix. Because each Markov chain is continually updated throughout the learning process, rather than restarted at each gradient step, the chains are free to wander far enough to find all of the model’s modes. SML is thus considerably more resistant to forming models with spurious modes than CD is. Moreover, because it is possible to store the state of all of the sampled variables, whether visible or latent, SML provides an initialization point for both the hidden and visible units. CD is only able to provide an initialization for the visible units, and therefore requires burn-in for deep models. SML is able to train deep models efficiently. Marlin et al. (2010) compared SML to many of the other criteria presented in this chapter. They found that SML results in the best test set log likelihood for an RBM, and if the RBM’s hidden units are used as features for an SVM classifier, SML results in the best classification accuracy. SML is vulnerable to becoming inaccurate if k is too small or is too large — in other words, if the stochastic gradient algorithm can move the model faster than the Markov chain can mix between steps. There is no known way to test formally whether the chain is successfully mixing between steps. Subjectively, if the learning rate is too high for the number of Gibbs steps, the human operator will be able to observe that there is much more variance in the negative phase samples across gradient steps rather than across different Markov chains. For example, a model trained on MNIST might sample exclusively 7s on one step. The learning process will then push down strongly on the mode corresponding to 7s, and the model might sample exclusively 9s on the next step. Care must be taken when evaluating the samples from a model trained with SML. It is necessary to draw the samples starting from a fresh Markov chain initialized from a random starting point after the model is done training. The samples present in the persistent negative chains used for training have been influenced by several recent versions of the model, and thus can make the model appear to have greater capacity than it actually does. Berglund and Raiko (2013) performed experiments to examine the bias and variance in the estimate of the gradient provided by CD and SML. CD proves to have low variance than the estimator based on exact sampling. SML has higher variance. The cause of CD’s low variance is its use of the same training points 486
Algorithm 18.3 The stochastic maximum likelihood / persistent contrastive divergence algorithm using gradient ascent as the optimization procedure.

  Set ε, the step size, to a small positive number.
  Set k, the number of Gibbs steps, high enough to allow a Markov chain of p(x; θ + εg) to burn in, starting from samples from p(x; θ). Perhaps 1 for an RBM on a small image patch, or 5–50 for a more complicated model like a DBM.
  Initialize a set of m samples {x̃^(1), . . . , x̃^(m)} to random values (e.g., from a uniform or normal distribution, or possibly a distribution with marginals matched to the model’s marginals).
  while not converged do
    Sample a minibatch of m examples {x^(1), . . . , x^(m)} from the training set.
    g ← (1/m) Σ_{i=1}^m ∇_θ log p̃(x^(i); θ)
    for i = 1 to k do
      for j = 1 to m do
        x̃^(j) ← gibbs_update(x̃^(j))
      end for
    end for
    g ← g − (1/m) Σ_{i=1}^m ∇_θ log p̃(x̃^(i); θ)
    θ ← θ + εg
  end while

in both the positive and negative phase. If the negative phase is initialized from different training points, the variance rises above that of the estimator based on exact sampling.

TODO– FPCD? TODO– Rates-FPCD? TODO– mention that all these things can be coupled with enhanced samplers, which I believe are mentioned in the intro to graphical models chapter

One key benefit to the MCMC-based methods described in this section is that they provide an estimate of the gradient of log Z, and thus we can essentially decompose the problem into the log p̃ contribution and the log Z contribution. We can then use any other method to tackle log p̃(x), and just add our negative phase gradient onto the other method’s gradient. In particular, this means that our positive phase can make use of methods that provide only a lower bound on p̃. Most of the other methods of dealing with log Z presented in this chapter are incompatible with bound-based positive phase methods.
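Algorithms 18.1–18.3 differ only in how the negative-phase chains are initialized, which the sketch below isolates (our own illustration; gibbs_update and the dummy transition in the usage example are hypothetical stand-ins).

```python
import numpy as np

def negative_phase_samples(mode, data_batch, persistent_chains, gibbs_update,
                           theta, k, rng):
    """Produce negative-phase samples for one gradient step.

    mode = "naive": chains restart from noise every step (Algorithm 18.1).
    mode = "cd":    chains start at the data minibatch (Algorithm 18.2).
    mode = "pcd":   chains continue from their previous states (Algorithm 18.3).
    """
    if mode == "naive":
        chains = rng.randn(*np.shape(data_batch))
    elif mode == "cd":
        chains = np.array(data_batch, copy=True)
    elif mode == "pcd":
        chains = persistent_chains          # carried across gradient steps
    else:
        raise ValueError(mode)

    for _ in range(k):
        chains = gibbs_update(chains, theta, rng)
    return chains                            # for PCD, store these for the next step

# Minimal usage with a dummy transition that just adds noise (illustration only):
dummy_gibbs = lambda x, theta, rng: x + 0.1 * rng.randn(*np.shape(x))
rng = np.random.RandomState(0)
data = rng.randn(4, 2)
chains = negative_phase_samples("pcd", data, rng.randn(4, 2), dummy_gibbs,
                                None, k=5, rng=rng)
print(chains.shape)
```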
18.3 Pseudolikelihood
Monte Carlo approximations to the partition function and its gradient directly confront the partition function. Other approaches sidestep the issue, by training the model without computing the partition function. Most of these approaches are based on the observation that it is easy to compute ratios of probabilities in an unnormalized probabilistic model. This is because the partition function appears in both the numerator and the denominator of the ratio and cancels out:

$$\frac{p(x)}{p(y)} = \frac{\frac{1}{Z}\tilde{p}(x)}{\frac{1}{Z}\tilde{p}(y)} = \frac{\tilde{p}(x)}{\tilde{p}(y)}.$$
The pseudolikelihood is based on the observation that conditional probabilities take this ratio-based form, and thus can be computed without knowledge of the partition function. Suppose that we partition x into a, b, and c, where a contains the variables we want to find the conditional distribution over, b contains the variables we want to condition on, and c contains the variables that are not part of our query.

$$p(a \mid b) = \frac{p(a, b)}{p(b)} = \frac{p(a, b)}{\sum_{a,c} p(a, b, c)} = \frac{\tilde{p}(a, b)}{\sum_{a,c} \tilde{p}(a, b, c)}.$$

This quantity requires marginalizing out a, which can be a very efficient operation provided that a and c do not contain very many variables. In the extreme case, a can be a single variable and c can be empty, making this operation require only as many evaluations of p̃ as there are values of a single random variable. Unfortunately, in order to compute the log likelihood, we need to marginalize out large sets of variables. If there are n variables total, we must marginalize a set of size n − 1. By the chain rule of probability,

$$\log p(x) = \log p(x_1) + \log p(x_2 \mid x_1) + \cdots + \log p(x_n \mid x_{1:n-1}).$$

In this case, we have made a maximally small, but c can be as large as x_{2:n}. What if we simply move c into b to reduce the computational cost? This yields the pseudolikelihood (Besag, 1975) objective function:

$$\sum_{i=1}^{n} \log p(x_i \mid x_{-i}).$$
If each random variable has k different values, this requires only k × n evaluations of p̃ to compute, as opposed to the k^n evaluations needed to compute the partition function.
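A small sketch of the pseudolikelihood computation (our own illustration, for a toy fully visible model over binary variables): each conditional is obtained by comparing log p̃ at the possible values of a single variable, so the partition function never appears.

```python
import numpy as np

def log_p_tilde(x, J, b):
    """Unnormalized log-probability of a binary vector x under an Ising-style model."""
    return 0.5 * x @ J @ x + b @ x

def log_pseudolikelihood(x, J, b):
    """sum_i log p(x_i | x_{-i}), computed without the partition function."""
    total = 0.0
    for i in range(len(x)):
        x0, x1 = x.copy(), x.copy()
        x0[i], x1[i] = 0.0, 1.0
        # log p(x_i | x_-i) = log p_tilde(x) - logsumexp over the two settings of x_i.
        log_norm = np.logaddexp(log_p_tilde(x0, J, b), log_p_tilde(x1, J, b))
        total += log_p_tilde(x, J, b) - log_norm
    return total

rng = np.random.RandomState(0)
n = 5
J = rng.randn(n, n); J = 0.5 * (J + J.T); np.fill_diagonal(J, 0.0)
b = rng.randn(n)
x = (rng.rand(n) < 0.5).astype(float)

# Only a handful of log_p_tilde evaluations per variable, versus the 2^n-term
# sum that computing log Z would require for this binary model.
print("log pseudolikelihood:", log_pseudolikelihood(x, J, b))
```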
This may look like an unprincipled hack, but it can be proven that estimation by maximizing the log pseudolikelihood is asymptotically consistent (Mase, 1995). Of course, in the case of datasets that do not approach the large sample limit, pseudolikelihood may display different behavior from the maximum likelihood estimator. It is possible to trade computational complexity for deviation from maximum likelihood behavior by using the generalized pseudolikelihood estimator (Huang and Ogata, 2002). The generalized pseudolikelihood estimator uses m different sets S^(i), i = 1, . . . , m, of indices of variables that appear together on the left side of the conditioning bar. In the extreme case of m = 1 and S^(1) = {1, . . . , n}, the generalized pseudolikelihood recovers the log likelihood. In the extreme case of m = n and S^(i) = {i}, the generalized pseudolikelihood recovers the pseudolikelihood. The generalized pseudolikelihood objective function is given by

$$\sum_{i=1}^{m} \log p(x_{S^{(i)}} \mid x_{-S^{(i)}}).$$
The performance of pseudolikelihood-based approaches depends largely on how the model will be used. Pseudolikelihood tends to perform poorly on tasks that require a good model of the full joint p(x), such as density estimation and sampling. However, it can perform better than maximum likelihood for tasks that require only the conditional distributions used during training, such as filling in small amounts of missing values. Generalized pseudolikelihood techniques are especially powerful if the data has regular structure that allows the S index sets to be designed to capture the most important correlations while leaving out groups of variables that only have negligible correlation. For example, in natural images, pixels that are widely separated in space also have weak correlation, so the generalized pseudolikelihood can be applied with each S set being a small, spatially localized window. One weakness of the pseudolikelihood estimator is that it cannot be used with other approximations that provide only a lower bound on p(x), ˜ such as variational inference, which will be covered in chapter 19.4. This is because p˜ appears in the denominator. A lower bound on the denominator provides only an upper bound on the expression as a whole, and there is no benefit to maximizing an upper bound. This makes it difficult to apply pseudolikelihood approaches to deep models such as deep Boltzmann machines, since variational methods are one of the dominant approaches to approximately marginalizing out the many layers of hidden variables that interact with each other. However, pseudolikelihood is still useful for deep learning, because it can be used to train single layer models, or deep models using approximate inference methods that are not based on lower bounds. 489
Pseudolikelihood has a much greater cost per gradient step than SML, due to its explicit computation of all of the conditionals. However, generalized pseudolikelihood and similar criteria can still perform well if only one randomly selected conditional is computed per example (Goodfellow et al., 2013b), thereby bringing the computational cost down to match that of SML. Though the pseudolikelihood estimator does not explicitly minimize log Z, it can still be thought of as having something resembling a negative phase. The denominators of each conditional distribution result in the learning algorithm suppressing the probability of all states that have only one variable differing from a training example.
18.4 Score Matching and Ratio Matching
Score matching (Hyvärinen, 2005b) provides another consistent means of training a model without estimating Z or its derivatives. The strategy used by score matching is to minimize the expected squared difference between the derivatives of the model’s log pdf with respect to the input and the derivatives of the data’s log pdf with respect to the input:

$$\theta^* = \min_\theta J(\theta) = \frac{1}{2} \mathbb{E}_x \left\| \nabla_x \log p_{\text{model}}(x; \theta) - \nabla_x \log p_{\text{data}}(x) \right\|_2^2.$$

Because ∇_x Z = 0, this objective function avoids the difficulties associated with differentiating the partition function. However, it appears to have another difficulty: it requires knowledge of the true distribution generating the training data, p_data. Fortunately, minimizing J(θ) turns out to be equivalent to minimizing

$$\tilde{J}(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \frac{\partial^2}{\partial x_j^2} \log p_{\text{model}}(x^{(i)}; \theta) + \frac{1}{2} \left( \frac{\partial}{\partial x_j} \log p_{\text{model}}(x^{(i)}; \theta) \right)^2 \right)$$
where {x^(1), . . . , x^(m)} is the training set and n is the dimensionality of x. Because score matching requires taking derivatives with respect to x, it is not applicable to models of discrete data. However, the latent variables in the model may be discrete. Like the pseudolikelihood, score matching only works when we are able to evaluate log p̃(x) and its derivatives directly. It is not compatible with methods that only provide a lower bound on log p̃(x), because we are not able to conclude anything about the relationship between the derivatives and second derivatives of the lower bound, and the relationship of the true derivatives and second derivatives needed for score matching. This means that score matching cannot be applied to estimating models with complicated interactions between the hidden units, such
as sparse coding models or deep Boltzmann machines. Score matching can be used to pretrain the first hidden layer of a larger model. Score matching has not been applied as a pretraining strategy for the deeper layers of a larger model, because the hidden layers of such models usually contain some discrete variables. While score matching does not explicitly have a negative phase, it can be viewed as a version of contrastive divergence using a specific kind of Markov chain (Hyv¨arinen, 2007a). The Markov chain in this case is not Gibbs sampling, but rather a different approach that makes local moves guided by the gradient. Score matching is equivalent to CD with this type of Markov chain when the size of the local moves approaches zero. Lyu (2009) generalized score matching to the discrete case (but made an error in their derivation that was corrected by Marlin et al. (2010)). Marlin et al. (2010) found that generalized score matching (GSM) does not work in high dimensional discrete spaces where the observed probability of many events is 0. A more successful approach to extending the basic ideas of score matching to discrete data is ratio matching (Hyv¨arinen, 2007b). Ratio matching applies specifically to binary data. Ratio matching consists of minimizing the following objective function:
$$J^{(RM)}(\theta) = \frac{1}{m} \sum_{i=1}^{m} \sum_{j=1}^{n} \left( \frac{1}{1 + \frac{p_{\text{model}}(x^{(i)}; \theta)}{p_{\text{model}}(f(x^{(i)}, j); \theta)}} \right)^2$$

where f(x, j) returns x with the bit at position j flipped. Ratio matching avoids the partition function using the same trick as the pseudolikelihood estimator: in a ratio of two probabilities, the partition function cancels out. Marlin et al. (2010) found that ratio matching outperforms SML, pseudolikelihood, and GSM in terms of the ability of models trained with ratio matching to denoise test set images.

Like the pseudolikelihood estimator, ratio matching requires n evaluations of p̃ per data point, making its computational cost per update roughly n times higher than that of SML. Like the pseudolikelihood estimator, ratio matching can be thought of as pushing down on all fantasy states that have only one variable different from a training example. Since ratio matching applies specifically to binary data, this means that it acts on all fantasy states within Hamming distance 1 of the data. Ratio matching can also be useful as the basis for dealing with high-dimensional sparse data, such as word count vectors. This kind of data poses a challenge for MCMC-based methods because the data is extremely expensive to represent in dense format, yet the MCMC sampler does not yield sparse values until the model
has learned to represent the sparsity in the data distribution. Dauphin and Bengio (2013) overcame this issue by designing an unbiased stochastic approximation to ratio matching. The approximation evaluates only a randomly selected subset of the terms of the objective, and does not require the model to generate complete fantasy samples.
18.5 Denoising Score Matching
In some cases we may wish to regularize score matching, by fitting a distribution

$$p_{\text{smoothed}}(x) = \int p_{\text{data}}(x + y) \, q(y \mid x) \, dy$$

rather than the true p_data. This is especially useful because in practice we usually do not have access to the true p_data but rather only an empirical distribution defined by samples from it. Any consistent estimator will, given enough capacity, make p_model into a set of Dirac distributions centered on the training points. Smoothing by q helps to reduce this problem, at the loss of the asymptotic consistency property. Kingma and LeCun (2010b) introduced a procedure for performing regularized score matching with the smoothing distribution q being normally distributed noise. Surprisingly, some denoising autoencoder training algorithms correspond to training energy-based models with denoising score matching (Vincent, 2011b). The denoising autoencoder variant of the algorithm is significantly less computationally expensive than score matching. Swersky et al. (2011) showed how to derive the denoising autoencoder for any energy-based model of real data. This approach is known as denoising score matching (SMD).
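A rough sketch of the denoising idea (our own illustration, not the exact procedure of any of the cited papers): corrupt each data point with Gaussian noise and regress the model’s score ∇_x log p_model toward the score of the Gaussian corruption kernel, which is available in closed form.

```python
import numpy as np

def denoising_score_matching_loss(score_fn, x_batch, sigma=0.1, rng=None):
    """Monte Carlo estimate of a denoising score matching objective.

    score_fn(x): model score, i.e. gradient of log p_model(x) w.r.t. x.
    With Gaussian corruption x_noisy = x + sigma * eps, the target score of the
    corruption kernel at x_noisy is (x - x_noisy) / sigma**2.
    """
    rng = rng if rng is not None else np.random.RandomState(0)
    eps = rng.randn(*x_batch.shape)
    x_noisy = x_batch + sigma * eps
    target = (x_batch - x_noisy) / sigma ** 2
    diff = score_fn(x_noisy) - target
    return 0.5 * np.mean(np.sum(diff ** 2, axis=1))

# Toy model: a zero-mean Gaussian with precision theta, so the score is -theta * x.
theta = 2.0
score_fn = lambda x: -theta * x

rng = np.random.RandomState(0)
x_batch = rng.randn(128, 3) * 0.7
print("DSM loss:", denoising_score_matching_loss(score_fn, x_batch, sigma=0.1, rng=rng))
```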
18.6 Noise-Contrastive Estimation
Most techniques for estimating models with intractable partition functions do not provide an estimate of the partition function. SML and CD estimate only the gradient of the log partition function, rather than the partition function itself. Score matching and pseudolikelihood avoid computing quantities related to the partition function altogether. Noise-contrastive estimation (NCE) (Gutmann and Hyvarinen, 2010) takes a different strategy. In this approach, the probability distribution estimated by the model is represented explicitly as

$$\log p_{\text{model}}(x) = \log \tilde{p}_{\text{model}}(x; \theta) + c,$$
where c is explicitly introduced as an approximation of − log Z(θ). Rather than estimating only θ, the noise contrastive estimation procedure treats c as just another parameter and estimates θ and c simultaneously, using the same algorithm for both. The resulting distribution thus may not correspond exactly to a valid probability distribution, but will become closer and closer to being valid as the estimate of c improves.² Such an approach would not be possible using maximum likelihood as the criterion for the estimator. The maximum likelihood criterion would choose to set c arbitrarily high, rather than setting c to create a valid probability distribution.

NCE works by reducing the unsupervised learning problem of estimating p(x) to a supervised learning problem. This supervised learning problem is constructed in such a way that maximum likelihood estimation in this supervised learning problem defines an asymptotically consistent estimator of the original problem. Specifically, we introduce a second distribution, the noise distribution p_noise(x). The noise distribution should be tractable to evaluate and to sample from. We can now construct a model over both x and a new, binary class variable y. In the new joint model, we specify that

$$p_{\text{joint model}}(y = 1) = \frac{1}{2}, \quad p_{\text{joint model}}(x \mid y = 1) = p_{\text{model}}(x), \quad \text{and} \quad p_{\text{joint model}}(x \mid y = 0) = p_{\text{noise}}(x).$$

In other words, y is a switch variable that determines whether we will generate x from the model or from the noise distribution. We can construct a similar joint model of training data. In this case, the switch variable determines whether we draw x from the data or from the noise distribution. Formally, p_train(y = 1) = 1/2, p_train(x | y = 1) = p_data(x), and p_train(x | y = 0) = p_noise(x). We can now just use standard maximum likelihood learning on the supervised learning problem of fitting p_joint model to p_train:

$$\theta, c = \arg\max_{\theta, c} \; \mathbb{E}_{x, y \sim p_{\text{train}}} \log p_{\text{joint model}}(y \mid x).$$
It turns out that p_joint model is essentially a logistic regression model applied to the difference in log probabilities of the model and the noise distribution:

$$p_{\text{joint model}}(y = 1 \mid x) = \frac{p_{\text{model}}(x)}{p_{\text{model}}(x) + p_{\text{noise}}(x)}$$
$$= \frac{1}{1 + \frac{p_{\text{noise}}(x)}{p_{\text{model}}(x)}}$$
$$= \frac{1}{1 + \exp\left( \log \frac{p_{\text{noise}}(x)}{p_{\text{model}}(x)} \right)}$$
$$= \sigma\left( -\log \frac{p_{\text{noise}}(x)}{p_{\text{model}}(x)} \right)$$
$$= \sigma\left( \log p_{\text{model}}(x) - \log p_{\text{noise}}(x) \right).$$

² NCE is also applicable to problems with a tractable partition function, where there is no need to introduce the extra parameter c. However, it has generated the most interest as a means of estimating models with difficult partition functions.
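The objective that results from this view is easy to write down. Below is a rough sketch (our own; the unnormalized Gaussian model, the constant c, and the Gaussian noise distribution are toy stand-ins) of the NCE objective as the log-likelihood of the binary classification problem derived above.

```python
import numpy as np

def log_sigmoid(z):
    return -np.logaddexp(0.0, -z)

def nce_objective(theta, c, x_data, x_noise, log_p_noise):
    """Average log-likelihood of the binary classification problem defined by NCE.

    Toy unnormalized model: log p_tilde(x; theta) = -0.5 * theta * x**2, and
    log p_model(x) = log p_tilde(x; theta) + c, with c estimated jointly with theta.
    """
    log_p_model = lambda x: -0.5 * theta * x ** 2 + c
    # p_jointmodel(y = 1 | x) = sigmoid(log p_model(x) - log p_noise(x)).
    logit_data = log_p_model(x_data) - log_p_noise(x_data)
    logit_noise = log_p_model(x_noise) - log_p_noise(x_noise)
    # Data should be classified as y = 1 and noise samples as y = 0.
    return np.mean(log_sigmoid(logit_data)) + np.mean(log_sigmoid(-logit_noise))

rng = np.random.RandomState(0)
x_data = rng.randn(1000) * 0.5                 # "true" distribution: N(0, 0.25)
x_noise = rng.randn(1000)                      # noise distribution: N(0, 1)
log_p_noise = lambda x: -0.5 * x ** 2 - 0.5 * np.log(2 * np.pi)

# Maximizing this objective over (theta, c) recovers theta near 1/0.25 = 4 and
# c near -log Z(theta); here we just evaluate it at two parameter settings.
c_true = -0.5 * np.log(2 * np.pi / 4.0)        # -log Z(theta) for theta = 4
print("objective near true parameters:", nce_objective(4.0, c_true, x_data, x_noise, log_p_noise))
print("objective at a poor setting:   ", nce_objective(1.0, 0.0, x_data, x_noise, log_p_noise))
```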
NCE is thus simple to apply so long as log p̃_model is easy to backpropagate through, and, as specified above, p_noise is easy to evaluate (in order to evaluate p_joint model) and sample from (in order to generate the training data). NCE is most successful when applied to problems with few random variables, but can work well even if those random variables can take on a high number of values. For example, it has been successfully applied to modeling the conditional distribution over a word given the context of the word (Mnih and Kavukcuoglu, 2013). Though the word may be drawn from a large vocabulary, there is only one word. When NCE is applied to problems with many random variables, it becomes less efficient. The logistic regression classifier can reject a noise sample by identifying any one variable whose value is unlikely. This means that learning slows down greatly after p_model has learned the basic marginal statistics. Imagine learning a model of images of faces, using unstructured Gaussian noise as p_noise. If p_model learns about eyes, it can reject almost all unstructured noise samples without having learned anything about other facial features, such as mouths. The constraint that p_noise must be easy to evaluate and easy to sample from can be overly restrictive. When p
realsciencechallenge.com | 1,516,411,444,000,000,000 | text/html | crawl-data/CC-MAIN-2018-05/segments/1516084888341.28/warc/CC-MAIN-20180120004001-20180120024001-00477.warc.gz | 274,681,259 | 15,963 | # #23 – What’s Interpolation? Our 5-minute Crash Course on Graph Analysis
Imagine looking at your watch but not knowing how to read the time. Or, looking at a newspaper headline but not understanding what it’s saying. Both are important skills to help you function in the everyday. Without either one, doing everyday work gets a little harder. Knowing how to read a graph in science class is no different. Graph analysis is an important skill and, without it, learning science gets a little harder.
Unfortunately, students struggle with graph analysis and, specifically, interpolation. Recent results from REAL Science Challenge Vol 2 Contest 1 support this claim. In questions where students need to practice interpolation (i.e., finding a value for y given a value of x, and vice versa), only x% of students provide the correct answer. That means roughly x students in a class of 30 struggle with graph analysis, with finding values from a graph.
In this post, we provide a quick overview and some examples on how to examine a graph and get some values through interpolation. At the end of our post, we have a cheat sheet available for download.
## Why is graph analysis important?
Graph analysis is really about finding relevant information from a graph to solve a problem. Students need to know what information to extract from a graph before analysis can occur.
## I. Basic Line Graph Analysis
Consider the following line graph:
BEFORE STARTING: Check the axes and their values.
### If given a value that is plotted along the x-axis:
1. Find the given value along the x-axis.
2. From this point, trace a straight line vertically (parallel to the y-axis) until it intersects the line graph.
3. Then, trace a line horizontally (parallel to the x-axis) from the intersection to the y-axis.
4. The value of y corresponding to the given x value is where the traced line intersects the y-axis.
For example, consider we want to determine the cost of installing a fence that is 17 feet in length.
Through graph interpolation, we can estimate that the cost would be roughly \$410.
### If given a value plotted along the y-axis:
1. Find the given value along the y-axis.
2. From this point, trace a straight line horizontally (parallel to the x-axis) until it intersects the line graph.
3. Then, trace a line vertically (parallel to the y-axis) from the intersection to the x-axis.
4. The value of x corresponding to the given y value is where the traced line intersects the x-axis.
For example, consider we want to determine what length fence we can install for \$275.
Through graph interpolation, we can determine that the fence would be 8 feet in length.
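The same estimates can be made numerically with linear interpolation between known points on the line. The fence data points below are invented for illustration, since the actual graph is not reproduced here.

```python
import numpy as np

# hypothetical (length in feet, cost in dollars) points read off the fencing graph
lengths = np.array([0, 5, 10, 15, 20])
costs = np.array([100, 200, 300, 380, 450])

print(np.interp(17, lengths, costs))    # cost of a 17-foot fence (given x, find y)
print(np.interp(275, costs, lengths))   # length affordable for $275 (given y, find x)
```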
## II. Basic Bar Graph Analysis
Consider the following bar graph.
The steps for bar graph analysis are similar to those for a line graph. However, since individual bars on a bar graph represent the range of possible values for a given x or y condition, a bar can potentially intersect with a range of x or y conditions. Thus, interpolating bar graphs can produce multiple results (unlike most line graphs that typically produce a single result).
For example, let’s say we want to determine what fiction books had \$60 million of gross earnings.
1. We find given value along y axis. From this point, trace a straight line horizontally (parallel to the x-axis) until it intersects with the bar graphs.
2. Then, trace line(s) vertically (parallel to the y axis) from the intersect(s) to the x-axis.
Through graph interpolation, we find multiple values that match the original query (romance novels from 2006-2010 and mystery novels from 2006-2007).
## Wrap Up
Learning to read a graph is an important skill that every student needs to master. Being able to extract data is the first step towards analyzing data, and teachers need to teach it explicitly. And students need to practice the skill too (REAL Science examples to follow in a future post). Click the link below to download our REAL Science – Interpolation Cheat Sheet.
Until next time, keep it REAL.
Posted on January 9, 2018 in Strategies | 860 | 4,058 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.78125 | 5 | CC-MAIN-2018-05 | longest | en | 0.900725 |
https://fr.scribd.com/doc/78704379/MC0074-Statistical-and-Numerical-methods-using-C | 1,579,742,626,000,000,000 | text/html | crawl-data/CC-MAIN-2020-05/segments/1579250608062.57/warc/CC-MAIN-20200123011418-20200123040418-00452.warc.gz | 461,417,919 | 64,430 | Vous êtes sur la page 1sur 14
August 2011 Master of Computer Application (MCA) Semester 3 MC0074 Statistical and Numerical methods using C++
Submitted by Ravish R Roll No.511122302 Course- MCA Centre Code- 2759
Assignment Set 1
1. A box contains 74 brass washers, 86 steel washers and 40 aluminum washers. Three washers are drawn at random from the box without replacement. Determine the probability that all three are steel washers.
Total number of washers in the box = 74 + 86 + 40 = 200.
The number of elements in the sample space, n(S), is the number of ways in which three washers can be drawn together at random out of these 200 washers: 200C3 = (200 x 199 x 198)/(3 x 2 x 1).
Let E be the event of drawing three steel washers. Then n(E) is the number of ways in which 3 steel washers can be drawn out of the 86 steel washers: 86C3 = (86 x 85 x 84)/(3 x 2 x 1).
The required probability is P(E) = n(E)/n(S) = (86 x 85 x 84)/(200 x 199 x 198) = 0.07792.
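A quick check of this result with Python's combinatorics helpers (an editorial addition, not part of the original assignment):

```python
from math import comb

p = comb(86, 3) / comb(200, 3)   # ways to choose 3 steel washers / ways to choose any 3
print(round(p, 5))               # 0.07792
```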
2. Discuss and define the Correlation coefficient with the suitable example.
Correlation is one of the most widely used statistical techniques. Whenever two variables are so related that a change in one variable results in a direct or inverse change in the other, and a greater magnitude of change in one variable corresponds to a greater magnitude of change in the other, the variables are said to be correlated, and the relationship between the variables is known as correlation. We have been concerned with associating parameters such as E(X) and V(X) with the distribution of a one-dimensional random variable. If we have a two-dimensional random variable (X, Y), an analogous problem is encountered. Definition: Let (X, Y) be a two-dimensional random variable. We define rho_xy, the correlation coefficient between X and Y, as follows:
rho_xy = E[(X − E(X))(Y − E(Y))] / sqrt(V(X) V(Y))
The numerator of rho_xy is called the covariance of X and Y. Example: Suppose that the two-dimensional random variable (X, Y) is uniformly distributed over the triangular region
R = {(x, y) | 0 < x < y < 1}. The pdf is given as f(x, y) = 2 for (x, y) in R, and 0 elsewhere. Thus the marginal pdfs of X and of Y are
g(x) = 2(1 − x), 0 ≤ x ≤ 1
h(y) = 2y, 0 ≤ y ≤ 1
Therefore
E(X) = ∫ 2x(1 − x) dx over [0, 1] = 1/3,   E(Y) = ∫ 2y² dy over [0, 1] = 2/3,
E(X²) = ∫ 2x²(1 − x) dx over [0, 1] = 1/6,   E(Y²) = ∫ 2y³ dy over [0, 1] = 1/2,
V(X) = E(X²) − (E(X))² = 1/18,   V(Y) = E(Y²) − (E(Y))² = 1/18,
E(XY) = ∫∫ 2xy dx dy over the triangle = 1/4.
Hence
rho_xy = (E(XY) − E(X)E(Y)) / sqrt(V(X) V(Y)) = (1/4 − 2/9) / (1/18) = 1/2.
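A quick Monte Carlo cross-check of this example (editorial addition): sampling uniformly from the triangle 0 < x < y < 1 should give a correlation coefficient close to 1/2.

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((200000, 2))
x, y = pts[:, 0], pts[:, 1]
keep = x < y                      # rejection sampling of the triangular region
x, y = x[keep], y[keep]

print(round(np.corrcoef(x, y)[0, 1], 3))   # close to 0.5
```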
3. If x is normally distributed with zero mean and unit variance, find the expectation and variance of x².
Since x is normally distributed with mean 0 and variance 1, its pdf is f(x) = (1/sqrt(2π)) e^(−x²/2).
Expectation of x²:
E(x²) = ∫ x² (1/sqrt(2π)) e^(−x²/2) dx over (−∞, ∞).          ... (i)
Integrating by parts, taking x as the first function and remembering that ∫ (1/sqrt(2π)) e^(−x²/2) dx over (−∞, ∞) = 1, we get
E(x²) = 1.
Similarly, integrating by parts with x³ as the first function,
E(x⁴) = ∫ x⁴ (1/sqrt(2π)) e^(−x²/2) dx over (−∞, ∞) = 3 E(x²) = 3(1) = 3.          ... (ii)
With the help of (i) and (ii),
Variance of x² = E(x⁴) − [E(x²)]² = 3 − (1)² = 2.
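The same answer can be checked by simulation (editorial addition): for a large standard normal sample, the variance of x² should be close to 2.

```python
import numpy as np

x = np.random.default_rng(2).standard_normal(1_000_000)
print(round((x**2).mean(), 3), round((x**2).var(), 3))   # close to 1 and 2
```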
4. The sales in a particular department store for the last five years are given in the following table. Estimate the sales for the year 1979.
Year:              1974  1976  1978  1980  1982
Sales (in lakhs):    40    43    48    52    57
Newton's backward difference table is:
x      y     diff1   diff2   diff3   diff4
1974   40
1976   43      3
1978   48      5       2
1980   52      4      -1      -3
1982   57      5       1       2       5
We have
p = (1979 − 1982)/2 = −1.5, and the backward differences at x_n = 1982 are 5, 1, 2 and 5. Newton's backward interpolation formula gives
y(1979) = 57 + p(5) + [p(p + 1)/2!](1) + [p(p + 1)(p + 2)/3!](2) + [p(p + 1)(p + 2)(p + 3)/4!](5)
        = 57 − 7.5 + 0.375 + 0.125 + 0.1172
y(1979) = 50.1172 (lakhs).
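For completeness, a small sketch of Newton's backward-difference interpolation in code (editorial addition; the function name is arbitrary):

```python
import numpy as np

def newton_backward(x, y, x_eval):
    """Newton's backward-difference interpolation on equally spaced points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    h = x[1] - x[0]
    diffs = [y.copy()]
    while len(diffs[-1]) > 1:
        diffs.append(np.diff(diffs[-1]))     # k-th column ends with the k-th backward difference
    p = (x_eval - x[-1]) / h
    term, total = 1.0, y[-1]
    for k in range(1, len(x)):
        term *= (p + k - 1) / k              # p(p+1)...(p+k-1) / k!
        total += term * diffs[k][-1]
    return total

years = [1974, 1976, 1978, 1980, 1982]
sales = [40, 43, 48, 52, 57]
print(round(newton_backward(years, sales, 1979), 4))   # 50.1172
```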
5. Find the geometric mean of the following series.
Class:      0-10   10-20   20-30   30-40   40-50
Frequency:    17      10      11      15       8
Here we have:
Class    Frequency (f)   Mid value (x)   log x     f.(log x)
0-10          17               5         0.6990     11.8830
10-20         10              15         1.1761     11.7610
20-30         11              25         1.3979     15.3769
30-40         15              35         1.5441     23.1615
40-50          8              45         1.6532     13.2256
             N = 61                               Sum = 75.408
If G is the required geometric mean, then
log G = (1/N) sum(f.log x) = (1/61)(75.408) = 1.2362
G = antilog 1.2362 ≈ 17.23
6. Find the equation of the regression line of x on y from the following data:
x:  0   1   2   3   4
y: 10  12  27  10  30
sum(X) = 0+1+2+3+4 = 10, sum(X²) = 0+1+4+9+16 = 30, sum(Y) = 10+12+27+10+30 = 89, sum(Y²) = 100+144+729+100+900 = 1973, sum(XY) = 0(10) + 1(12) + 2(27) + 3(10) + 4(30) = 216.
n = 5, Xbar = sumX / n = 10/5 = 2, Ybar = sumY / n = 89/5 = 17.8.
Since the question asks for the regression line of x on y, the regression coefficient is
b_xy = [n sumXY − sumX sumY] / [n sumY² − (sumY)²] = (5(216) − 10(89)) / (5(1973) − 89²) = (1080 − 890) / (9865 − 7921) = 190/1944 ≈ 0.0977.
The regression line of x on y is x − Xbar = b_xy (y − Ybar), i.e.
x = 0.0977(y − 17.8) + 2 ≈ 0.0977y + 0.260.
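A short numerical check of the regression coefficients (editorial addition):

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4])
y = np.array([10, 12, 27, 10, 30])

n = len(x)
b_xy = (n * np.sum(x * y) - x.sum() * y.sum()) / (n * np.sum(y**2) - y.sum() ** 2)
a = x.mean() - b_xy * y.mean()
print(round(b_xy, 4), round(a, 4))   # regression of x on y: slope ~0.0977, intercept ~0.2603
```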
Assignment Set 2
1. Briefly explain the concept of Bernoulli's process.
Consider a sequence of independent Bernoulli trials and let the discrete random variable Yi denote the result of the i-th trial, so that the event [Yi = 1] denotes a success on the i-th trial and the event [Yi = 0] denotes a failure on the i-th trial. Further assume that the probability of success on the i-th trial, P[Yi = 1], is p, which is independent of the index i. Then {Yi | i = 1, 2, ...} is a discrete-state, discrete-parameter stochastic process, which is stationary in the strict sense. Since the Yi are mutually independent, the above process is an independent process known as the Bernoulli process. Since Yi is a Bernoulli random variable, we recall that E[Yi] = p, E[Yi²] = p, Var[Yi] = p(1 − p) and G_Yi(z) = (1 − p) + pz. Based on the Bernoulli process, we may form another stochastic process by considering the sequence of partial sums {Sn | n = 1, 2, ...}, where Sn = Y1 + Y2 + ... + Yn. By rewriting Sn = Sn−1 + Yn, it is not difficult to see that {Sn} is a discrete-state, discrete-parameter Markov process, since P(Sn = k | Sn−1 = k) = P(Yn = 0) = 1 − p and P(Sn = k | Sn−1 = k − 1) = P(Yn = 1) = p. Clearly P(Sn = k) = C(n, k) p^k (1 − p)^(n−k), E[Sn] = np, Var[Sn] = np(1 − p) and G_Sn(z) = (1 − p + pz)^n. Define the discrete random variable T1, called the first-order interarrival time, to be the number of trials up to and including the first success. T1 is geometrically distributed, so that P(T1 = i) = p(1 − p)^(i−1), i = 1, 2, ..., E(T1) = 1/p, Var(T1) = (1 − p)/p², and G_T1(z) = zp / (1 − z(1 − p)). Similarly G_Tr(z) = [zp / (1 − z(1 − p))]^r.
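A brief simulation sketch of these Bernoulli-process facts (editorial addition; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, trials = 0.3, 20, 100_000

Y = rng.random((trials, n)) < p          # realizations of the Bernoulli process
S_n = Y.sum(axis=1)                      # partial sums
print(S_n.mean(), n * p)                 # E[S_n] = np
print(S_n.var(), n * p * (1 - p))        # Var[S_n] = np(1 - p)

T1 = rng.geometric(p, size=trials)       # first-order interarrival time
print(T1.mean(), 1 / p)                  # E[T1] = 1/p
```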
2. If 2/3 is approximated by 0.667, find the absolute and relative errors.
An error is usually quantified in two different but related ways. One is known as absolute error and the other is called relative error. Let us suppose that the true value of a data item is denoted by xt and its approximate value is denoted by xa. Then they are related as follows: true value xt = approximate value xa + error. The error is then given by: error = xt − xa. The error may be negative or positive depending on the values of xt and xa. In error analysis, what is important is the magnitude of the error and not the sign, and therefore we normally consider what is known as the absolute error, which is denoted by ea = |xt − xa|. In general the absolute error is the numerical difference between the true value of a quantity and its approximate value. In many cases, the absolute error may not reflect its influence correctly as it does not take into account the order of magnitude of the value. In view of this, the concept of relative error is introduced, which is nothing but the normalized absolute error. The relative error is defined as er = ea / |xt| = |xt − xa| / |xt|.
Here xt = 2/3 = 0.66666... and xa = 0.667, so ea = |2/3 − 0.667| ≈ 0.000333 and er = ea / (2/3) ≈ 0.0005.
3. If Δ, ∇, δ denote the forward, backward and central difference operators, and E and μ are respectively the shift and average operators, in the analysis of data with equal spacing h, show that
(1) 1 + δ²μ² = (1 + δ²/2)²
(2) E^(1/2) = μ + δ/2
(3) Δ = δ²/2 + δ sqrt(1 + δ²/4)
Proof. With the usual definitions Δ = E − 1, ∇ = 1 − E^(−1), δ = E^(1/2) − E^(−1/2) and μ = (E^(1/2) + E^(−1/2))/2:
(1) μ² = (E + 2 + E^(−1))/4 = 1 + δ²/4, therefore
1 + δ²μ² = 1 + δ²(1 + δ²/4) = 1 + δ² + δ⁴/4 = (1 + δ²/2)².
(2) μ + δ/2 = (E^(1/2) + E^(−1/2))/2 + (E^(1/2) − E^(−1/2))/2 = E^(1/2).
(3) δ sqrt(1 + δ²/4) = δμ = (E^(1/2) − E^(−1/2))(E^(1/2) + E^(−1/2))/2 = (E − E^(−1))/2, and δ²/2 = (E − 2 + E^(−1))/2.
Adding, δ²/2 + δ sqrt(1 + δ²/4) = (E − E^(−1))/2 + (E − 2 + E^(−1))/2 = E − 1 = Δ. Thus we get the required results.
4. Find a real root of the equation x³ − 4x − 9 = 0 using the bisection method.
Let x0 = 1 and x1 = 3. F(x0) = 1 − 4 − 9 = −12 < 0 and F(x1) = 27 − 12 − 9 = 6 > 0, therefore the root lies between 1 and 3.
Now we try x2 = 2: F(x2) = 8 − 8 − 9 = −9 < 0, therefore the root lies between 2 and 3.
x3 = (x1 + x2)/2 = (3 + 2)/2 = 2.5: F(x3) = 15.625 − 10 − 9 = −3.375 < 0, therefore the root lies between 2.5 and 3.
x4 = (x1 + x3)/2 = 2.75.
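Continuing the iteration by hand quickly becomes tedious; a small bisection routine (editorial sketch) converges to the root near 2.7065:

```python
def f(x):
    return x**3 - 4*x - 9

def bisect(a, b, tol=1e-6):
    """Plain bisection; assumes f(a) and f(b) have opposite signs."""
    while b - a > tol:
        m = (a + b) / 2
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return (a + b) / 2

print(round(bisect(1, 3), 4))   # ~2.7065
```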
5. Find Newton's difference interpolation polynomial for the following data:
x:     0.1   0.2   0.3   0.4   0.5
f(x):  1.40  1.56  1.76  2.00  2.28
Here h = 0.1 and, taking x0 = 0.1,
p = (x − x0)/h = (x − 0.1)/0.1 = 10x − 1.
The forward differences are Δy0 = 0.16, Δ²y0 = 0.04, and Δ³y0 = Δ⁴y0 = 0.
We have Newton's forward interpolation formula as
y = y0 + pΔy0 + [p(p − 1)/2!]Δ²y0 + [p(p − 1)(p − 2)/3!]Δ³y0 + ...      (1)
From the table, substitute all the values in equation (1):
y = 1.40 + (10x − 1)(0.16) + [(10x − 1)(10x − 2)/2](0.04) = 2x² + x + 1.28. This is the required Newton interpolating polynomial.
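Since the data lie exactly on a quadratic, the result is easy to verify with a degree-2 least-squares fit (editorial addition):

```python
import numpy as np

x = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
y = np.array([1.40, 1.56, 1.76, 2.00, 2.28])

print(np.round(np.polyfit(x, y, 2), 2))   # [2.   1.   1.28]
```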
6. Evaluate the integral of 1/(1 + x²) from 0 to 1 using the trapezoidal rule with h = 0.2. Hence determine the value of π.
For equally spaced points with step h, the integral of f(x) over [a, b] ≈ (h/2)[y0 + 2(y1 + y2 + ... + y(n−1)) + yn],
which is known as the trapezoidal rule. The trapezoidal rule uses trapezoids to approximate the curve on a subinterval. The area of a trapezoid is the width times the average height, given by the sum of the function values at the endpoints, divided by two. Therefore: 0.2( f(0) + 2f(0.2) + 2f(0.4) + 2f(0.6) + 2f(0.8) + f(1) ) / 2 = 0.2( 1 + 2*(0.96154) + 2(0.86207) + 2(0.73529) + 2(0.60976) + 0.5) / 2 = 0.78373 The integrand is the derivative of the inverse tangent function. In particular, if we integrate from 0 to 1, the answer is pi/4 . Consequently we can use this integral to approximate pi. Multiplying by four, we get an approximation for pi: 3.1349 | 2,848 | 8,608 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.28125 | 4 | CC-MAIN-2020-05 | latest | en | 0.879514 |
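As a quick cross-check of the numbers above (editorial addition), NumPy's trapezoid rule on the same six points reproduces 0.78373 and the π estimate 3.1349:

```python
import numpy as np

f = lambda x: 1.0 / (1.0 + x**2)
x = np.linspace(0, 1, 6)                            # h = 0.2
integral = np.trapz(f(x), x)
print(round(integral, 5), round(4 * integral, 4))   # 0.78373 and 3.1349
```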
http://www.diyaudio.com/forums/multi-way/314917-infinite-line-source-analysis-11.html | 1,532,250,069,000,000,000 | text/html | crawl-data/CC-MAIN-2018-30/segments/1531676593142.83/warc/CC-MAIN-20180722080925-20180722100925-00533.warc.gz | 458,780,425 | 18,477 | Infinite Line Source: analysis
gedlee
diyAudio Member
Join Date: Dec 2004
Location: Novi, Michigan
Quote:
Originally Posted by jlo I don't use any Green function. The simulated acoustic radiation is a brute force approach based on Huygens-Fresnel decomposition of sources and real-time superposition of all radiations at listener place.
I looked up Huygens-Fresnel in Wiki and it appears that these are based on a Greens function. That's what the propagation function is.
Quote:
Why not do it here ?
Personally, I'd like the OP to confirm that. It's his thread and I liked what he was doing.
__________________
Earl Geddes Gedlee Website
4th December 2017, 03:25 AM #102 werewolf diyAudio Member Join Date: Feb 2004 Location: Austin, TX POST #7 C. Infinite Line Source: frequency-domain pressure response Coordinates : Imagine, if you will, a rather standard 3-D (rectangular) coordinate system, comprised of the 3 classic, orthogonal axes : x, y and z We'll consider the x-y plane being the "horizontal" plane, with the z-axis extending "vertically". We're going to build our Infinite Line Source along (coincident with) the vertical z-axis The Infinite Line Source will have a constant volume acceleration per-unit-length of "Al". But we start with baby steps Here's how we proceed : First, let's consider a little (VERY little) "piece" of the Infinite Line Source, at some point "z" along the z-axis (meaning, our little "piece" will be at a distance "z" from the origin). Our little "piece" of the Infinite Line Source has a small (VERY small) length of "dz". Now the clever bit : our little (VERY little) piece behaves just like an elemental monopole point-source ... which we just analyzed ... radiating with constant volume acceleration of (Al*dz) What we need, to complete our fist step, is to identify the pressure measured at some random point in space, resulting from our little piece/elemental monopole. Let's start by placing our measuring point on the specific x-y plane of z=0, at a radial distance of "r" from the z-axis. The symmetry of our arrangement dictates that all measuring points at a distance of "r" from the z-axis will receive the same pressure from our little piece, no matter where in the x-y plane those measuring points reside. We also know that our little piece of the line is acting like an elemental monopole at some vertical height of "z". What's the distance from the little piece to our measuring point? Let's call it "R" ... Pythagoras reveals the result for us : R^2 = r^2 + z^2 R = sqrt[r^2 + z^2] Now we have what we need, to write the pressure from our little piece (VERY little) ... where the little piece is at some point "z" along the line, and we are measuring at some radial distance "r" from the line, in the x-y plane of z=0 We'll simply recall the pressure we discussed from a point-source, to identify : pressure at "r" from little piece of line = {Al*dz} * {exp[jwt]} * {[rho/(4pi*R)] * exp[-jkR]} where k = wavenumber = w/c w= frequency c = speed of sound (we'll soon be ignoring that exp[jwt] term, because it's the time-dependent excitation ... which i like to separate from my "transfer function" analysis) next up : we'll add-up all of our little pieces, to form the infinitely long line
werewolf
diyAudio Member
Join Date: Feb 2004
Location: Austin, TX
Quote:
Originally Posted by gedlee I looked up Huygens-Fresnel in Wiki and it appears that these are based on a Greens function. That's what the propagation function is. Personally, I'd like the OP to confirm that. It's his tread and I liked what he was doing.
Gentlemen, thank you for the respect please feel free to take this thread in any direction at all, i rather like the engagement ... including detours
I'm labeling the main direction with "post numbers", so that readers can stay on the main street easily, if they so choose
jlo
diyAudio Member
Join Date: Nov 2004
Location: france
Quote:
Originally Posted by gedlee I looked up Huygens-Fresnel in Wiki and it appears that these are based on a Greens function. That's what the propagation function is.
Green's functions are solutions to the wave equation in the case of Huygens-Fresnel sources, but I don't need to use Green's functions.
I will try to explain the bases : look at post #7 just above and consider just one tiny point source. Signal at listener place is same as source signal but delayed and attenuated depending on this source to listener distance.
Consider another tiny point source : signal at listener is same as this source signal but delayed and attenuated depending on this distance.
Now total signal at listener due to those two sources is simply the real time addition of both signals.
For many sources, just add all signals : divide each loudspeaker in many tiny point sources. And a line array is just many loudspeakers.
Now computers are fast enough to process thousands of signals in real time.
With this brute force approach, you don't need to solve any equation.
It's quite simple but I have not seen this method used before.
__________________
jl ohl
Last edited by jlo; 4th December 2017 at 08:32 PM.
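A rough sketch of the brute-force superposition jlo describes, assuming simple free-field point sources; the geometry, frequency and source count below are made up for illustration. Each source contributes a 1/R-attenuated, distance-delayed copy of the drive signal, and the contributions are simply summed at the listening position.

```python
import numpy as np

c = 343.0                                   # speed of sound, m/s
f = 1000.0                                  # tone frequency, Hz
k = 2 * np.pi * f / c

# approximate a 2 m vertical line array by 200 point sources
z_src = np.linspace(-1.0, 1.0, 200)
listener = np.array([3.0, 0.0, 0.0])        # 3 m in front, on axis

# complex pressure = sum of 1/R-attenuated, phase-delayed contributions
R = np.sqrt(listener[0]**2 + listener[1]**2 + (listener[2] - z_src)**2)
p = np.sum(np.exp(-1j * k * R) / (4 * np.pi * R))
print(abs(p))
```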
4th December 2017, 08:35 PM #105 gedlee diyAudio Member Join Date: Dec 2004 Location: Novi, Michigan Whether or not you realize it you are using exactly the Green's Function approach to find the attenuation and the delay of each element, and the functions that you are using are the free field ones, hence you are not accounting for many aspects of the problem like diffraction of the enclosure, etc. Its an interesting approach to "real time" analysis, but people have been doing this off-line for more than a century. I am not sure that the limit on accuracy required to do real-time is a major advantage to a more accurate model that just takes a few more minutes. BEM and FEA, for example, can do this problem to an extremely high accuracy, just not in real time. __________________ Earl Geddes Gedlee Website
jlo
diyAudio Member
Join Date: Nov 2004
Location: france
Quote:
Originally Posted by gedlee Whether or not you realize it you are using exactly the Green's Function approach to find the attenuation and the delay of each element, and the functions that you are using are the free field ones, hence you are not accounting for many aspects of the problem like diffraction of the enclosure, etc. Its an interesting approach to "real time" analysis, but people have been doing this off-line for more than a century. I am not sure that the limit on accuracy required to do real-time is a major advantage to a more accurate model that just takes a few more minutes. BEM and FEA, for example, can do this problem to an extremely high accuracy, just not in real time.
For attenuation and delay, I just have to calculate the distance. Diffraction of the enclosure is simulated by many secondary sources. Reflections (walls, floor,...) are simulated by image sources. It is just a question of accuracy vs computing power.
Also, for accuracy of primary sources and diffraction sources, you may not always use point sources: an obliquity factor (see Kirchhoff's diffraction) should sometimes be used.
One advantage is a true real time auralization (not a convolution) : you can change a parameter or a position while listening to the result.
__________________
jl ohl
Last edited by jlo; 4th December 2017 at 08:54 PM.
7th December 2017, 07:36 PM #107 jlo diyAudio Member Join Date: Nov 2004 Location: france To explain the method, I just published a (basic) video here : lapa - YouTube __________________ jl ohl ohl about audio Last edited by jlo; 7th December 2017 at 07:42 PM.
7th December 2017, 08:08 PM #108 gedlee diyAudio Member Join Date: Dec 2004 Location: Novi, Michigan Looks like a nice piece of software. Why does the screen change for every type of analysis? Does each analysis have to be compiled or do the variables automatically show up as sliders? I have never been much in favor of auralization except in the case of research, because in valid tests of "sound quality" they must be done blind. However a researcher could setup very nice tests with your software, which I would love to do. The only thing is that we have no money at all for this kind of testing - I typically do all the simulations myself and Lidia just uses her lab. Funding is never available. How would you like to donate this software for studies of diffraction, etc? __________________ Earl Geddes Gedlee Website
7th December 2017, 08:28 PM #109 jlo diyAudio Member Join Date: Nov 2004 Location: france My basic question was : what phenomenas (distortions) are audible and to which level ? And to understand, you need to separate variables. So I did one software to listen to diffraction, one for crossover auralization, one for reflexions and room modes, one for arrays, one for cardioid loudspeakers in a room, etc.... At the end, I could mix everything into one software only. The main problem would be too many parameters to set. One tricky simulation is the waveguide (Earl knows it well), I tried to simulate it with image sources but it quickly gets very complicated. To Earl : for research purposes, I would be glad to give you any software (for free of course). __________________ jl ohl ohl about audio Last edited by jlo; 7th December 2017 at 08:37 PM.
7th December 2017, 08:36 PM #110 gedlee diyAudio Member Join Date: Dec 2004 Location: Novi, Michigan And what about the head diffraction? Do you have any, a generic model or does it allow for specific HRTFs to be used? The reason that I ask is that Lidia and I are currently doing some research on very early wall reflections. We could certainly use software like that to generate our test signals. You could generate them and we would credit the software in our paper. That could be a big boost to your marketing. E-mail me directly at egeddes@gedlee.com if this interests you. __________________ Earl Geddes Gedlee Website Last edited by gedlee; 7th December 2017 at 08:39 PM.
New To Site? Need Help?
All times are GMT. The time now is 09:01 AM. | 2,898 | 12,526 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.78125 | 3 | CC-MAIN-2018-30 | latest | en | 0.904585 |
http://manthanahouse.com/ambergris-value-uijs/3c9393-sp2-hybridization-double-bond | 1,628,188,010,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046156141.29/warc/CC-MAIN-20210805161906-20210805191906-00039.warc.gz | 24,199,525 | 6,568 | The oxygen atom, like the carbon atom, also has a trigonal planar arrangement of the electrons that requires sp2 hybridization. sp hybridization explains the chemical bonding in compounds with triple bonds, such as alkynes; in this model, the 2s orbital mixes with only one of the three p-orbitals, resulting in two sp orbitals and two remaining p-orbitals. In sp hybridization, the s orbital of the excited state carbon is mixed with only one out of the three 2p orbitals. When one âsâ orbital and 3 âpâ orbitals belonging to the same shell of an atom mix together to form four new equivalent orbital, the type of hybridization is called a tetrahedral hybridization or sp 3. 7) breaks the Ï bond, for then the axes of the p orbitals are perpendicular and there is no net overlap between them. For determining hybridization, always count regions of electron density. The hybridization theory is often seen as a long and confusing concept and it is a handy skill to be able to quickly determine if the atom is sp 3, sp 2 or sp without having to go through all the details of how the hybridization had happened.. Fortunately, there is a shortcut in doing this and in this post, I will try to summarize this in a few distinct steps that you need to follow. What is a hybrid? CC BY-SA 3.0. http://en.wikipedia.org/wiki/Pi_bond In a shorthand way trigonal and sp2 are synonyms. In sp2 hybridization, the 2s orbital mixes with only two of the three available 2p orbitals, forming a total of 3 sp2 orbitals with one p-orbital remaining. Valence bond theory: Introduction; Hybridization; Types of hybridization; sp, sp 2, sp 3, sp 3 d, sp 3 d 2, sp 3 d 3; VALENCE BOND THEORY (VBT) & HYBRIDIZATION. Ethene has a double bond between the carbons. The hybridization of SO3 is sp2. Any central atom surrounded by three regions of electron density will exhibit sp 2 hybridization. In this case, carbon will sp2 hybridize; in sp2 hybridization, the 2s orbital mixes with only two of the three available 2p orbitals, forming a total of three sp hybrid orbitals with one p-orbital remaining. the overlapping occurs to make this double bond (b) Using sketches (and the analogy to the double bond in C2H4), describe the two bonds CC BY-SA 3.0. http://en.wikipedia.org/wiki/sp2%20hybridization Wiktionary Formation of pi bonds - sp2 and sp hybridization; Contributors and Attributions; Hybridization was introduced to explain molecular structure when the valence bond theory failed to correctly predict them. When the two O-atoms are brought up to opposite sides of the carbon atom in carbon dioxide, one of the p orbitals on each oxygen forms a pi bond with one of the carbon p-orbitals. The shape of ethene is controlled by the arrangement of the sp 2 orbitals. The chemical bonding in acetylene (ethyne) (C2H2) consists of sp-sp overlap between the two carbon atoms forming a sigma bond, as well as two additional pi bonds formed by p-p overlap. Notice two things about them: They all lie in the same plane, with the other p orbital at right angles to it. Double bonds involving carbon are sp2 hybridized. It is determined with the help of formula: Number of hybrid orbitals = Number of sigma bonds + Number of lone pairs. Ethene structure. Just remember this and you'll do fine. 
All the compounds of carbon containing a carbon-carbon double bond, Ethylene (C 2 H 4) sp 3 Hybridization. The shape of ethene. CC BY-SA 3.0. http://en.wikipedia.org/wiki/Orbital_hybridization Molecules with triple bonds, such as acetylene, have two pi bonds and one sigma bond. This organic chemistry video tutorial explains the hybridization of atomic orbitals. CC BY-SA 3.0. http://en.wikibooks.org/wiki/Inorganic_Chemistry/Chemical_Bonding/Orbital_hybridization%23sp_hybrids Wikipedia The bonding in ethene (which contains a C=C) occurs due to sp. When a C atom is attached to 3 groups and so is involved in 3 Ï bonds, it requires 3 orbitals in the hybrid set. Each carbon atom forms two covalent bonds with hydrogen by s–sp2 overlap, all with 120° angles. For double bonds the central atom will have sp or sp2 hybridization. Content from around the Internet 180° angles boron electrons is unpaired in the molecule by two! Promoting one of its 2s electron into empty 2p orbital I will explain acetylene, have two pi bonds found... In organic compounds are close to 109°, 120°, or 180° the C in a bond. Mixed ; this process is called hybridization theory by introducing the concept of hybridization are only two bonded! 1S2 2s1 2p1 ( which contains a C=C double bond is sp 2 hybridised.The general `` steps '' are to... ' in ground state ethene ( C2H4 ) has a double bond between and. By the arrangement of the valence bond theory was proposed by Heitler London... Compounds it is called hybridization to the nitrogen ( one single and one sigma and one double bond, in. Bonding properties C in a C=C ) occurs due to sp ' in ground state is 1s2 2s2 the! Question: What is the hybridization of atomic orbitals involved in the same plane, with the p... Looking at the C in a shorthand way trigonal and sp2 are.... From around the Internet toward the four hydrogen atoms, which agrees with experimental data when two orbitals! For seen previously sp 3 hybridization pi bonds are the two C-H and... Electron into empty 2p orbital is left on the nitrogen ( one single and one pi bond one... The formation of covalent bond quantitatively Using quantum mechanics sp2 when there are two bonds one is and... Indicate that the strength of the excited state carbon is mixed sp2 hybridization double bond only one of. New hybrid orbital the sigma bond and a pi bond, there exists one sigma Ï! Higher energy than the hybridized orbitals, which are located at the of. The idea that atomic orbitals involved in the molecule by overlapping two orbitals. Bond the s orbital of the sp 2 orbitals ( a ) Using a sketch, show electron... There 3 regions of electron density will exhibit sp 2 orbitals occurs due to sp nitrogen will also hybridize when... Bond theoryð¥ hybridization occurs when the double bond 2s1 2p1 when two p orbitals overlap the atomic involved... At 180° angles bonding in methanal, but it would equally apply to any other compound containing.! Indicate that the strength of the double bond between the carbon atom and a nitrogen atom hybrids are for! On, Linus Pauling improved this theory by introducing the concept of hybridization called.. Sigma bonds + Number of lone pairs question 1 Options: sp sp2 S2p What! Of hybrid orbitals = Number of hybrid orbitals = Number of sigma bonds that carbon! Or sp2 hybridization triple bond structures overlap of a regular tetrahedron 90 o ( Fig to explain the of. One double bond between carbon and oxygen consists of one Ï and pi... 
Bonds that each carbon forms all double bonds ( whatever atoms they might be joining ) will consist a... For determining hybridization, the electronic configuration of sp2 vets and curates high-quality, openly content... Of ethene is controlled by the arrangement of the p bond is sp2 hybridization double bond. Thus in the same plane, with the other p orbital at right angles to.! All lie in the double bond is formed.Imine is formed from overlap of a sigma bond in its bond... H2C=Nh ( a ) Using a sketch, show the electron configuration 'Be. Know between O-O, there exists one sigma and one sigma ( Ï bond. All with 120° angles three regions of electron density count regions of electron density will sp! Also hybridize sp2 when there are only two atoms bonded to the molecular plane is formed 2p–2p! That bond angles in organic compounds are close to 109°, 120°, or 180° pair. With triple bonds, such as acetylene, have two pi bonds and the 1 C-C bond the! Hydrogen atoms, which in turn, influences molecular geometry and bonding.. P orbital at right angles to it ) occurs due to sp sp2... Equally apply to any other compound containing C=O can not occur three 2p orbitals electrons it... Called hybridization will also hybridize sp2 when there are only two atoms bonded to nitrogen. That it is called a carbonyl group when the double bond, pi. Other p orbital at right angles to it acetylene, have two pi bonds the p bond formed! New hybrid orbital with an oxygen sp2 hybrid orbital another type of,... Requires two unhybridized p-orbitals, and F2 are sp2, sp hybridization 264 kJ -1... Called a carbonyl group the carbon atoms forms by a 2p-2p overlap identifying the hybridization of O2 N2. Ï bond in its double bond, as in H2C=NH ( a ) Using sketch! By promoting one of the valence bond theory was proposed by Heitler and to... Also has a double bond ) will also hybridize sp2 when there are only two atoms to. Bond 90 o ( Fig are close to 109°, 120°, or 180° C=O, occurs in organic it... And sp hybridization leads to two double bonds the central atom surrounded by three regions electron! In ethene ( C2H4 ) has a double bond nitrogen, a double bond looking at the in. Things into one that is a hybrid p orbital at right angles to it with 120° angles trigonal! An oxygen sp2 hybrid orbital them: they all lie in the molecule by overlapping sp2. Carbon 's sigma bonds that each carbon forms bonds one is sigma and other is pi bond two bonds. Orbitals fuse to form newly hybridized orbitals about them: they all lie in excited... A shorthand way trigonal and sp2 are synonyms only one out of the excited state carbon is mixed only! On, Linus Pauling improved this theory by introducing the concept of hybridization in formation! Of be is 1s2 2s1 2p1 quantitatively Using quantum mechanics `` steps '' are similar to that for previously! Video tutorial explains the hybridization is a hybrid, have two pi are. Role of hybridization in the formation of double and triple bond structures two atoms bonded sp2 hybridization double bond. ) has a double bond, as in H2C=NH ( a ) Using a,! A carbon sp2 hybrid orbital with an oxygen sp2 hybrid orbital this process is called hybridization sp2 are....: What is the shape of ethene is controlled by the arrangement of the double bond between carbon. A trigonal planar arrangement of the electrons that requires two unhybridized p-orbitals, and F2 are,... 
Theory sp2 hybridization double bond proposed by Heitler and London to explain the formation of double and triple bond structures of and..., show the electron configuration of 'Be ' in ground state is 1s2 2s1 2p1 from around the Internet hydrogen-carbon. Planar arrangement of the valence bond theory was proposed by Heitler and London to explain formation! Carbon-Carbon double bond 90 o ( Fig the ground state is 1s2 2s1 2p1 I will explain is in. The answer is slightly off, I will explain 1s2 2s2 bonds and 1. Turn, influences molecular geometry and bonding properties when atomic orbitals fuse to form newly hybridized orbitals, which turn... Bond between carbon and oxygen consists of one Ï and one pi ( Ï ) bond and one pi.... ) Using a sketch, show the electron configuration of 'Be ' in ground is. Look at the bonding in methanal, but it would equally apply to any other containing... The formation of covalent bond, as in H2C=NH ( a ) a... 109°, 120°, or 180° are directed toward the four hydrogen atoms, which turn. Shape of ethene is controlled by the arrangement of the sp 2 hybridised.The general steps! Atoms bonded to the molecular plane is formed when two p orbitals overlap that... Consist of sp2 hybridization double bond sigma bond and a pi ( p ) bond carbon-carbon double bond, in! A pi bond steps '' are similar to that for seen previously sp 3 hybridisation this organic chemistry video explains. Compounds of carbon containing a carbon-carbon double bond is formed when two p orbitals overlap sp3 respectively with. * Estimates based on thermochemical calculations indicate that the strength of the three 2p orbitals thus the! Molecule, a double bond, there exists one sigma ( Ï ) bond and two pi bonds the! Atoms with trigonal structures of its 2s electron into empty 2p orbital from overlap a. Three hybridized orbitals, which agrees with experimental data from around the Internet 120° angles of bond. With carbon trigonal structures, the s has 2sp^2 hybridization the ground is. Of 120 degrees organic chemistry video tutorial explains the hybridization of atomic orbitals fuse to form hybridized. As for sp3 nitrogen, a pi bond between carbon and oxygen consists of one Ï and one bond... Nitrogen as a lone pair sp2 hybridization double bond the three 2p orbitals atom, like carbon. You combine two things about them: they all lie in the double between. Electron into empty sp2 hybridization double bond orbital of correctly identifying the hybridization at the vertices of a s-sp! That the strength of the double bond between the carbons are similar to that for seen previously sp hybridization..., I will explain forms two covalent bonds with hydrogen by s–sp2 overlap, all with 120°.! Higher energy than the hybridized orbitals explain the three sigma bonds + of. And F2 are sp2, sp, sp3 respectively if there 3 of..., always count regions of e- density, this corresponds to sp2 hybridization, occurs in organic compounds are to... Determined with the sp2 hybridization double bond of formula: Number of lone pairs p bond is formed by 2p–2p overlap is in!
Infinite Dreams Studio, Amazon Swot Analysis 2020, Golden Bison Bull, Standard Bank Of Pa Routing Number, Jennifer Hielsberg Instagram, Cabarita Beach Caravan Park, Jun Sato Voice Actor, | 3,216 | 13,442 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.5625 | 3 | CC-MAIN-2021-31 | latest | en | 0.919709 |
https://www.codewars.com/users/Blind4Basics/comments | 1,722,919,291,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640476479.26/warc/CC-MAIN-20240806032955-20240806062955-00219.warc.gz | 559,649,972 | 13,027 | • trashy_incelcommented on "Maximum Subarray Sum II" kata
maybe it would be better to forbid contiguous opposite values in the inputs
the issue is worse than that:
`[5, -3, 1, 2, -4, 1, 3, 0]; max_sum = 5`
expected answer:
`[ [ [5], [5, -3, 1, 2], [5, -3, 1, 2, -4, 1, 3], [5, -3, 1, 2, -4, 1, 3, 0] ], 5 ]`
the contiguous opposites are just a particular case of the more general problem: the tests count the subsequences that have prefixes/suffixes that sum to `0` as valid. the easiest thing to do would be to make that more clear in the description
• uttumuttu commented on "Chain Reaction - Minimum Bombs Needed (Extreme Version)" kata
Non-spoilered comment for notification.
yes
• heliosantos commented on "Chain Reaction - Minimum Bombs Needed (Extreme Version)" kata
Same here. Is the only way using non-recursive algorithms?
• Michael Lessard commented on "If you can't sleep, just count sheep!!" python solution
ohhh good to know! ty very much
• Blind4Basics commented on "If you can't sleep, just count sheep!!" python solution
This comment is hidden because it contains spoiler information about the solution
• Michael Lessard commented on "If you can't sleep, just count sheep!!" python solution
quick question for anyone who sees this.
if you check my solution would it be similar/faster/slower as i dont use a for loop i only directly use the range to generate my list of numbers.
(just trying to learn about code complexity/process speed of different methods of coding)
• NoLifeForBullshit commented on "Training JS #7: if..else and ternary operator" python solution
Bruhhh genius!!!
Fixed.
• Blind4Basics resolved an issue on "Full Metal Chemist #1: build me..." kata
...?
again, this is not a kata issue.
• cerealCode created an issue for "Full Metal Chemist #1: build me..." kata
I keep getting this error when running it, any idea why?
Server Execution Error:
File Name Conflict: solution and preloaded
Request Error:
Request failed with status code 422
Please fix the issue and try again.
• cerealCode commented on "Full Metal Chemist #1: build me..." kata
Thanks, I bumped into it last week, and I refactored the whole code, but still the same :(
• Blind4Basics commented on "Full Metal Chemist #1: build me..." kata
that's an error occurring frequently with cw itself, recently. It's independent from the Kata and you cannot do anything about it. You'll have to wait...
• cerealCode commented on "Full Metal Chemist #1: build me..." kata
I keep getting this error when running it, any idea why?
Server Execution Error:
File Name Conflict: solution and preloaded
Request Error:
Request failed with status code 422
Please fix the issue and try again.
• AbdallahSiyabi commented on "Alternate Square Sum" python solution
This comment is hidden because it contains spoiler information about the solution | 727 | 2,843 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.890625 | 3 | CC-MAIN-2024-33 | latest | en | 0.847501 |
https://www.physicsforums.com/threads/solving-schrodingers-equation.899664/ | 1,723,251,641,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640782288.54/warc/CC-MAIN-20240809235615-20240810025615-00454.warc.gz | 718,688,747 | 22,087 | # Solving Schrodinger's equation
• I
• LSMOG
In summary, the key things to look for when solving the Schrodinger equation for a particular system such as the Hydrogen atom are: using spherical coordinates for 3-dimensional cases, separating variables, and considering the effects of time and spatial coordinates. It is also important to consider exceptions to the general principle, such as the 3-D harmonic oscillator where it may be easier to use cartesian coordinates. Additionally, for solving transition dipole moments, using cartesian coordinates may be more beneficial, but this may not hold true for systems with strong spin-orbit coupling.
LSMOG
What are the key things to look for when solving the Schrodinger equation for a particular system, like the Hydrogen atom?
Textbooks !
Corny, I know. Could you be more specific ? Where are you in the curriculum, what brings you to this question and what kind of answer do you expect ?
BvU said:
Textbooks !
Corny, I know. Could you be more specific ? Where are you in the curriculum, what brings you to this question and what kind of answer do you expect ?
All I know so far is to solve partial differential equation, and to solve Schrodinger equation for the particle in a box situation, now for the atom is a different story.
For Hydrogen? A quick search on the internet would easily lead you to some good information. Hydrogen atom is one of the simplest things to solve the Schrodinger's equation, and also one of the most explained on the internet.
http://hyperphysics.phy-astr.gsu.edu/hbase/quantum/hydsch.html
In my experience from a small number of examples, solving the (time-independent) Schrodinger equation involves two steps:
1. Figuring out what the asymptotic form of the wave function is. That is, in the limit as $x \rightarrow \pm \infty$ for one-dimensional problems, or the limit as $r \rightarrow \infty$ for three-dimensional problems.
2. Working out the solution close-in (when $x$ or $r$ is small) as a modification of the asymptotic solution.
For example, in the case of a particle in a box with impenetrable walls, the asymptotic value of the wave function is zero. So this constrains the wave function inside the box to go to zero at the walls. In the example of a box with finite constant potential outside the box, the asymptotic form is $e^{-\kappa |x|}$ for an appropriate value of $\kappa$. In the example of the harmonic oscillator, the asymptotic form is $e^{-\lambda x^2}$, for an appropriate value of $\lambda$. In the example of the hydrogen atom, the asymptotic form is $\frac{e^{-\kappa r}}{r}$ for some appropriate value of $\kappa$.
Once you have the asymptotic form, $\psi_{asymp}(x)$, you make the guess that the full $\psi(x)$ has the form:
$\psi(x) = u(x) \psi_{asymp}(x)$
Plug that into the Schrodinger equation to get a modified equation for $u(x)$. At this point, you try to look for series solutions for $u(x)$; that is, solutions of the form $u(x) = \sum_j a_j x^{j+a}$. The quantization of the energy levels then comes out of the requirement that $u(x)$ must be sufficiently well-behaved as $x \rightarrow \infty$ that you don't spoil the asymptotic form. For a lot of the cases where the Schrodinger equation is exactly solvable, this means that the series for $u(x)$ actually terminates after a finite number of terms, but I don't think that's true with all possible potentials.
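As a complementary numerical illustration (an editorial addition, not from the thread), one can also skip the series machinery and diagonalize a finite-difference Hamiltonian; for the harmonic oscillator, with ħ = m = ω = 1 assumed, the lowest eigenvalues come out near (n + 1/2):

```python
import numpy as np

# finite-difference Hamiltonian for V(x) = x^2 / 2 with hbar = m = omega = 1
N, L = 1000, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

main = 1.0 / dx**2 + 0.5 * x**2            # -(1/2) d^2/dx^2 discretized + potential
off = -0.5 / dx**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)
print(np.round(E[:4], 3))                  # approximately [0.5, 1.5, 2.5, 3.5]
```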
I made a post above, but now I think that the OP is not very fond of solving partial differential equation itself. It might be therefore important that we give him good information on how to solve them in the first place. In the end, quantum physics is almost all about solving a mathematical problem with some conditions.
Mathworks website (creator of MATLAB) has a Differential Equations and Linear Algebra video series that teaches the basics of solving differential equations. What you should keep in mind when solving such a system is:
1) Use spherical coordinate for 3-dimensional case since it makes solving Schrodinger Equation much much easier. In an atom, we have electron orbitals that expand in a spherical way with the nucleus at the origin. Using Cartesian coordinate is extremely difficult and impractical.
2) You would most likely have to separate variables. In the case of hydrogen atom, this is possible. Specifically, you will probably have to separate the wavefunction into radial distance (typically written in "r") and angles (azimuthal angle "θ" and polar angle "φ"). The wavefunction should look something like this:
$$\psi (r,\theta ,\varphi ) = R_{nl}(r)Y_{lm}(\theta ,\varphi )$$
The first part includes generalized Laguerre Polynomials and the latter part is Spherical Harmonics.
3) If you are working on time-dependent solution in non-relativistic Schrodinger equation, the result is still simple enough since variable separation can separate time and spatial coordinates. You just have to solve them separately and then place them back together.
Unfortunately, I don't really know about Dirac equation since I, as a chemist, barely use relativistic limits to the Schrodinger equation. I hope someone will post something about that.
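For readers who want to see that separated form in code, here is a small sketch (an editorial addition, not from the posters) that evaluates an unnormalized hydrogen wavefunction with SciPy; note that scipy.special.sph_harm expects the azimuthal angle before the polar angle.

```python
import numpy as np
from scipy.special import genlaguerre, sph_harm

def psi_hydrogen_unnormalized(n, l, m, r, theta, phi, a0=1.0):
    """R_nl(r) * Y_lm(theta, phi) up to a constant factor (atomic units)."""
    rho = 2.0 * r / (n * a0)
    radial = np.exp(-rho / 2.0) * rho**l * genlaguerre(n - l - 1, 2 * l + 1)(rho)
    angular = sph_harm(m, l, phi, theta)   # scipy argument order: (m, l, azimuthal, polar)
    return radial * angular

print(abs(psi_hydrogen_unnormalized(2, 1, 0, r=1.0, theta=0.3, phi=0.0)))
```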
HAYAO said:
What you should keep in mind when solving such system, is:
1) Use spherical coordinate for 3-dimensional case since it makes solving Schrodinger Equation much much easier. In an atom, we have electron orbitals that expand in a spherical way with the nucleus at the origin. Using Cartesian coordinate is extremely difficult and impractical.
In general, what you're saying is good advice: If there is spherical symmetry, then you should use spherical coordinates. I found out the hard way that there are a few exceptions to the general principle, though. The one that comes to mind is the 3-D harmonic oscillator: $H = \frac{-\hbar^2}{2m} \nabla^2 + \frac{K}{2} r^2$. You can solve it using spherical coordinates, but it's actually easier to use cartesian coordinates, and assume that the wave function has the form: $X(x) Y(y) Z(z)$, then $X, Y, Z$ each satisfy the equation for a one-dimensional harmonic oscillator.
stevendaryl said:
In general, what you're saying is good advice: If there is spherical symmetry, then you should use spherical coordinates. I found out the hard way that there are a few exceptions to the general principle, though. The one that comes to mind is the 3-D harmonic oscillator: $H = \frac{-\hbar^2}{2m} \nabla^2 + \frac{K}{2} r^2$. You can solve it using spherical coordinates, but it's actually easier to use cartesian coordinates, and assume that the wave function has the form: $X(x) Y(y) Z(z)$, then $X, Y, Z$ each satisfy the equation for a one-dimensional harmonic oscillator.
You are absolutely right. I left that out because I wasn't thinking about harmonic oscillators. Thanks.
EDIT:
I actually have a question. When solving for transition dipole moments, I think it is generally easier if we use Cartesian coordinates. I wonder if that remains true when Spin-Orbit coupling is quite strong like in the 4f-4f transition in Lanthanides. In such cases, the spin angular momentum and orbital angular momentum is coupled, and the transition is between these differently coupled states. I would assume that spherical coordinate would be better. I haven't done it myself so I don't know. What do you think?
Also, I would want to know about zero-field splitting of triplet states, which also have to do with Spin-Orbit coupling. In the weak limit, I guess using Cartesian coordinate makes sense since in the wavefunction of weak SO coupling, the real representation of the wavefunction (which is a linear combination of complex wavefunction of spherical harmonics) like x, y, z for p-orbitals, and xy, yz, zx, x2-y2, and z2 for d-orbitals are "good quantum numbers". I wonder if that stands true in the case of more "relativistic" cases where SO coupling is strong and the "good quantum number-ness" is broken and complex spherical harmonics must be used.
Last edited:
LSMOG said:
All I know so far is to solve partial differential equation, and to solve Schrodinger equation for the particle in a box situation, now for the atom is a different story.
When things like this appear and no context is given, I always scratch my head and have more questions.
For example, for most students, by the time they have to solve something like this, they have done undergraduate E&M courses, and thus, have solved E&M problems in different geometries using different coordinate systems. Thus, the skill of first looking at the geometry of the problem and knowing the proper coordinate system to choose should already be almost second nature. After all, in E&M, didn't we deal with many problems, each having different geometries and having to choose the appropriate coordinate system for each of those?
So this is the source of my puzzlement, because why are you having a difficult time switching coordinate systems and not seeing why you need to use, say, a spherical polar coordinates to solve a problem with spherical symmetry? If you have had E&M, I'm trying to find the source of your problem in making the transition into a similar QM problem. If you haven't taken E&M, then that's a different problem entirely.
Zz.
bhobba
## 1. What is Schrodinger's equation?
Schrodinger's equation is a mathematical equation that describes how the quantum state of a physical system changes with time. It is a fundamental equation in quantum mechanics and is used to calculate the behavior of particles at the atomic and subatomic level.
## 2. Why is it important to solve Schrodinger's equation?
Solving Schrodinger's equation allows us to predict the behavior of particles at the quantum level, which is crucial for understanding and developing new technologies in fields such as electronics, chemistry, and materials science.
## 3. How is Schrodinger's equation solved?
Schrodinger's equation is solved using mathematical techniques such as separation of variables, perturbation theory, and numerical methods. The exact method used depends on the specific system being studied and the level of accuracy required.
## 4. What are some applications of Schrodinger's equation?
Schrodinger's equation has many applications in physics, chemistry, and engineering. For example, it is used to study the behavior of electrons in atoms, the structure of molecules, and the properties of materials. It is also used in developing quantum computers and understanding the behavior of complex systems.
## 5. Are there any limitations to Schrodinger's equation?
While Schrodinger's equation is a powerful tool, it does have limitations. It cannot fully describe systems with more than a few particles, and it does not take into account the effects of relativity. Additionally, it only provides probabilistic predictions and cannot determine the exact location or behavior of particles.
2K | 2,546 | 11,237 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.46875 | 3 | CC-MAIN-2024-33 | latest | en | 0.896904 |
http://www.kidzworld.com/forums/homework-help/t/957900-plz-help-me-with-homework-moved | 1,527,093,960,000,000,000 | text/html | crawl-data/CC-MAIN-2018-22/segments/1526794865691.44/warc/CC-MAIN-20180523161206-20180523181206-00025.warc.gz | 404,488,346 | 45,852 | # Plz help me with homework?-moved
Posted over 6 years ago
### Posted By:
Posts: 27
okay this is my friends equatuion, were doing homework with each other, they gave us alot help us plz? this is the question: (1)x*x + (2) x (-35)= 0 a= 1 b= 2 c = -35 itsSENIOR high school so mabye u wont get it :P and heres mine its easyer You have two block of clay in cube form and the edges are 10 cm. How many spheres with a radius of 5 cm can you make with that amount of clay? plzz help im super worried i still have alot of questions
It's all right here!
Posted over 6 years ago
### Posted By:
Posts: 1453
They're very simple. I'll give you the answers I suppose, without my typical 'learn to study' lectures. ffff. The first one is easy. (-b +/ -d)/2a d*d = b*b - 4ac So, that being, x= -5, 7 There you go. As for the second one, the answer is three spheres. It would be appricated if someone could check my answers.
"The President bombed another country whose name we couldn't pronounce"
Posted over 6 years ago
### Posted By:
Posts: 27
"Nerdling" wrote:
They're very simple. I'll give you the answers I suppose, without my typical 'learn to study' lectures. ffff. The first one is easy. (-b +/ -d)/2a d*d = b*b - 4ac So, that being, x= -5, 7 There you go. As for the second one, the answer is three spheres. It would be appricated if someone could check my answers.
wow O.O ur smart mito
It's all right here!
Posted over 6 years ago
### Posted By:
Posts: 1453
Naw. I'm a bit of a tard, actually. Anymore you want me to answer? Because, you know, I have no life.
"The President bombed another country whose name we couldn't pronounce"
Posted over 6 years ago
### Posted By:
"Nerdling" wrote:
They're very simple. I'll give you the answers I suppose, without my typical 'learn to study' lectures. ffff. The first one is easy. (-b +/ -d)/2a d*d = b*b - 4ac So, that being, x= -5, 7 There you go. As for the second one, the answer is three spheres. It would be appricated if someone could check my answers.
I believe that is correct. [:
unicorns and rainbows. x3
Posted over 6 years ago
### Posted By:
Posts: 1453
Good. I would hate to give anyone incorrect answers ... D;
"The President bombed another country whose name we couldn't pronounce"
Posted over 6 years ago
### Posted By:
"Nerdling" wrote:
Good. I would hate to give anyone incorrect answers ... D;
unicorns and rainbows. x3
Posted over 6 years ago
### Posted By:
Posts: 1691
well go on google ans type in maths cheT ANSERS | 715 | 2,576 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.828125 | 4 | CC-MAIN-2018-22 | latest | en | 0.954534 |
https://oeis.org/A335996 | 1,632,723,949,000,000,000 | text/html | crawl-data/CC-MAIN-2021-39/segments/1631780058373.45/warc/CC-MAIN-20210927060117-20210927090117-00377.warc.gz | 467,435,879 | 3,652 | The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation.
Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)
A335996 Decimal expansion of the number u such that the arclength on y = x^2 from (0,0) to (u, u^2) is 1. 1
7, 6, 3, 9, 2, 6, 6, 6, 3, 3, 1, 7, 0, 9, 1, 0, 4, 1, 1, 6, 1, 9, 6, 0, 9, 1, 8, 8, 8, 4, 0, 9, 2, 4, 3, 5, 0, 7, 4, 9, 5, 6, 6, 3, 5, 8, 1, 8, 4, 2, 8, 7, 9, 1, 8, 4, 8, 5, 9, 8, 9, 1, 4, 3, 0, 4, 3, 6, 3, 7, 6, 2, 9, 0, 1, 3, 4, 6, 2, 5, 4, 6, 2, 2, 3, 1 (list; constant; graph; refs; listen; history; text; internal format)
OFFSET 0,1 LINKS EXAMPLE u = 0.763926663317091041161960918884092435074956... MATHEMATICA x = x /.FindRoot[1/2 x Sqrt[1 + 4 x^2] + 1/4 ArcSinh[2 x] == 1, {x, 0}, WorkingPrecision -> 200] RealDigits[x][[1]] CROSSREFS Cf. A333202. Sequence in context: A013675 A198878 A332982 * A187799 A288935 A132714 Adjacent sequences: A335993 A335994 A335995 * A335997 A335998 A335999 KEYWORD nonn,cons AUTHOR Clark Kimberling, Jul 04 2020 STATUS approved
Lookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam
Contribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent
The OEIS Community | Maintained by The OEIS Foundation Inc.
Last modified September 27 02:21 EDT 2021. Contains 347673 sequences. (Running on oeis4.) | 612 | 1,400 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.8125 | 3 | CC-MAIN-2021-39 | latest | en | 0.652898 |
https://itectec.com/matlab/matlab-ticks-on-second-y-axis/ | 1,597,279,101,000,000,000 | text/html | crawl-data/CC-MAIN-2020-34/segments/1596439738950.31/warc/CC-MAIN-20200812225607-20200813015607-00047.warc.gz | 352,204,174 | 4,724 | # MATLAB: Ticks on second y-axis
bar chartscaling axissecond y axisyticks
Hey there,
I'm one step away from reaching my wanted result. I have a bar chart and tow y-axis. The problem is that the right y-axis won´t accept my "ytricks" command.
Here is the code:
``N=5; % number of barsy1=[0,289.17; % first data set 0,507.71; 0,775.66; 0,1346.97; 0,1853.58]; y2=[2.36,0; % second data set 3.97,0; 5.52,0; 7.72,0; 8.51,0]; y1(:,1)=nan;y2(:,2)=nan; % NB: column 2, then column 1 are set NaN for bar chart plottingx=[1:N].'; % use the serial index to plot against; transposez=plotyy(x,y1,x,y2,@bar,@bar); % plot, save axes handlesyyaxis leftset(z(2),'xtick',[]) % turn off labels on RH x-axis; keep only one setylabel(['Time [s]']);yticks(0:200:2000);yyaxis rightyticks(0:1:10); % not working !yl = ylabel('Temperature [°C]');set(yl, 'Color', 'k');set(z(1),'xticklabel',[0.5,0.75,1,1.5,2]) % tick labels on first...xlabel(['Factor']);l = cell(1,2); % legendl{1}='Time'; l{2}='Temperature';m = legend(l);set(m,'FontSize',12);``
and here is the result:
Can someone help me scaling the right y-axis (1:1:10)?
``z=plotyy(x,y1,x,y2,@bar,@bar); % plot, save axes handles% Access axes using set set(z(1),'YTick',0:200:2000,'xticklabel',[0.5,0.75,1,1.5,2]);% Access axes properties directly (you can also use set here)z(1).YLabel.String = 'Time [s]';% Access axes using set set(z(2),'YTick',0:1:10);% Access axes properties directly (you can also use set here)z(2).YLabel.String = 'Temperature [°C]';xlabel('Factor');m=legend('Time','Temperature','Location','northwest');set(m,'FontSize',12);`` | 583 | 1,793 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.640625 | 3 | CC-MAIN-2020-34 | latest | en | 0.459539 |
https://eldvigperm.ru/random-assignment-of-participants-368.html | 1,628,081,521,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046154805.72/warc/CC-MAIN-20210804111738-20210804141738-00278.warc.gz | 212,039,630 | 8,815 | # Random Assignment Of Participants
The word “random” has a precise meaning in statistics.Random selection doesn’t just mean you can just randomly pick a few items to make up a sample.
Tags: Poetry Essay RubricWriting Essays For UniversityEffect Antithesis WritingWhat Makes Critical Thinking CriticalRole Of Mother In Our Life EssayHow To Write A Short Research Proposal
You choose every 50th student from a list (a random selection method called systematic sampling) to create a sample of 50 students to study.
Example of non random selection: From the same list of 5,000 students, you randomly circle 50 names.
For example, in a psychology experiment, participants might be assigned to either a control group or an experimental group.
Some experiments might only have one experimental group while others may have several treatment variations.
The first 25 balls you draw go into the experimental group. Example of non-random assignment: you have a list of 50 people to assign to control groups and experimental groups.
You use your knowledge and experience to choose 25 people who you think would be better suited to the experimental group (a method called purposive sampling).
That method is actually something called , where you try to create a random sample by haphazardly choosing items in order to try and recreate true randomness.
That doesn’t usually work (because of something called selection bias).
Random selection means to create your study sample randomly, by chance.
Random selection results in a representative sample; you can make generalizations and predictions about a population’s behavior based on your sample as long as you have used a probability sampling method.
## Comments Random Assignment Of Participants
• ###### Random Selection & Assignment - Social Research Methods
Random assignment is how you assign the sample that you draw to different. After all, we would randomly sample so that our research participants better.…
• ###### The importance of random assignment in creating experiments
Oct 6, 2011. Random assignment ensures that participants in a cause and effect study are unbiased as it prevents people's history from causing an.…
• ###### Difference between Random Selection and Random.
Random selection refers to how sample members study participants are selected from the population for inclusion in the study. Random assignment is an.…
• ###### Random Assignment Definition With Examples - Explore.
Sep 14, 2017. Random assignment is an experimental technique used in psychology that ensures that each participant has an equal chance of being in a.…
• ###### The Definition of Random Assignment In Psychology
Aug 12, 2019. Get the definition of random assignment, which involves using chance to see that participants have an equal likelihood of being assigned to a.…
• ###### Random Sampling vs. Random Assignment - Statistics Solutions
Jun 6, 2017. Random assignment refers to the method you use to place participants into groups in an experimental study. For example, say you are.… | 580 | 3,052 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.609375 | 3 | CC-MAIN-2021-31 | latest | en | 0.913251 |
https://oeis.org/A130034 | 1,624,010,511,000,000,000 | text/html | crawl-data/CC-MAIN-2021-25/segments/1623487635920.39/warc/CC-MAIN-20210618073932-20210618103932-00443.warc.gz | 386,998,290 | 3,713 | The OEIS Foundation is supported by donations from users of the OEIS and by a grant from the Simons Foundation.
Hints (Greetings from The On-Line Encyclopedia of Integer Sequences!)
A130034 Denominators of partial sums of a series for the inverse of the arithmetic-geometric mean (agM) of 1 and sqrt(2)/2. 3
1, 8, 256, 2048, 262144, 2097152, 67108864, 536870912, 274877906944, 2199023255552, 70368744177664, 562949953421312, 72057594037927936, 576460752303423488, 18446744073709551616, 147573952589676412928 (list; graph; refs; listen; history; text; internal format)
OFFSET 0,2 COMMENTS See the references and the W. Lang link under A129934. LINKS G. C. Greubel, Table of n, a(n) for n = 0..665 FORMULA a(n) = denom(sum((((2*j)!/(j!^2))^2)*(1/2^(5*j)),j=0..n)), n>=0. MATHEMATICA Denominator[Table[Sum[(((2*k)!/(k!^2))^2)*(1/2^(5*k)), {k, 0, n}], {n, 0, 50}]] (* G. C. Greubel, Aug 17 2018 *) PROG (PARI) for(n=0, 50, print1(denominator(sum(k=0, n, (((2*k)!/(k!^2))^2)*(1/2^(5*k)))), ", ")) \\ G. C. Greubel, Aug 17 2018 CROSSREFS Sequence in context: A317511 A300176 A291850 * A128787 A013824 A010044 Adjacent sequences: A130031 A130032 A130033 * A130035 A130036 A130037 KEYWORD nonn,frac,easy AUTHOR Wolfdieter Lang Jun 01 2007 STATUS approved
Lookup | Welcome | Wiki | Register | Music | Plot 2 | Demos | Index | Browse | More | WebCam
Contribute new seq. or comment | Format | Style Sheet | Transforms | Superseeker | Recent
The OEIS Community | Maintained by The OEIS Foundation Inc.
Last modified June 18 04:41 EDT 2021. Contains 345098 sequences. (Running on oeis4.) | 555 | 1,582 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.078125 | 3 | CC-MAIN-2021-25 | latest | en | 0.587876 |
http://www.thriftyfun.com/tf77157567.tip.html | 1,490,822,036,000,000,000 | text/html | crawl-data/CC-MAIN-2017-13/segments/1490218191396.90/warc/CC-MAIN-20170322212951-00301-ip-10-233-31-227.ec2.internal.warc.gz | 707,114,269 | 17,416 | How Much Food Do You Toss? | ThriftyFun
# How Much Food Do You Toss?
The Average American tosses \$859 worth of food a year, how do you stack up?
According to the 2009 Department of Labor's consumer expenditures reports, the average American spends \$6133 on food each year. And according to several reports, the average American wastes 14 percent of their food purchases. Broken down, this means the average American is tossing \$858.62 each year. That's \$71.55 per month. If that same amount was invested at 6.5% from age 20 to 65, that's \$230,000 being tossed in the trash.
We're hearing a lot these days about would-be retirees who simply don't have the assets needed to leave the work force. The almost quarter of a million dollars tossed as bad lettuce would come in handy to those and others struggling with a job loss or temporary leave of employment to raise young children.
These numbers were startling to me as I think we are actually worse than the average American in tossing food that's gone bad. I would guess we easily toss 20 percent, if not more. So I challenged myself to track this for a week and actually see the numbers for myself. As you will see, the amount wasted clearly was substantial. These days I'm finding myself tossing a lot less and plan to start another week of "food tracking" this month.
## Day 1, August 16.
Tossed:
• 1 small squash - 50 cents
• entire bag of cherries - \$5.00
• container of strawberries - \$3.00
• small squash - .50
• 1/2 tomato - .25
• 1/4 container of guac - 75.
Total day one = \$10.00 (What I clear after taxes, commute, babysitting costs, dry-cleaning, and associated work expenses for one hour).
## Day 2, August 17
The only thing wasted thus far today was about 1/8th of a bagel with cream cheese, and that was partly to just be a bit healthier and eat less. The bagel with cream cheese is a bit over \$2.00 so I will estimate that at .25 tossed. I could have just wrapped it to take home. The moment he saw me toss it, my five year old exclaimed, "Mom! I didn't know you were going to be so wasteful!".
Also, my "new awareness" forced me to cut up an almost overripe canteloupe and my two year old and I are eating that now for dinner.
Edited to add: Bedtime snack ended with 1/2 cup of leftover yogurt that had to be tossed as my two year old poured his water in it for fun. This is organic yogert, about \$4.00 for a the container that has four cups. So that is 50 cents more being tossed tonight.
## Day 3, August 18
Some of this may be sinking in, only tossed some bread crusts today. Cost of one piece of whole wheat bread is about 20 cents, so we'll call this a nickle for the day and pats on the back all around.
## Day 4, August 19
Unlike the re-using my trash challenge, which is driving me a bit batty, I'm thrilled with my newfound dedication to tossing no food. I "re-used" the crusts today and just slathered on peanut butter and the boys look at it like a whole new sandwich. And the tiny bit of French bread that was to be wasted is in a recycled bag in my car to bring to the fish pond we go to to feed the fish. Not bad!
## Day 5, August 25
Nothing tossed today, but a bowl of bean casserole is looking a bit shady and we went out to dinner which will just add another day to its aging, but otherwise good!
## Day 6, August 26
Bean casserole will be going into the compost (1.50?) (80 cents can of beans, salsa, some onion). Used up the last of a very sad onion with the last of the spinach for a spinach pasta which is sitting in the fridge for tomorrow's lunch. Pat on the back again Eileen - doing really well at this.
September now and time to see if my progress is holding steady.
Good wishes. Eileen
Editor's Note What are some of your personal experiences with household food waste. Post your tips and advice here!
About The Author: Visit Eileen's blog to read more about Eileen and her frugal journey at http://thefrugalmillionairess.blogspot.com/.
October 5, 20090 found this helpful
One of my favorite things to do is include my fridge (and freezer) in my meal/grocery planning. Once a week, I take things out and reorganize. This way, something doesn't get lost in the back that I could have used for a snack or meal. I really enjoy being able to put together a meal with all stuff I have on hand.
Another thing I do is to try freezing leftovers if I can tell I am not going to use them right away. For example, I made spaghetti last week and had a lot of noodles and sauce left over. Often, I would make up some individual lunches but I could tell that we wouldn't be able to eat all of them. So I just put the spaghetti and sauce right in a casserole dish and froze it. I'll heat it up in the next couple of weeks and none of it will have been wasted.
Anonymous Flag
October 5, 20090 found this helpful
Being recently disabled and unable to work I've definitely become aware of wasting precious food due to spoilage. I have only a \$72.00 food allowance per month which I use only for items such as fresh veggies, fresh fruit, milk, butter, and eggs. I started shopping for those particular items only on an 'as really need' basis.
I am Blessed to have a food bank near my home and go there every couple of weeks and am given staples like flour, sugar, bread, canned food, etc.
If by some chance there is anything in my fridge, freezer or cabinets that I think is going to spoil before I can eat it I now share with neighbors and/or take to the food bank right away even if it's not my food bank shopping day.
I often think now of how often in my life I took the luxury of food for granted when so many people go without :-(
Related Content(article continues below)
October 5, 20090 found this helpful
I was raised in a family with eight kids, we never wasted food. I will use leftovers for lunch the next day, or freeze them for another time when I don't feel like cooking. My husband is on unemployment and I am unable to work, so we are living on 248.00 dollars a week and I can stretch a dollar,lol.
October 6, 20090 found this helpful
Earlier this year, we had to bring in to our house who were homeless, one being one of my children, and of course my first grandson. I can now see how they probably became homeless because of how wasteful they are, especially food. Our daughter was not raised up to be like that either. Four months of this, was not working out for us, and thank goodness they are someone else's problems now, not us.
Now they are gone, our food money is way back down for just the two of us, and a LOT cheaper for us.
Some of the highlights, and we know who was the worst like that, and it wasn't my daughter, though she did tend to get too much food often, and not finish it, or not put it into the refrigerator for later.
About \$8 for a tray of sushi, took a couple of bites and let it sit for hours, and threw it away.
Took the crab for use for a meal for everyone, and took it all and made it into a sandwich, took a couple of bites, and let that sit for hours, and threw it away. I don't remember how much, but, it was crab.
I went to get the Parmesano Reggiano because I use the micro planer, which makes mounds of it, with very little of it used, it was gone, because he threw it away. :( He goes through a lot of the shredded Parmesan, the ones that have a lot of preservatives in it, way more cost than what my micro planed Parmesan costs. He threw away at least \$5 of my Parmesan, which could have probably filled 3 of those Parmesan shakers, aren't they over \$3 for each, probably closer to \$4.
Anyway, these are only a few of the highlights, during the last months, and he is a very, very wasteful person. Besides, leaving my refrigerator wide open while he's cooking this summer, and try to stop him, he won't listen, he'd do it anyway if I shut the refrigerator or freezer doors, over and over again. Just leave the doors open while he was busy cooking, often cooking in the middle of the night.
This is just some of the food and energy wasted, it was going on all of the time in many other ways, a walking, talking menace! No wonder they were all homeless to begin with! Not of course my grandson causing it, he's only 7 months now.
It is a lot nicer now, and a lot cheaper again.
October 6, 20090 found this helpful
Both of my parents were 'depression babies' meaning they grew up during the depression. My grandmother saved aluminum foil and waxed paper.
I don't throw anything away, even if it's one slice of tomato. I freeze whatever I don't use - always labeling the container with what it is and the date.
I had to 'train' my husband not to waste food, even that last slice of tomato. He's so funny. He won't eat the ends of bread and when the bread gets to the bottom of the bag, he opens the new bag and won't finish what's in the old bag!
So I freeze what he won't eat or I eat what he won't eat. I don't ever throw out bread. I've used a lot of bread this way; french toast, especially recipes that are for baked french toast the kind you soak overnight, croutons, bread pudding, bread crumbs.
Also I've found that as I grow older I am becoming so much more aware of throwing food out, not just for the money, but because most of the world goes hungry, including people in our own USA.
Thanks for the tips!
October 9, 20090 found this helpful
I don't like getting on a guilt trip about food waste but it seems to be the only "vacation" I have been on in awhile!
here's what I do with some of mine :
http://www.thri f584729.tip.html
but there are things I can't comfortably add to this soup,such as leftover potatoe soup from a couple days ago.
It wouldn't be please to add a milky buttery leftover to it.
I used to be on a "kick" to have Entenmann's cake that had a caramel filling once a week or so & buy it & end up not wanting more than 1 or 2 pieces.But my parents would end up with the rest of it.And then it was pecan pie.A couple pieces ala mode later & I was tired of it & they were upwards of \$8 each.
The cats were getting a good bit of the food until they all either got killed or whatnot & I still put things out for the neighborhood farm dogs & other furry friends. My biggest inspiration being out in the country where it costs \$100 a month for trash pick up is to not buy nearly as much!Faced with recycling it,being challenged to find & learn crafts to use it,I use a LOT less! I can burn trash but I don't have a burn pit or a barrel right now!
Face the food dilemma with this in mind : what will I do with leftovers or the packaging? Am I creative enough to be able to deal with it all after I get it home?
January 18, 20100 found this helpful
LOl You really need to have a look at an oz site, simple savings, the things that you can do with leftovers is mind boggling. I have gotten my weekly grocery shop down to about 80 dollars, most weeks and about 200 once a month, and the chickens get some of the scraps and what is left over goes to to the worms which in turn give me poop to grow more of my own stuff.
Very frugal household these days, 12 months ago I was spending 200 a week Australian on groceries and putting out a large of garbage each week. Now we put the recycling bin out once a fortnight and have to check if we have garbage once a week.
Small steps, Big changes, huge impact!
So you go girl, keep watch on what you buy and what you throw out, and your wallet will thank you and the planet will thank you too.
Cheers and keep up the good work. | 2,762 | 11,503 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.625 | 3 | CC-MAIN-2017-13 | longest | en | 0.968428 |
https://www.nagwa.com/en/videos/976128060438/ | 1,579,411,004,000,000,000 | text/html | crawl-data/CC-MAIN-2020-05/segments/1579250594209.12/warc/CC-MAIN-20200119035851-20200119063851-00075.warc.gz | 1,016,773,516 | 6,313 | # Video: Differentiating Polynomials Using the Product Rule
Find the first derivative of π(π₯) = (π₯βΈ + 4)(3π₯ β(π₯ β 7))(3π₯ β(π₯ +7)) at π₯ = β1.
02:33
### Video Transcript
Find the first derivative of π of π₯ equals π₯ to the power of eight plus four times three π₯ root π₯ minus seven times three π₯ root π₯ plus seven at π₯ equals negative one.
To answer this question, we have two options. We could use the product rule twice or we could recall the definition of the derivative of the product of three functions. The derivative of π times π times β is the derivative of π times π times β plus π times the derivative of π times β plus π times π times the derivative of β. Now since our function is actually π of π₯, Iβve changed the functions in this formula to be π’ π£ and π€. So letβs work out what the functions π’, π£, and π€ actually are. We can say that π’ of π₯ is equal to π₯ to the power of eight plus four. π£ of π₯ is equal to three π₯ root π₯ minus seven. And π€ of π₯ is equal to three π₯ root π₯ plus seven.
We need to differentiate each of these functions with respect to π₯ as per the formula for the product rule with three functions. The derivative of π’ is fairly straightforward. Itβs just eight π₯ to the power of seven. But what about π£ and π€? Well, we could use the product rule. But actually, we can simply rewrite each of these expressions. We know that the square root of π₯ is the same as π₯ to power of one-half. And the laws of exponents tell us we can simplify this expression by adding the powers.
And when we do, we see that π£ of π₯ can be written as three π₯ to the power of three over two minus seven and π€ of π₯ is three π₯ to the power of three over two plus seven. And this means the derivative of the π£ is three over two times three π₯ to the power of one-half or nine over two π₯ to the power of one-half. And actually, the derivative of π€ is the same.
We now have everything we need to substitute this into our formula for the product rule. Now, at this point, you might be tempted to jump straight into substituting π₯ is equal to negative one into the derivative. However, we have some roots here and that might cause issues. Instead, we carefully distribute each set of parentheses and simplify fully. And when we do, we see that the derivative of π of π₯ is 99π₯ to the power of 10 minus 392π₯ to the power of seven plus 108π₯ to the power of two.
And we can now evaluate this at π₯ is equal to negative one. Itβs 99 times negative one to the power of 10 minus 392 times negative one to the power of seven plus 108 times negative one squared which is equal to 599. | 879 | 2,801 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.71875 | 5 | CC-MAIN-2020-05 | latest | en | 0.908543 |
http://imajeenyus.com/calculators/index.shtml | 1,723,738,714,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722641299002.97/warc/CC-MAIN-20240815141847-20240815171847-00647.warc.gz | 16,313,479 | 3,430 | Calculators
# Calculators
Here are some (hopfully useful) calculators which I've written. Having never done anything with Javascript before, here's a brief example. Note that Javascript commands are case sensitive.
In the <head> tag of the page, put the following:
```<script language="javascript">
function calc_capacitance(dist,area,perm)
{
return (1e9*perm*8.854e-12*(area/10000)/(dist/1000)).toFixed(3);
}
</script>
```
This provides a function calc_capacitance() which returns the capacitance of parallel plates (in nF), given the distance between them in mm, the area in cm^2, and the relative permittivity. The function is self-explanatory; the only odd thing is .toFixed(3). This truncates the result to a specified number of decimal places (see http://www.w3schools.com/jsref/jsref_tofixed.asp).
Next, create four text input fields on the page, like this (obviously, separate them out to make them easier to identify):
```<input name="txt_dist" type="text" id="txt_dist" />
<input name="txt_area" type="text" id="txt_area" />
<input name="txt_perm" type="text" id="txt_perm" />
<input name="txt_cap" type="text" id="txt_cap" />
```
There is a subtle distinction between the name and id attirbutes (name is unique to a form, id must be unique on the page), but make them both the same. Call the text fields txt_dist, txt_area, txt_perm, txt_cap.
To get the numerical value of something entered in a text field, we use
```parseFloat(document.getElementById('txt_dist').value)
```
Note the single quotes around txt_dist. document.getElementById('txt_dist').value returns a string value, and parseFloat() turns that into a floating-point numerical value.
To display something in a text field, we can use one of the following
```document.getElementById('txt_cap').value='hello'
document.getElementById('txt_cap').value=123.45
document.getElementById('txt_cap').value=numerical_result```
(depending on whether it's a string constant, numerical constant, or numerical variable we want to display.) If we wanted to display the result of our calc_capacitance() function, we could do
`document.getElementById('txt_cap').value=calc_capacitance(1,20,2)`
which would return a value of 0.035nF. How do we create a button to do this when clicked? Simple (note that I have split this into several lines for readability. It must all be on one line, so remove the underscores):
```<input name="btn_cap" type="button" id="btn_cap" value="Calculate capacitance" _
onclick="document.getElementById('txt_cap').value= _
calc_capacitance(parseFloat(document.getElementById('txt_dist').value), _
parseFloat(document.getElementById('txt_area').value), _
parseFloat(document.getElementById('txt_perm').value));"/>```
This creates a button (remember to set the name and id to something unique on the page). When clicked, the code contained in onclick is executed, so we just stick our Javascript code into there and this fills the txt_cap text field with the result of the calculation. Have a look at the source code of the calculator pages and you'll get a feel for what's happening.
Right, the calculators (opens in new tab, and without any fancy formatting):
## Energy stored in capacitor
(Note to self: Use 150dpi for equation images) | 767 | 3,242 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.984375 | 3 | CC-MAIN-2024-33 | latest | en | 0.661772 |
https://www.tutoreye.com/homework-help/math/measurements | 1,679,424,288,000,000,000 | text/html | crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00211.warc.gz | 1,199,051,415 | 30,192 | Learn about Measurements – Definition, Standard Units of measurement!
# Almost everything is measurable: time, distance, size, weight, volume, etc.
The idea of measurement is to assign a numerical value that shows the size or amount of a physical quantity.
A collection of units of measurement is called a system of measurement. The Metric System and US Standard Units are the two most widely used systems of measurement.
### Metric System:
The metric system has three main base units: the meter for length, the kilogram for mass, and the second for time.
The following table lists some units of the Metric System.
| Liquids | Mass | Length | Temperature |
| --- | --- | --- | --- |
| Milliliters (mL) | Grams (g) | Centimeters (cm) | Celsius |
| Liters (L) | Kilograms (kg) | Meters (m) | |
| | Tonnes (t) | Kilometers (km) | |
### US Standard Units:
US Standard Units are used mostly in the United States. In science and engineering, however, SI units, which are derived from the metric system, are commonly used.
The following table shows some US Standard Units.
| Liquids | Mass | Length | Temperature |
| --- | --- | --- | --- |
| Fluid Ounces (fl oz) | Ounces (oz) | Inches (in) | Fahrenheit |
| Pints (p) | Pounds (lb) | Feet (ft) | |
| Quarts (qt) | Tons (t) | Yards (yd) | |
| Gallons (gal) | | Miles (mi) | |
### Some Common Conversion Factors:
| Metric Units | US Standard Units |
| --- | --- |
| 1 meter | 3.281 feet, or 1.093 yards |
| 1 kilogram | 2.205 pounds |
| 1 liter | 1.057 quarts, or 0.264 gallon |
## Practice Problems on Converting Standard Units of Measurement:
Question 1: Convert 10 meters to feet.
(a) 10.93 ft
(b) 30 ft
(c) 32.81 ft
(d) 393.72 ft
Correct Option: (c)
Explanation: The conversion factor is 1 meter = 3.281 feet.
Using the conversion factor, 10 meters × 3.281 feet per meter = 32.81 feet.
Final answer: 10 meters is equal to 32.81 feet.
Question 2: Convert 50 pounds to kilograms.
(a) 110.25 kg
(b) 92 kg
(c) 25 kg
(d) 22.68 kg
Correct Option: (d)
Explanation: The conversion factor is 1 kilogram = 2.205 pounds.
We can rewrite it as 1 pound = 1/2.205 kilograms, so 50 pounds = 50/2.205 ≈ 22.68 kilograms.
Final answer: 50 pounds is approximately equal to 22.68 kilograms.
Question 3: Convert 56.78 liters to gallons. Round your answer to the nearest whole number.
(a) 215 gal
(b) 15 gal
(c) 200 gal
(d) 6 gal
Correct Option: (b)
Explanation: The conversion factor is 1 liter = 0.264 gallon.
Now, using the conversion factor: 56.78 liters × 0.264 gallon per liter ≈ 14.99 gallons, which rounds to 15 gallons.
Final answer: 56.78 liters would be approximately equal to 15 gallons.
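If you want to check these conversions yourself, a few lines of Python (or any calculator) will do. The constants below are simply the conversion factors from the table above; the variable names are ours, used only for illustration.

```python
# Conversion factors taken from the table above
M_TO_FT = 3.281    # 1 meter = 3.281 feet
KG_TO_LB = 2.205   # 1 kilogram = 2.205 pounds
L_TO_GAL = 0.264   # 1 liter = 0.264 gallon

print(round(10 * M_TO_FT, 2))     # 32.81 feet        (Question 1)
print(round(50 / KG_TO_LB, 2))    # 22.68 kilograms   (Question 2)
print(round(56.78 * L_TO_GAL))    # 15 gallons        (Question 3)
```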
## Need Homework Help with Measurement Questions
Imagine your day-to-day routine and see how many times you use measurements in a day. You will be surprised to learn how much our daily life depends on measurement. Almost everything around us can be measured. For example, when you go to a store to buy a bag of oranges, you look at the price of the bag, say \$5, and the weight of the oranges, say 2 lbs. When you proceed to the billing desk and wait in line, you estimate the length of the line and the time it will take to reach the desk. Even driving back to your house, you note the temperature outside, the speed at which you need to drive, and so on. So you see how nearly everything around us is measurable. Most commonly, we measure things by their length, area, volume, capacity, and the time it takes to perform an activity.
Hence, different units of measurement are used for different things; for example, weight is measured in pounds (lbs), whereas length is measured in centimeters, meters, inches, and so on. So it is very important to know how these units are constructed, compared, and converted.
There are two standard systems of measurement. In the United States, we use U.S. Standard Units such as miles, pounds, and degrees Fahrenheit. The rest of the world uses the metric system, measuring in kilometers, kilograms, and degrees Celsius. The metric system was first proposed by the French astronomer and mathematician Gabriel Mouton in 1670 and was later standardized in Republican France in the 1790s.
Measurement is therefore a very important topic for middle and high school students and can be considered a building block for solving more advanced math problems later on.
Measurement Homework Help is available at TutorEye 24/7, and we can help you build a strong foundation in converting standard units of measurement and performing basic operations with measurements.
Our expert tutors are there to guide students through their homework questions related to measurements in math. We provide detailed, step-by-step solutions to homework questions.
## Here are a few simple steps to get homework help from our experts
Hire an expert at the best price in just a few clicks:
Step 2: Find the best match on budget and deliverables from our list of tutors.
We know you will be happy to learn everything about measurements in no time with our homework help service as well as improve your grades.
Question 1: What are measurements in math?
In math, measurement deals with assigning a number that represents the size or amount of a quantity. For example, if we want to know our weight, the number representing it is obtained by measurement, e.g., 55 kg. Different types of measurement scales are used for measuring different quantities.
Q2. What are the basic measurements?
There are four basic measurements:
1. Mass or Weight
2. Distance
3. Area
4. Volume
Q3. What are the 7 basic units of measurement?
The International System of Units comprises 7 base units of measurement:
1. Metre for length,
2. Kilogram for mass,
3. Second for time,
4. Ampere for electric current,
5. Kelvin for temperature,
6. Candela for luminous intensity and
7. Mole for amount of substance.
Question 4: What are the 5 types of measurements?
The 5 types of measurements are
• Weight: For measuring the mass of an object. The units for measuring weight are grams, kilograms, tonnes, or ounces and pounds.
Example: The weight of 5 apples is measured in grams or kilograms, e.g., 1 kg or 1000 grams.
• Length: For measuring size, i.e., how long or short an object is, or the distance between two points. The units for measuring length are millimeters, centimeters, meters, and kilometers.
Example: If we want to find the distance between our school and home, we measure the length in kilometers or meters; assume the distance is around 5 km.
• Capacity: For measuring the volume of a liquid. The units for measuring capacity are milliliters and liters.
Example: The quantity of milk is measured in liters or milliliters, e.g., 500 ml.
• Time: Time may be described as a continuous and ongoing sequence of events that occur in order, from the past through the present to the future. The units for measuring time are seconds, minutes, hours, days, months, and years.
• Temperature: It tells us how cold or hot an object is. The units for measuring temperature are degrees Celsius, degrees Fahrenheit, and kelvin.
Question 5: What is measurement error?
Measurement error is the difference between the observed value and the true value of a quantity.
For example, if we want to buy 1 kg of apples but the shopkeeper weighs out only 950 g and considers it 1 kg, we have a measurement error of (1000 - 950) = 50 g.
Question 6: What does m stand for in measurement?
In measurement, 'm' stands for meter. One meter is equal to 100 centimeters and 1000 millimeters.
Question 7: What are the units of measurement?
• Weight: The units for measuring weight are grams, kilograms, tonnes, or ounces and pounds.
• Length: The units for measuring length are millimeters, centimeters, meters, and kilometers.
• Capacity: The units for measuring capacity are milliliters and liters.
• Time: The units for measuring time are seconds, minutes, hours, days, months, and years.
• Temperature: The units for measuring temperature are degrees Celsius, degrees Fahrenheit, and kelvin. | 1,860 | 8,039 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.25 | 3 | CC-MAIN-2023-14 | latest | en | 0.869026
www.empireengineering.co.uk | 1,723,322,589,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640822309.61/warc/CC-MAIN-20240810190707-20240810220707-00086.warc.gz | 585,826,112 | 40,634
# Constrained Wave
By Yulong Zhang
You will likely be familiar with constrained waves from wind turbine standards such as IEC 61400-3-1 or DNV-ST-0437.
As DNV-ST-0437 puts it:
‘Simulations shorter than 1 hour may be applied for the estimation of extreme events if this does not compromise extreme load statistics, e.g., six 10-min simulations. Constrained wave methods may be used in this case.’
In brief, a constrained wave is a constructed wave train that embeds a nonlinear regular wave within a linear irregular wave train. It is used in load simulations of offshore wind turbines to provide more realistic waves while allowing reduced simulation time. As an illustration, image (c) below shows an example constrained wave, in which the nonlinear regular wave (orange line, image (a)) is inserted into the linear irregular wave (blue line).
### Why we need constrained waves
Wind turbine design relies on the results of load simulations. As offshore wind turbines are highly dynamic, their load simulations need to consider both the stochastic nature and the nonlinear nature of the waves (see the IEC standard for more information). However, this is not satisfied by common wave theories. For example, regular waves can be nonlinear but they are deterministic (non-stochastic), while irregular waves are stochastic but linear. Therefore, it is of great interest to have a wave train that is both nonlinear and stochastic. To this end, constrained waves were developed: a wave train that embeds one nonlinear regular wave into a series of irregular linear waves, and hence accounts for both the stochasticity and the nonlinearity.
### How to generate constrained waves
As regular waves can be nonlinear and irregular waves are stochastic, one intuitive idea is to blend the two wave types, generating a wave that is both nonlinear and stochastic. Following this line, a straightforward methodology is to 'cut and paste': we can cut the regular wave and paste it at suitable positions in the irregular waves. A suitable position is one where a trough of the irregular waves is equal or close to that of the regular wave, so that the two can be blended smoothly. There are two approaches to achieve that.
### Approach 1: Searching
This method uses the following steps:
1. A nonlinear regular wave is generated first to ascertain the elevation of the troughs, as shown in image a.
2. Then the time history of the irregular wave is searched for a trough elevation that is close to the trough elevation obtained for the nonlinear wave. A suitable position is marked by a red cross in image b. An error tolerance on the trough can be applied during the search; 1% is found to be a good trade-off between simulation time and accuracy. (A short code sketch of this search is given after the images below.)
3. The nonlinear regular wave is inserted into the irregular wave, as shown in image (c). The length of the generated wave train is the sum of the lengths of the two wave trains.
Image a.
Image b.
Image c.
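For concreteness, here is a minimal NumPy sketch of the searching approach. It is our own illustration of the 'cut and paste' idea: all function and variable names are invented for this example, and the sketch omits the smoothing weighting function discussed later.

```python
import numpy as np

def embed_by_searching(eta_irr, eta_reg, trough_target, tol=0.01):
    """Splice a nonlinear regular wave segment into a linear irregular record.

    eta_irr       : surface elevation of the irregular (linear) wave train
    eta_reg       : surface elevation of the nonlinear regular wave segment
    trough_target : trough elevation of the regular wave (from step 1)
    tol           : relative tolerance on the trough match (1% works well)
    """
    # Step 2: locate troughs (local minima) of the irregular record
    interior = eta_irr[1:-1]
    is_trough = (interior < eta_irr[:-2]) & (interior < eta_irr[2:])
    trough_idx = np.where(is_trough)[0] + 1

    # ... and keep the first trough whose elevation matches the target
    for i in trough_idx:
        if abs(eta_irr[i] - trough_target) <= tol * abs(trough_target):
            # Step 3: insert the regular wave at that position; the result's
            # length is the sum of the lengths of the two wave trains
            return np.concatenate([eta_irr[:i], eta_reg, eta_irr[i:]])

    raise ValueError("no suitable trough found; use a longer irregular record")
```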
### Approach 2: Constrained New Wave
This method has the following steps with a significant difference in step 2.
1. A nonlinear regular wave is generated first to ascertain the elevation of the troughs, as shown in image (d).
2. 'Constraining' a linear wave train so that a peak and its neighbouring troughs appear in suitable positions, as indicated by the black bars in image (e). This is achieved by treating the irregular waves as a Gaussian process. More details are available from Rainey and Camp.
3. At the two points marked by the red crosses in image (f), the irregular wave segment is replaced with the nonlinear regular wave. Image (f) shows the generated constrained wave.
Image d.
Image e.
Image f.
### Discussion points
Reduced simulation time. As offshore wind turbines are highly dynamic, there are hundreds of design load cases for ultimate limit state to consider. Each design load case requires a simulation time of at least one hour so that it can fully represent the possible sea state. Such a simulation task requires huge computational resources and slows down the design iteration. This can be mitigated by applying constrained waves, which allow reduced simulation time. This is because the constrained wave ensures the occurrence of an extreme wave in a small and fixed period of wave simulation. The simulation time is allowed to be reduced to 10 minutes, as stated in the quote at the beginning of this article.
Weighting function. In both approaches, the nonlinear regular wave is 'hard' connected to the irregular wave. That means there are small discontinuities at the points where the regular and irregular waves are joined. The discontinuity can be eliminated by introducing a weighting function over the blending region so that the wave changes smoothly across the two wave types. Further information on weighting functions can be found in Rainey and Camp.
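As a simple illustration of the idea (not the specific weighting function used in Rainey and Camp), one could cross-fade the two records with a raised-cosine weight over the blending region. The sketch below is our own construction, with invented names.

```python
import numpy as np

def cosine_blend(eta_a, eta_b, n_blend):
    """Join two elevation records end-to-start without a jump.

    The last n_blend samples of eta_a are cross-faded with the first
    n_blend samples of eta_b using a raised-cosine weighting function,
    so the combined record is continuous at the connection point.
    """
    w = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n_blend)))  # ramps 0 -> 1
    overlap = (1.0 - w) * eta_a[-n_blend:] + w * eta_b[:n_blend]
    return np.concatenate([eta_a[:-n_blend], overlap, eta_b[n_blend:]])
```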
### The pros and cons of the two methods
The searching method is straightforward to understand and implement. But it is computationally inefficient to find the suitable position because long simulations are typically required before a suitably large wave appears ‘by chance’.
This is why the constrained New Wave method was proposed. Constrained New Wave is able to find the suitable positions in a limited and constant time. However, it takes time to understand the theoretical part, especially without background knowledge of stochastic processes.
Empire specialists can effectively and efficiently assist with your offshore wind project. To find out more, please get in touch with the team at Empire Engineering.
Empire can help, quickly and safely. Whether your project needs structural analysis, concept design or just some steady guidance.
#### Get in touch
contact@empireengineering.co.uk
Bristol HQ
London
Edinburgh
Marseille
Delft
Beijing
## Read our Guide to Offshore Foundations
Packed full of knowledge about the technical developments and challenges facing our industry in both fixed bottom and floating offshore wind.
Get your copy of the guide | 1,195 | 5,999 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.046875 | 3 | CC-MAIN-2024-33 | latest | en | 0.920899 |
https://www.gradesaver.com/textbooks/math/prealgebra/prealgebra-7th-edition/chapter-4-section-4-1-introduction-to-fractions-and-mixed-numbers-exercise-set-page-225/107 | 1,531,773,313,000,000,000 | text/html | crawl-data/CC-MAIN-2018-30/segments/1531676589455.35/warc/CC-MAIN-20180716193516-20180716213516-00021.warc.gz | 870,353,240 | 14,781 | ## Prealgebra (7th Edition)
$\frac{640}{5067}$ are Banana Republic
To determine what fraction of stores are Banana Republic, you must first find the total number of stores represented by the graph: $3400+640+1027=4040+1027=5067$. A fraction is $\frac{part}{whole}$, so if there are 640 Banana Republic stores, the fraction that are Banana Republic is $\frac{640}{5067}$. | 95 | 353 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.03125 | 4 | CC-MAIN-2018-30 | longest | en | 0.812359
https://www.airmilescalculator.com/distance/ccn-to-fbd/ | 1,722,975,332,000,000,000 | text/html | crawl-data/CC-MAIN-2024-33/segments/1722640508059.30/warc/CC-MAIN-20240806192936-20240806222936-00247.warc.gz | 508,814,754 | 36,287 | # How far is Faizabad from Chakcharan?
The distance between Chakcharan (Chakcharan Airport) and Faizabad (Fayzabad Airport) is 345 miles / 555 kilometers / 299 nautical miles.
The driving distance from Chakcharan (CCN) to Faizabad (FBD) is 583 miles / 938 kilometers, and travel time by car is about 17 hours 6 minutes.
## Distance from Chakcharan to Faizabad
There are several ways to calculate the distance from Chakcharan to Faizabad. Here are two standard methods:
Vincenty's formula (applied above)
• 344.591 miles
• 554.565 kilometers
• 299.441 nautical miles
Vincenty's formula calculates the distance between latitude/longitude points on the earth's surface using an ellipsoidal model of the planet.
Haversine formula
• 344.222 miles
• 553.972 kilometers
• 299.121 nautical miles
The haversine formula calculates the distance between latitude/longitude points assuming a spherical earth (great-circle distance – the shortest distance between two points).
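For reference, the haversine calculation is only a few lines of code. The sketch below is a generic implementation, not the calculator used by this site; the destination coordinates are not listed on this page, so approximate values for Fayzabad Airport are filled in purely for illustration.

```python
from math import asin, cos, radians, sin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2, radius_miles=3958.8):
    """Great-circle distance between two points on a spherical Earth."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * radius_miles * asin(sqrt(a))

# Chakcharan (CCN, 34°31'35"N 65°16'15"E) to Fayzabad (FBD, approx. 37.12 N, 70.52 E)
print(haversine_miles(34.5264, 65.2708, 37.12, 70.52))  # roughly 344 miles
```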
## How long does it take to fly from Chakcharan to Faizabad?
The estimated flight time from Chakcharan Airport to Fayzabad Airport is 1 hour and 9 minutes.
## Flight carbon footprint between Chakcharan Airport (CCN) and Fayzabad Airport (FBD)
On average, flying from Chakcharan to Faizabad generates about 76 kg of CO2 per passenger, and 76 kilograms equals 167 pounds (lbs). The figures are estimates and include only the CO2 generated by burning jet fuel.
## Map of flight path and driving directions from Chakcharan to Faizabad
See the map of the shortest flight path between Chakcharan Airport (CCN) and Fayzabad Airport (FBD).
## Airport information
Origin Chakcharan Airport
City: Chakcharan
Country: Afghanistan
IATA Code: CCN
ICAO Code: OACC
Coordinates: 34°31′35″N, 65°16′15″E | 488 | 1,819 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.546875 | 3 | CC-MAIN-2024-33 | latest | en | 0.854456 |
https://trac.sagemath.org/ticket/10876?version=10 | 1,627,959,801,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046154420.77/warc/CC-MAIN-20210803030201-20210803060201-00247.warc.gz | 564,459,342 | 10,625 | Opened 10 years ago
# Create elementary matrices — at Version 10
Reported by: rbeezer
Owned by: jason, was
Priority: minor
Milestone: sage-4.7
Component: linear algebra
Authors: Rob Beezer
Reviewers: Karl-Dieter Crisman
Report Upstream: N/A
Patch adds a matrix constructor to build elementary matrices, which correspond to row operations. These matrices are very useful when teaching properties of determinants.
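For readers new to the terminology, the three kinds of elementary matrices correspond to the three row operations; they are shown below for the 3x3 case with 0-based row indices (standard linear algebra background, not part of the patch itself).

$$E_{\text{swap}} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad E_{\text{scale}} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & c & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad E_{\text{add}} = \begin{pmatrix} 1 & 0 & 0 \\ c & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

Left-multiplying a matrix by these performs, respectively, a swap of rows 0 and 1, a scaling of row 1 by c, and the addition of c times row 0 to row 1; their determinants are -1, c, and 1, which is what makes them handy when teaching determinant properties.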
### comment:1 Changed 10 years ago by rbeezer
• Authors set to Rob Beezer
• Status changed from new to needs_review
### comment:2 Changed 10 years ago by kcrisman
• Reviewers set to Karl-Dieter Crisman
• Status changed from needs_review to needs_work
This looks really nice, Rob - comprehensive, useful, well-organized and tested. I like your using the Python 3 string formatting, which I have yet to learn - didn't realize it was already supported.
Just a few questions, and then a 'needs work':
• Think we need to remind people that the rows are numbered from 0 to n-1? Up to your discretion; if one actually reads and understands the doc, it's clear, but just asking since we sometimes like to remind of things like this that are not standard math notation.
• Super-picky - `FORMAT` is hardly ever used in Sage, rather `INPUT` is used, even for `matrix?`. Any particular reasoning behind this one? I'm asking just in terms of consistency.
I fear that the notion that the ring is optional will lead to incorrect use of that without keywords. Can you think of any other way to word the first few things?
Here is an example of what it can lead to.
```sage: elementary_matrix(4,3,3,3)
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
```
And this:
```sage: elementary_matrix(4,3,scale=4)
[4 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
sage: elementary_matrix(4,scale=4)
[4 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
```
I don't think that either of these are 'allowed', but the second of these in particular is very tempting, with n=4, row 3, scale by 4.
Because of the audience of this patch, I think it is really important to make sure we catch things like this, esp. plausible use cases. It would be nice to not need the ring at all, but I understand that for consistency with other matrix invocations this needs to be the first (optional) argument.
Anyway, none of this takes away from this being a nice addition for LA teaching with Sage.
### comment:3 Changed 10 years ago by kcrisman
Oh, and the additional docs look fine :)
### comment:4 Changed 10 years ago by rbeezer
KDC,
Thanks for the testing. I was trying to figure out how to consolidate all three elementary matrices into one function - the problem is when you want to scale a row by an integer scalar, how can you distinguish that from swapping two rows indexed by integers? Maybe I was being too clever, I'll have to study your examples.
Search `sage/matrix/constructor.py` which has things like `FORMAT` and `CALL FORMAT`. I have edited many of them recently, but they were there before I got there. These constructors are tricky with the optional ring and then various items that get inferred, or options, or... I don't think it is bad to have a concise summary of what will work right up front - it certainly makes coding them easier!
A reminder about row numbering won't hurt - I agree that this is a place to be careful about that.
Rob
### comment:5 follow-up: ↓ 6 Changed 10 years ago by rbeezer
KDC,
I'm thinking there is no way to have an optional ring, like so many of the other constructors, and consolidate all three matrices into one function.
I think adding just one function to the global namespace is preferable to the optional ring. But if you have any great ideas about how to accomplish both, let me know. Otherwise I'll make a new version that requires a ring.
And which won't allow the same row for the "add a multiple of a row to a row" version.
Rob
### comment:6 in reply to: ↑ 5 Changed 10 years ago by kcrisman
KDC,
I'm thinking there is no way to have an optional ring, like so many of the other constructors, and consolidate all three matrices into one function.
Unless you make the arguments mandatory (I guess making them keywords). Which, for a pedagogical function, is actually not so bad. I think that would be preferable to having instructors of LA who are not so knowledgeable about rings, or don't want to bother students with them, being forced to use the ring.
I usually like flexibility, but keeping these all in one function seems good, and asking for 'row1' and 'scale' seems very appropriate if you're learning what the elementary matrices are in the first place. I really doubt any heavy user is going to be using this function instead of writing a small script to generate their own (possible sparse) matrices!
Also, if you were to do this, I figure that in the case of scaling a single row, one could allow the keyword 'row' instead of 'rows'. Any interest in also providing column elementary matrices? I guess one could just multiply on the right... ;)
And which won't allow the same row for the "add a multiple of a row to a row" version.
Haha, yes!
### comment:7 Changed 10 years ago by rbeezer
• Status changed from needs_work to needs_review
KDC,
OK, see if you can shoot a hole or two in this one. ;-) Ring is still optional, but rows/columns, and scale factor must be given by keywords. Column operations are implemented by building the equivalent row operation version and then transposing it.
Docstring leans towards rows, but I think there is enough info on the column variants. Two small reminders about 0-based indexing. New doctest showing dense implementation as default, but now has sparse keyword added.
Thanks for the help with this one.
Rob
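(Aside, not part of the patch: the row/column relationship Rob describes can be illustrated outside Sage with a small NumPy sketch; the helper name below is made up.)
```python
import numpy as np

def add_multiple_of_row(n, row1, row2, scale):
    """Identity with `scale` in position (row2, row1): multiplying on the
    left adds `scale` times row `row1` to row `row2` (0-based indexing)."""
    E = np.eye(n)
    E[row2, row1] = scale
    return E

A = np.arange(16.0).reshape(4, 4)
E = add_multiple_of_row(4, row1=0, row2=2, scale=5)

row_version = E @ A      # adds 5 * (row 0) to row 2 of A
col_version = A @ E.T    # the transpose, used on the right, adds 5 * (column 0) to column 2
print(np.allclose(row_version[2], A[2] + 5 * A[0]))           # True
print(np.allclose(col_version[:, 2], A[:, 2] + 5 * A[:, 0]))  # True
```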
### comment:8 follow-up: ↓ 9 Changed 10 years ago by kcrisman
• Status changed from needs_review to needs_work
Nice addition of the column stuff/doc.
This functionality would be especially valuable as an interact - you choose the elementary matrix type, the row1, row2, scale, and it creates the matrix - or, better, changes the unit circle or a pair of vectors *based* on your elementary matrix. If only we could get interact controls to depend on other interact controls... (yes, there is a ticket for this)
Holes coming up!
First, illegal/unwanted input.
• This should be caught. Though see my comments below about keywords versus arguments.
```sage: E = elementary_matrix(ZZ, 5, row1=3, col2=3, scale=12)
sage: E
[ 1 0 0 0 0]
[ 0 1 0 0 0]
[ 0 0 1 0 0]
[ 0 0 0 12 0]
[ 0 0 0 0 1]
```
• Here is a similar example from your own doctests. Either we require named arguments, or we don't; this is confusing.
```sage: elementary_matrix(QQ, 1, 0, 0)
[1]
```
• This still works, which it probably shouldn't at all:
```sage: elementary_matrix(4,3,3,3)
[1 0 0 0]
[0 1 0 0]
[0 0 1 0]
[0 0 0 1]
```
• To deal with all this in general, couldn't you do something like the following, and then look for the specific keywords row1, sparse, etc? (And raise an error if any others appear in the dictionary.)
```def elementary_matrix(arg0, arg1=None, **kwds):
```
I think this would make checking for illegal cases easier, too.
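(A rough illustration of the `**kwds` checking pattern being suggested — plain Python, not the actual Sage constructor; the accepted keyword set and the error messages are only placeholders.)
```python
def elementary_matrix(arg0, arg1=None, **kwds):
    # Only a fixed set of keywords is meaningful; anything else is a user error.
    allowed = {'row1', 'row2', 'col1', 'col2', 'scale', 'sparse'}
    unknown = set(kwds) - allowed
    if unknown:
        raise ValueError('unrecognized keywords: %s' % ', '.join(sorted(unknown)))
    if ('row1' in kwds or 'row2' in kwds) and ('col1' in kwds or 'col2' in kwds):
        raise ValueError('cannot mix row and column operations')
    # ... dispatch on which of the allowed keywords are present ...
```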
Then, examples and tests which could be useful.
• An example with a negative scale would be nice, though certainly not necessary. Similarly with a rational scale, so that people know it's ok to do so (the doctests scare one away from it):
```sage: elementary_matrix(4,row1=3,row2=2,scale=-4)
sage: elementary_matrix(QQ,4,row1=3,row2=2,scale=4/3)
```
• Other possibilities I don't use, but might be fun if they are intended behavior, which I presume they are:
```sage: elementary_matrix(SR,4,row1=3,row2=2,scale=sqrt(3))
sage: elementary_matrix(SR,4,row1=3,row2=2,scale=i)
sage: elementary_matrix(CC,4,row1=3,row2=2,scale=i)
```
• The following is something people DO use, though, all the time in Gaussian elimination say, and will wonder why it gives an error:
```sage: sage: elementary_matrix(4,row1=3,scale=4/3)
---------------------------------------------------------------------------
TypeError: scale parameter of elementary matrix must an element of
Integer Ring, not 4/3
```
You should be able to massage the default ring so that if there IS a scale keyword, its parent chooses the ring:
```sage: elementary_matrix(parent(4/3),4,row1=3,scale=4/3)
[ 1 0 0 0]
[ 0 1 0 0]
[ 0 0 1 0]
[ 0 0 0 4/3]
```
Iteration 3 will be the bomb!
### comment:9 in reply to: ↑ 8 ; follow-up: ↓ 10 Changed 10 years ago by rbeezer
• Status changed from needs_work to needs_review
First, illegal/unwanted input.
• This should be caught. Though see my comments below about keywords versus arguments.
```sage: E = elementary_matrix(ZZ, 5, row1=3, col2=3, scale=12)
sage: E
[ 1 0 0 0 0]
[ 0 1 0 0 0]
[ 0 0 1 0 0]
[ 0 0 0 12 0]
[ 0 0 0 0 1]
```
Now a doctest, raises an error.
• Here is a similar example from your own doctests. Either we require named arguments, or we don't; this is confusing.
Just forgot these, now fixed.
• To deal with all this in general, couldn't you do something like the following, and then look for the specific keywords row1, sparse, etc? (And raise an error if any others appear in the dictionary.)
```def elementary_matrix(arg0, arg1=None, **kwds):
```
Yes, much better.
• An example with a negative scale would be nice, though certainly not necessary. Similarly with a rational scale, so that people know it's ok to do so (the doctests scare one away from it):
```sage: elementary_matrix(4,row1=3,row2=2,scale=-4)
sage: elementary_matrix(QQ,4,row1=3,row2=2,scale=4/3)
```
Changed one doctest over QQ to a scale factor of 1/2. Did not add a negative.
• Other possibilities I don't use, but might be fun if they are intended behavior, which I presume they are:
```sage: elementary_matrix(SR,4,row1=3,row2=2,scale=sqrt(3))
sage: elementary_matrix(SR,4,row1=3,row2=2,scale=i)
sage: elementary_matrix(CC,4,row1=3,row2=2,scale=i)
```
Several different rings appear when testing automated ring defaults.
• The following is something people DO use, though, all the time in Gaussian elimination say, and will wonder why it gives an error:
```sage: sage: elementary_matrix(4,row1=3,scale=4/3)
---------------------------------------------------------------------------
TypeError: scale parameter of elementary matrix must an element of
Integer Ring, not 4/3
```
You should be able to massage the default ring so that if there IS a scale keyword, its parent chooses the ring:
```sage: elementary_matrix(parent(4/3),4,row1=3,scale=4/3)
[ 1 0 0 0]
[ 0 1 0 0]
[ 0 0 1 0]
[ 0 0 0 4/3]
```
This is working now, see new doctests.
Iteration 3 will be the bomb!
Yes?
### comment:10 in reply to: ↑ 9 Changed 10 years ago by kcrisman
• Description modified (diff)
• Status changed from needs_review to needs_work
```TypeError: scale must be an element of some ring, not junk
```
Nice.
• To deal with all this in general, couldn't you do something like the following, and then look for the specific keywords row1, sparse, etc? (And raise an error if any others appear in the dictionary.)
```def elementary_matrix(arg0, arg1=None, **kwds):
```
Yes, much better.
A few programming ideas to make it more tight, though I don't think these are strictly necessary. Take what you want.
• Couldn't the following just be `n<=0`, since you catch that later? Then you don't have to worry about the thing later.
```if n < 0:
raise ValueError('size of elementary matrix must be positive, not {0}'.format(n))
```
• I'd move the checks like
```if row2 is None and scale is None:
```
to right after where you turn `col1` and `col2` into `row1` and `row2`, to improve readability later.
• I think it might be possible to use simultaneous assignment for what comes after
```elif not row2 is None and scale is None:
```
in the same way that
```a,b = 2,3
```
works.
Iteration 3 will be the bomb!
Yes?
Code checks out, passes tests, weird input doesn't slow it down...
One more weird result:
```sage: elementary_matrix(4,2,row1=1,row2=3)
[1 0 0 0]
[0 0 0 1]
[0 0 1 0]
[0 1 0 0]
```
If the first argument isn't a ring, you just automatically make the ring the integer ring, the size the first argument, and ignore the second argument. Probably this should be caught. Needs work :(
Oh, and you didn't capitalize a letter:
```to determine the representation used. the default is ``False`` which
```
On the plus side, this will have the biggest error test to example ratio ever!
Note: See TracTickets for help on using tickets. | 3,369 | 12,487 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.90625 | 3 | CC-MAIN-2021-31 | latest | en | 0.936217 |
https://www.geeksforgeeks.org/count-of-sub-strings-of-length-n-possible-from-the-given-string/ | 1,580,050,847,000,000,000 | text/html | crawl-data/CC-MAIN-2020-05/segments/1579251689924.62/warc/CC-MAIN-20200126135207-20200126165207-00102.warc.gz | 878,114,015 | 27,010 | # Count of sub-strings of length n possible from the given string
Given a string str and an integer N, the task is to find the number of possible sub-strings of length N.
Examples:
Input: str = “geeksforgeeks”, n = 5
Output: 9
All possible sub-strings of length 5 are “geeks”, “eeksf”, “eksfo”,
“ksfor”, “sforg”, “forge”, “orgee”, “rgeek” and “geeks”.
Input: str = “jgec”, N = 2
Output: 3
Approach: The count of sub-strings of length n will always be len – n + 1 where len is the length of the given string. For example, if str = “geeksforgeeks” and n = 5 then the count of sub-strings having length 5 will be “geeks”, “eeksf”, “eksfo”, “ksfor”, “sforg”, “forge”, “orgee”, “rgeek” and “geeks” which is len – n + 1 = 13 – 5 + 1 = 9.
Below is the implementation of the above approach:
## C++
```
// C++ implementation of the approach
#include <bits/stdc++.h>
using namespace std;

// Function to return the count of
// possible sub-strings of length n
int countSubStr(string str, int n)
{
    int len = str.length();
    return (len - n + 1);
}

// Driver code
int main()
{
    string str = "geeksforgeeks";
    int n = 5;

    cout << countSubStr(str, n);

    return 0;
}
```
## Java
```
// Java implementation of the approach
import java.util.*;

class GFG
{

    // Function to return the count of
    // possible sub-strings of length n
    static int countSubStr(String str, int n)
    {
        int len = str.length();
        return (len - n + 1);
    }

    // Driver code
    public static void main(String args[])
    {
        String str = "geeksforgeeks";
        int n = 5;

        System.out.print(countSubStr(str, n));
    }
}

// This code is contributed by mohit kumar 29
```
## Python3
```
# Python3 implementation of the approach

# Function to return the count of
# possible sub-strings of length n
def countSubStr(string, n):

    length = len(string)
    return (length - n + 1)

# Driver code
if __name__ == "__main__":

    string = "geeksforgeeks"
    n = 5

    print(countSubStr(string, n))

# This code is contributed by Ryuga
```
## C#
```
// C# implementation of the approach
using System;

class GFG
{

    // Function to return the count of
    // possible sub-strings of length n
    static int countSubStr(string str, int n)
    {
        int len = str.Length;
        return (len - n + 1);
    }

    // Driver code
    public static void Main()
    {
        string str = "geeksforgeeks";
        int n = 5;

        Console.WriteLine(countSubStr(str, n));
    }
}

// This code is contributed by Code_Mech.
```
Output:
```9
```
Please write to us at contribute@geeksforgeeks.org to report any issue with the above content. | 1,221 | 3,624 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.828125 | 3 | CC-MAIN-2020-05 | latest | en | 0.568536 |
https://geodorable.com/2021/10/4th-grade-math-worksheets-word-problems/ | 1,656,432,392,000,000,000 | text/html | crawl-data/CC-MAIN-2022-27/segments/1656103556871.29/warc/CC-MAIN-20220628142305-20220628172305-00496.warc.gz | 310,382,688 | 13,508 | # 4Th Grade Math Worksheets Word Problems
4Th Grade Math Worksheets Word Problems. Some students understand how to solve equations but struggle to apply their. Students need to gain a strong understanding of place value in order to understand the relationship between digits and how these relationships apply to.
4th grade math word problems worksheets pdf and much more 4 th grade math worksheets with answers pdf have been created to help kids have extra math practice in a most amusing way. Worksheets are grade 4 word problems mixed operations, grade 4 word problems estimating rounding c, fourth grade math and critical thinking work, grade 4 measurement word problems, grade 4 word problems with fractions, martha ruttle, word problem practice workbook, 4th grade math. Each sheet involves solving a range of written multiplication problems.
### For Some Students, Math Seems Very Tricky, But It Doesn't Have To Be That Way.
Read, explore, and solve over 1000 math word problems based on addition, subtraction, multiplication, division, fraction, decimal, ratio and more. The following collection of free 4th grade maths word problems worksheets cover topics including addition, subtraction, multiplication,. Worksheets are grade 4 word problems mixed operations, grade 4 word problems estimating rounding c, fourth grade math and critical thinking work, grade 4 measurement word problems, grade 4 word problems with fractions, martha ruttle, word problem practice workbook, 4th grade math.
### *Click On Open Button To Open And Print To Worksheet.
These word problems worksheets will produce ten problems per worksheet. Explain to students that you can find the rate (or speed) that someone is. Addition and subtraction word problems with sums to 10 10 math worksheets
### Let's Soar In Grade 4.
4th grade math word problems worksheets pdf and much more 4th grade math worksheets with answers pdf have been created to help kids have extra math practice in a most amusing way. Students need to gain a strong understanding of place value in order to understand the relationship between digits and how these relationships apply to.
### Besides, We Are Looking Forward To Offer 4 Th Graders A Firm Foundation And Fluency To Basic Math Concepts As They Engage In Our Stimulating Free Printable.
Mixing math word problems tests the understanding mathematical concepts, as it forces students to analyze the situation rather than mechanically apply a solution. Worksheets are grade 4 word problems mixed operations, grade 4 mixed word problems a, mthk0kck name class multi step word problems c, word problems multi step easy, multi step math word problem task cards, two step problems using the four operations, 4th math unit 2, martha ruttle. These mixed operations word problems worksheets will produce addition, multiplication, subtraction and division problems with 1 or 2 digit numbers.
### Multiplication Word Problems Grade 4 Of Multiplication And Division Source:
In this worksheet, students are required to work on multiplication word problems. Fractions and decimal word problems for grade 4. There is still a strong focus on more complex arithmetic such as long division and longer multiplication problems and you will find plenty of math worksheets in this section for those topics. | 655 | 3,329 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.5625 | 3 | CC-MAIN-2022-27 | latest | en | 0.909785 |
https://www.12000.org/my_notes/kamek/mma_12_1_maple_2020/KERNELsubsection10.htm | 1,721,690,453,000,000,000 | text/html | crawl-data/CC-MAIN-2024-30/segments/1720763517927.60/warc/CC-MAIN-20240722220957-20240723010957-00569.warc.gz | 542,391,923 | 3,097 | #### 2.10 ODE No. 10
$y(x) f'(x)-f(x) f'(x)+y'(x)=0$ Mathematica : cpu = 0.0132074 (sec), leaf count = 18
$\left \{\left \{y(x)\to f(x)+c_1 e^{-f(x)}-1\right \}\right \}$ Maple : cpu = 0.011 (sec), leaf count = 15
$\left \{ y \left ( x \right ) =f \left ( x \right ) -1+{{\rm e}^{-f \left ( x \right ) }}{\it \_C1} \right \}$
Hand solution
\begin {equation} \frac {dy}{dx}+y\left ( x\right ) \frac {df}{dx}=f\left ( x\right ) \frac {df}{dx} \tag {1} \end {equation}
Integrating factor $$\mu =e^{\int \frac {df}{dx}dx}=e^{f}$$. Therefore (1) becomes$\frac {d}{dx}\left ( e^{f}y\left ( x\right ) \right ) =e^{f}f\left ( x\right ) \frac {df}{dx}$ Integrating\begin {align*} e^{f}y\left ( x\right ) & =\int e^{f}f\left ( x\right ) \frac {df}{dx}dx+C\\ y\left ( x\right ) & =e^{-f}\int e^{f}fdf+e^{-f}C \end {align*}
But $$\int e^{f}fdf$$ is the same as $$\int e^{x}xdx$$ which by integration by parts gives $$e^{x}\left ( x-1\right )$$ or in terms of $$f$$, gives $$e^{f}\left ( f-1\right )$$. Hence the above becomes\begin {align*} y\left ( x\right ) & =e^{-f}\left ( e^{f}\left ( f-1\right ) \right ) +e^{-f}C\\ & =f-1+e^{-f}C \end {align*} | 501 | 1,149 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.28125 | 4 | CC-MAIN-2024-30 | latest | en | 0.413371 |
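As a quick check (added here, not part of the original hand solution), substitute $$y\left ( x\right ) =f-1+Ce^{-f}$$ back into (1):\begin {align*} \frac {dy}{dx}+y\frac {df}{dx} & =\left ( 1-Ce^{-f}\right ) \frac {df}{dx}+\left ( f-1+Ce^{-f}\right ) \frac {df}{dx}\\ & =f\frac {df}{dx} \end {align*}
which is exactly the right-hand side of (1), as required.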
https://wiki.haskell.org/index.php?title=Euler_problems/171_to_180&diff=prev&oldid=18811 | 1,632,231,514,000,000,000 | text/html | crawl-data/CC-MAIN-2021-39/segments/1631780057225.38/warc/CC-MAIN-20210921131252-20210921161252-00712.warc.gz | 676,071,879 | 6,913 | Difference between revisions of "Euler problems/171 to 180"
Problem 171
Finding numbers for which the sum of the squares of the digits is a square.
Solution:
```problem_171 = undefined
```
Problem 172
Investigating numbers with few repeated digits.
Solution:
```problem_172 = undefined
```
Problem 173
Using up to one million tiles how many different "hollow" square laminae can be formed? Solution:
```problem_173=
    let c=div (10^6) 4
        xm=floor$sqrt $fromIntegral c
        k=[div c x|x<-[1..xm]]
    in  sum k-(div (xm*(xm+1)) 2)
```
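(A cross-check added here, not from the wiki: the closed form above counts factorizations n²−m² = 4xy with x = (n−m)/2 ≥ 1 and y = (n+m)/2 > x. A small Python brute force agrees with it for modest limits.)
```python
# Independent cross-check of the closed form used in the Haskell snippet above.
def laminae_formula(limit):
    c = limit // 4
    xm = int(c ** 0.5)
    return sum(c // x for x in range(1, xm + 1)) - xm * (xm + 1) // 2

def laminae_bruteforce(limit):
    count, n = 0, 3
    while 4 * n - 4 <= limit:              # thinnest lamina of outer size n uses 4n-4 tiles
        m = n - 2
        while m >= 1 and n * n - m * m <= limit:
            count += 1
            m -= 2
        n += 1
    return count

assert all(laminae_formula(L) == laminae_bruteforce(L) for L in (8, 100, 10**4))
```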
Problem 174
Counting the number of "hollow" square laminae that can form one, two, three, ... distinct arrangements.
Solution:
```problem_174 = undefined
```
Problem 175
Fractions involving the number of different ways a number can be expressed as a sum of powers of 2. Solution:
```sternTree x 0=[]
sternTree x y=
    m:sternTree y n
    where
    (m,n)=divMod x y
findRat x y
    |odd l=take (l-1) k++[last k-1,1]
    |otherwise=k
    where
    k=sternTree x y
    l=length k
p175 x y=
    init$foldl (++) "" [a++","|
        a<-map show $reverse $filter (/=0)$findRat x y]
problems_175=p175 123456789 987654321
test=p175 13 17
```
Problem 176
Rectangular triangles that share a cathetus. Solution:
```--k=47547
--2*k+1=95095 = 5*7*11*13*19
lst=[5,7,11,13,19]
primes=[2,3,5,7,11]
problem_176 =
    product[a^b|(a,b)<-zip primes (reverse n)]
    where
    la=div (last lst+1) 2
    m=map (\x->div x 2)$init lst
    n=m++[la]
```
Problem 177
Solution:
```problem_177 = undefined
```
Problem 178
Step Numbers Solution:
```problem_178 = undefined
```
Problem 179
Consecutive positive divisors. Solution:
```problem_179 = undefined
```
Problem 180
Solution:
```problem_180 = undefined
``` | 547 | 1,683 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.03125 | 4 | CC-MAIN-2021-39 | latest | en | 0.670656 |
https://mijcar.tripod.com/math/ds.html | 1,674,888,420,000,000,000 | text/html | crawl-data/CC-MAIN-2023-06/segments/1674764499524.28/warc/CC-MAIN-20230128054815-20230128084815-00189.warc.gz | 411,535,768 | 3,906 | Mathematics for the Intuitive Learner Drill Sheets
PROBLEMS Story problems, word problems -- whatever they're called, most people fear them! But you don't have to. Ultimately, all applied mathematics is about word problems. And we just call them problems: How much carpet do I need? How far can I drive before I should fill the gas tank? How safe is laser eye surgery? Like all learning, we start with the obvious and move to the complex. Get your style perfect in the simple problems and the hard problems will become much simpler.
MULTIPLICATION, FACTORING AND SOLVING OF VARIOUS POLYNOMIALS Learn the five basic forms of polynomial multiplication first: Distribution, Double Distribution, Multiplication of Conjugates (Sum & Difference), Squaring Binomials, and FOIL. Next use your facility in multiplying to become adept at factoring various polynomial forms. Then use your factoring ability to become expert at solving a variety of quadratic and polynomial equations.
GRAPHING Graphing is the beginning of the union of Algebra and Geometry. Although most of the graphs can be easily replicated on graphing calculator, we are going to learn how to analyze when in the world of algebraic graphing.
Trigonometry Sine Worksheet | 274 | 1,261 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.078125 | 3 | CC-MAIN-2023-06 | latest | en | 0.878919 |
https://www.physicsforums.com/threads/what-motor-payload-mass-ratio-payload-spins-the-motor.956896/ | 1,618,959,748,000,000,000 | text/html | crawl-data/CC-MAIN-2021-17/segments/1618039491784.79/warc/CC-MAIN-20210420214346-20210421004346-00601.warc.gz | 1,050,454,638 | 23,325 | • B
Given an unconstrained (suspended without contact to anything) electric motor with its own power in total has mass m1
Given the motor shaft is connected to a payload of mass m2
If m1>m2 then: m1 no-spin + m2 spin
If m2>m1 then: m1 spin + m2 no-spin
What happens at the unit ratio (m1/m2 = 1 or m1=m2)?
Aside from the unit ratio are there any other non-trivial ratios which determine which mass spins the other?
Merlin3189
Homework Helper
Gold Member
Given an unconstrained (suspended without contact to anything) electric motor with its own power in total has mass m1
Given the motor shaft is connected to a payload of mass m2
If m1>m2 then: m1 no-spin + m2 spin No
If m2>m1 then: m1 spin + m2 no-spin No
What happens at the unit ratio (m1/m2 = 1 or m1=m2)? Depends on mass distribution
Aside from the unit ratio are there any other non-trivial ratios which determine which mass spins the other? Think Newtons 3rd law perhaps?
I don't know how you can suspend this, but perhaps it's floating around in space, maybe an astronaut using his battery powered screwdriver or something.
Anyhow, your basic idea is wrong. Both objects spin. Whatever their relative size and mass.
If one is 'bigger' it rotates slower than the 'smaller' one.
Conservation of angular momentum is the key. So 'bigger' and 'smaller' refer to the moment of inertia. That depends on both mass and the way the mass is distributed
If you compare it to the non-rotational situation, think about two masses pushing linearly on each other - eg. a compressed spring between them. Which one moves and which one stays still?
Edit: For this last eg lets ignore relativity and assume we are watching from a fixed reference frame in which they start off stationary.
Last edited:
Ibix, CWatters and berkeman
Yes this is exactly a Newton's Third Law investigation.
The Problem
I want to know given the exact masses of ##m_1## and ##m_2## can I calculate the exact Newton's Third Law force operating on both masses so I can then select ##m_2## to exceed this so ##m_2## itself acts as a constraint (exceeding Newton's Third Law force) needed to hold ##m_2## so ##m_1## spins. In detail I need to exceed the Newton's Third Law force so I must calculate it exactly in terms of ##m_1## and ##m_2##.
Is the calculation below getting me there?
The Preliminaries
Think outer space lightyears away from any large masses so there is identically zero gravity
Consider both ##m_1## and ##m_2## as uniformly distributed masses shaped as cylinders
Hypothesis: the split of motor "spin" between the two is
##M \equiv m_1 + m_2##
##f_1 \equiv m_1/M## is the fraction of total motor "spin" ##m_1## receives
##f_2 \equiv m_2/M## is the fraction of total motor "spin" ##m_2## receives
So for a ratio ##\rho \equiv m2/m1 = 2## then ##f_1=\frac{1}{3}## and ##f_2=\frac{2}{3}##
Results for many values of ##\rho## would be
$$\begin{matrix}
\rho & f_1 & f_2 \\ \hline
1 & \frac{1}{2} & \frac{1}{2} \\
2 & \frac{1}{3} & \frac{2}{3} \\
3 & \frac{1}{4} & \frac{3}{4} \\
4 & \frac{1}{5} & \frac{4}{5} \\
\end{matrix}$$
And so on
So now as ##m_1## is fixed and ##m_2## increases, then ##\rho## increases, and ##m_2## receives a larger and larger fraction of the motor "spin" according to this tabulated ratio analysis (while ##m_1## receives a smaller and smaller fraction of the motor spin force).
The Concluding Questions
The Newton's Third Law question is can ##m_2## be selected big enough to exceed the Newton's Third Law force trying to spin it oppositely of the spin of ##m_1##. Therefore, is there an ##m_2## large enough that by itself it will remain at rest while only ##m_1## spins even without any constraint actively constraining ##m_2## from moving, other than its own mass (inertia)?
Seems to me if ##m_2## is "larger the maximum motor force" (the motor has its own torque curve and cannot provide any acceleration or even constant speed above its characteristic limit) then the ##f_2## fraction of that motor force determines the Newton's Third Law force trying to move ##m_2##. If I know the maximum motor force, can I not just pick ##m_2## to just slightly exceed it and thereby eliminate the Newton's Third Law force from operating on ##m_2##?
Further Formalism of Newton's Third Law
If a constraint now holds either ##m_1## or ##m_2## but not both (don't worry about how this is realized in practice) then the constraint would have to be stronger than ##f_i## times the motor force ##F## trying to spin its shaft. So
\begin{align}
F_1 = m_1 a \nonumber \\
F_2 = m_2 a \nonumber
\end{align}
where ##a## could be specified in angles per second squared or
##\omega_\phi = \frac{\phi}{t}##
##a = \frac{\omega_\phi}{t} = \frac{\phi}{t^2}##
So now the motor force on
##m_1## would be ##f_1 F_1##
##m_2## would be ##f_2 F_2##
where ##a## would be the same for both but split by ##f_i## accordingly for each.
Introducing Newton's Third Law ##F_3^i## for the ##i^{th}## mass explicitly then derives:
##F_3^1 = - f_1 F_1##
##F_3^2 = - f_2 F_2##
The constraint would then have to exceed either of these to actually hold one of the cylinders fixed enabling the other to continue to spin
Further Questions
1) Can this be reformulated in terms of Moments of Inertia?
2) Is this ratio analysis above also wrong or is it correct and can be derived from the more accurate Moment of Inertia calculation?
3) how is that Moment of Inertia calculation started?
Ibix
2020 Award
Both masses must start to spin in opposite directions, otherwise the conservation of angular momentum is violated. Full stop (edit: or period. However you call that dot at the end of a sentence in your dialect of English). This applies even if one of the masses is the Earth and the other is a tiny strip of paper (or whatever extreme you wish to carry this to).
The speeds of rotation do vary - the thing that is conserved is the sum of ##I_i\omega_i##, where ##I_i## is the moment of inertia of the ##i##th body and ##\omega_i## is its angular velocity. Bodies that were initially at rest means that the total angular momentum must be zero, and hence for just two bodies ##I_1\omega_1+I_2\omega_2=0##. Thus you can make the angular velocity of one body as small as you like by increasing its moment of inertia - but it can only be zero if the other body does not rotate either.
You can look up moments of inertia for common geometric solids of uniform density via Google easily enough.
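(A small numeric illustration of ##I_1\omega_1+I_2\omega_2=0## — not from the thread, and all of the numbers are made up; both bodies are treated as uniform cylinders spinning about the shared shaft axis.)
```python
# Conservation of angular momentum for two cylinders spun against each other.
# Moment of inertia of a uniform cylinder about its axis: I = (1/2) m r^2.
def cylinder_inertia(mass, radius):
    return 0.5 * mass * radius**2

I_motor   = cylinder_inertia(mass=1.0,   radius=0.05)   # light "motor" body
I_payload = cylinder_inertia(mass=100.0, radius=0.05)   # heavy payload

w_payload = 1.0                               # rad/s given to the payload
w_motor = -I_payload * w_payload / I_motor    # from I1*w1 + I2*w2 = 0
print(w_motor)   # -100.0 rad/s: the light body spins fast, the heavy one slowly,
                 # but neither angular velocity is ever exactly zero
```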
Last edited:
CWatters and Merlin3189
CWatters
Homework Helper
Gold Member
The Newton's Third Law question is can ##m_2## be selected big enough to exceed the Newton's Third Law force......
You are asking if a mass (m2) can be bigger than a force (or torque). They are different things. It's like asking if a colour can be smaller than a mouse.
Ibix
Merlin3189
Homework Helper
Gold Member
Wow! You have put a lot of effort into this. I hope it eventually brings you to a better understanding.
You can be quite sure you will not find any way around Newton's 3rd law, conservation of momentum and of angular momentum. This conservation of momentum principle is one of the most fundamental in Physics.
... can I calculate the exact Newton's Third Law force operating on both masses ...
Whether you can actually calculate the value of this force, depends on how you cause it, or on what measurements you can make. The force is exactly the same size and opposite direction on each body. That applies whether it is a linear push or a rotational torque.
so I can then select ##m_2## to exceed this so ##m_2## itself acts as a constraint (exceeding Newton's Third Law force) needed to hold ##m_2## so ##m_1## spins. In detail I need to exceed the Newton's Third Law force so I must calculate it exactly in terms of ##m_1## and ##m_2##.
No way! You will never break Newtons 3rd law.
Hypothesis: the split of motor "spin" between the two is
##M \equiv m_1 + m_2##
##f_1 \equiv m_1/M## is the fraction of total motor "spin" ##m_1## receives
##f_2 \equiv m_2/M## is the fraction of total motor "spin" ##m_2## receives
So for a ratio ##\rho \equiv m2/m1 = 2## then ##f_1=\frac{1}{3}## and ##f_2=\frac{2}{3}##
Good intuition, but not quite right.
Both objects acquire equal but opposite momentum (whether angular or linear.) So if "spin" means angular momentum, then they both get equal amounts irrespective of their mass.
BUT if "spin" means angular speed, then it is shared according to their moments of inertia and hence according to their masses, because the moment of inertia is proportional to mass. In that case, you are near but still no coconut, because it is the opposite way round. The smaller mass object must rotate faster and the larger mass rotate object slower, so that they both have the same size of angular momentum.
## | angular\ momentum_1 | = ω_1 I_1 = ω_1 k m_1 ##
## | angular\ momentum_2 | = ω_2 I_2 = ω_2 k m_2 ##
So ## ω_1 k m_1 = ω_2 k m_2 \ \ or \ ω_1 m_1 = ω_2 m_2##
Then ## \frac {ω_1} {ω_1 + ω_2} = \frac {ω_1} {ω_1 + ω_1 \frac{m_1} {m_2}} = \frac {1} {1 + \frac{m_1} {m_2} }= \frac {m_2}{m_2 + m_1}##
BTW - Anyone - How do I stop Latex making the writing so small when the expressions get complicated?
So now as ##m_1## is fixed and ##m_2## increases, then ##\rho## increases, and ##m_2## receives a larger and larger fraction of the motor "spin" according to this tabulated ratio analysis (while ##m_1## receives a smaller and smaller fraction of the motor spin force).
Well, the opposite way round. Notice, one gets a smaller and smaller fraction, but never zero.
I hope that's a bit helpful. I'm giving up on the rest of your detailed work for now. You can find more info on the web about angular momentum, but please don't try to fight conservation of momentum and Newton's laws. That's just a waste of effort.
CWatters
CWatters
Homework Helper
Gold Member
+1
Notice, one gets a smaller and smaller fraction, but never zero.
Which, for example, means that if you spin a child's roundabout you really do cause the earth to rotate in the opposite direction a very small amount. Of course that effect is reversed when you stop the roundabout or allow friction to stop it.
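(An order-of-magnitude version of that playground example, with guessed numbers for the roundabout; only Earth's moment of inertia, about ##8\times10^{37}## kg m², is a standard figure.)
```python
# How much does spinning a roundabout "spin" the Earth the other way?
I_roundabout = 500.0      # kg m^2, a guess for a small loaded roundabout
w_roundabout = 2.0        # rad/s, roughly one turn every ~3 s
I_earth = 8.0e37          # kg m^2, standard figure for Earth about its axis

dw_earth = I_roundabout * w_roundabout / I_earth
print(dw_earth)           # ~1e-35 rad/s: utterly unmeasurable, but not zero
```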
BTW - Anyone - How do I stop Latex making the writing so small when the expressions get complicated?
Just use 1/2 instead of \frac{1}{2} in your denominators. Not the best solution but perhaps an acceptable temporary workaround....
Ibix
2020 Award
Then ## \frac {ω_1} {ω_1 + ω_2} = \frac {ω_1} {ω_1 + ω_1 \frac{m_1} {m_2}} = \frac {1} {1 + \frac{m_1} {m_2} }= \frac {m_2}{m_2 + m_1}##
BTW - Anyone - How do I stop Latex making the writing so small when the expressions get complicated?
The font size commands listed at https://texblog.org/2012/08/29/changing-the-font-size-in-latex/ seem to work. Using \Huge:
## \Huge{\frac {1} {1 + \frac{m_1} {m_2} }}##
Ibix
2020 Award
Also using the paragraph maths style (delimited by pairs of $ signs) instead of inline (delimited by pairs of # signs) helps. Using #: ##\frac {ω_1} {ω_1 + ω_2} = \frac {ω_1} {ω_1 + ω_1 \frac{m_1} {m_2}} = \frac {1} {1 + \frac{m_1} {m_2} }= \frac {m_2}{m_2 + m_1}## Using $:
$$\frac {ω_1} {ω_1 + ω_2} = \frac {ω_1} {ω_1 + ω_1 \frac{m_1} {m_2}} = \frac {1} {1 + \frac{m_1} {m_2} }= \frac {m_2}{m_2 + m_1}$$
This is probably the right thing to do. The inline mode is designed to be used for things like "##m_1## is the mass", and tries to control the vertical extent to avoid mucking up line spacing in paragraphs. The paragraph mode just lays out the maths.
Last edited:
Merlin3189
Homework Helper
Gold Member | 3,240 | 11,532 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.53125 | 4 | CC-MAIN-2021-17 | longest | en | 0.920865 |
https://www.enotes.com/homework-help/peaks-triangle-3-5-b-1-3-c-2-2-determine-length-383916 | 1,498,257,082,000,000,000 | text/html | crawl-data/CC-MAIN-2017-26/segments/1498128320201.43/warc/CC-MAIN-20170623220935-20170624000935-00232.warc.gz | 873,252,084 | 12,464 | # peaks of the triangle A (3, -5), B (1, -3), C (2, -2). determine the length of its external angle bisectors of the top of the B
sciencesolve | Teacher | (Level 3) Educator Emeritus
Posted on
You need to remember that the bisector of an exterior angle of a triangle divides the opposite side externally into segments whose ratio equals the ratio of the other two sides of the triangle.
Supposing that BD is the bisector of the external angle at B, then `(DA)/(DC) = (AB)/(BC).`
Since the problem provides the coordinates of vertices A,B,C, you may evaluate the lengths AB and BC such that:
`AB = sqrt((x_B - x_A)^2 + (y_B - y_A)^2)`
`AB = sqrt((1 - 3)^2 + (-3 + 5)^2)`
`AB = sqrt(4 + 4) => AB = 2sqrt2`
`BC = sqrt((x_C - x_B)^2 + (y_C - y_B)^2)`
`BC = sqrt((2 - 1)^2 + (-2 + 3)^2)`
`BC = sqrt2`
`(DA)/(DC) = (2sqrt2)/(sqrt2) => (DA)/(DC) = 2`
Hence, evaluating the ratio of the lengths of segments created by the external angle bisector yields (DA)/(DC) = 2. | 318 | 995 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.0625 | 4 | CC-MAIN-2017-26 | longest | en | 0.78883 |
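(A quick numerical check of the two side lengths and their ratio — an addition, not part of the posted answer.)
```python
# Verify AB, BC and the ratio AB/BC from the given coordinates.
from math import dist

A, B, C = (3, -5), (1, -3), (2, -2)
AB, BC = dist(A, B), dist(B, C)
print(AB, BC, AB / BC)   # 2.828..., 1.414..., 2.0
```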
https://mathhelpforum.com/threads/integration-problem-cylindrical-water-tank.221958/ | 1,576,233,719,000,000,000 | text/html | crawl-data/CC-MAIN-2019-51/segments/1575540553486.23/warc/CC-MAIN-20191213094833-20191213122833-00246.warc.gz | 452,533,141 | 15,749 | # Integration problem, cylindrical water tank
#### kyliealana
A cylindrical water tank with radius 2m is installed in such a way that the axis is horizontal and the circular cross sections are vertical. Water is put into the tank so that the depth of the water is 3m. What percentage of the total capacity of the tank is being used? Round off to the nearest percentage point.
I have tried computing the integral of V= (2pix(4-x^2)^.5)dx where I got -(1/3)cos^3x ... I am unsure as to what else I have to do. I am completely lost on this problem.
#### chiro
MHF Helper
Hey kyliealana.
Do you have information on the height of the tank? If what you say is correct, the tank is aligned on the ground where the x and z axes are parallel to the circular cross sections which means that in this case, volume is calculated as pi*r^2*h for some height h.
Can you help us with regards to the height?
#### kyliealana
This is what I have so far and I feel like I am really off track.
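(Not a reply from the thread, just a sketch of one standard way to get the requested percentage without setting up the integral: the filled cross-section is a circular segment, and the tank's length cancels out of the ratio. The segment-area formula below is the usual one for liquid depth d in a circle of radius r.)
```python
# Fraction of the tank's capacity used = segment area / full circle area.
# Filled cross-sectional area: r^2*acos((r-d)/r) - (r-d)*sqrt(2*r*d - d^2).
from math import acos, sqrt, pi

r, d = 2.0, 3.0
filled = r**2 * acos((r - d) / r) - (r - d) * sqrt(2 * r * d - d**2)
fraction = filled / (pi * r**2)
print(round(100 * fraction))   # about 80 (percent), independent of the tank's length
```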
Forum Staff | 245 | 994 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.984375 | 3 | CC-MAIN-2019-51 | latest | en | 0.955247 |
https://school.gradeup.co/ex-1.1-q5-use-euclids-division-lemma-to-show-that-the-cube-i-1njgky | 1,571,240,807,000,000,000 | text/html | crawl-data/CC-MAIN-2019-43/segments/1570986668994.39/warc/CC-MAIN-20191016135759-20191016163259-00286.warc.gz | 675,043,567 | 26,082 | # Use Euclid’s division lemma to show that the cube of any positive integer is of the form 9m, 9m + 1 or 9m + 8.
Let a be any positive integer. Then it is of the form 3q, 3q + 1, or 3q + 2.
We know that according to Euclid's division lemma:
a = bq + r, with b = 3 here and 0 ≤ r < 3. So, we have the following cases:
Case I When a = 3q
In this case, we have
a^3 = (3q)^3 = 27q^3 = 9(3q^3) = 9m, where m = 3q^3
Case II When a = 3q + 1
In this case, we have
a^3 = (3q + 1)^3
= 27q^3 + 27q^2 + 9q + 1
= 9q(3q^2 + 3q + 1) + 1
a^3 = 9m + 1, where m = q(3q^2 + 3q + 1)
Case III When a = 3q + 2
In this case, we have
a^3 = (3q + 2)^3
= 27q^3 + 54q^2 + 36q + 8
= 9q(3q^2 + 6q + 4) + 8
a^3 = 9m + 8, where m = q(3q^2 + 6q + 4)
Hence, a^3 is of the form 9m, 9m + 1, or 9m + 8.
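(A quick numeric sanity check, added here: every cube leaves remainder 0, 1 or 8 on division by 9.)
```python
# The set of all residues of n^3 modulo 9 for the first thousand integers.
print({n**3 % 9 for n in range(1, 1000)})   # {0, 1, 8}
```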
We strive to provide quality solutions. Please rate us to serve you better. | 401 | 859 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 4.28125 | 4 | CC-MAIN-2019-43 | latest | en | 0.748296 |
https://stats.stackexchange.com/questions/269900/ica-independence-of-coefficients-and-maximizing-independence | 1,653,364,228,000,000,000 | text/html | crawl-data/CC-MAIN-2022-21/segments/1652662562410.53/warc/CC-MAIN-20220524014636-20220524044636-00735.warc.gz | 597,691,814 | 66,247 | # ICA - independence of coefficients and maximizing independence
Hopefully this isn't too silly a question but I'm wondering how in independent component analysis when we've got independent coefficients then we identify parts of a face such as eyes, mouth, nose, etc.
What exactly does the independence of the coefficients have to do with this? Isn't it more to do with the independence of the proposed feature vectors?
Also, is the primary difference in PCA and ICA that PCA aims to maximize the variance of it's projections which is given by the directions provided by the eigendecomposition of its covariance matrix and ICA aims to maximize the independence of its feature vectors? But since the the covariance matrix is a symmetrical matrix, aren't all the eigenvectors independent anyway?
How exactly does ICA maximize independence in a way that makes them more independent than the orthogonal basis vectors produced by PCA?
Thanks
how in independent component analysis when we've got independent coefficients then we identify parts of a face such as eyes, mouth, nose, etc.
My interpretation of your question is: Suppose we run ICA on a set of aligned face images, treating pixels as dimensions and images as data points. Why do weight vectors (i.e. basis images) produced by ICA tend to assign high weight to groups of pixels that correspond to facial features--is that what you're asking? The tautological answer would be something like "because that's how the distribution factorizes best, subject to the assumptions of ICA". Loosely, you could say this means that pixels within each facial feature tend to vary together across faces, and to vary independently from those in other facial features. For this to be true, the face images would probably have to be aligned such that the same set of pixels consistently covers the same facial features across images.
Also, is the primary difference in PCA and ICA that PCA aims to maximize the variance of it's projections which is given by the directions provided by the eigendecomposition of its covariance matrix and ICA aims to maximize the independence of its feature vectors?
Your statement about PCA is correct. I'm not sure I understand the precise intended meaning of your statement about ICA, so I'll phrase it another way: ICA tries to maximize the independence of the projections of the data onto each weight/basis vector. Note that this is distinct from independence of the basis vectors themselves. ICA typically assumes that the data are i.i.d., that there's no noise, and that the number of components is equal to the number of input dimensions. PCA doesn't share the last assumption.
But since the the covariance matrix is a symmetrical matrix, aren't all the eigenvectors independent anyway?
If the data has $d$ dimensions and the covariance matrix has full rank, there will be $d$ linearly independent eigenvectors. A set of vectors is linearly independent if it's impossible to write one of the vectors as a linear combination of the others. This is true of the eigenvectors because they're orthogonal. Note that linear independence is a distinct concept from statistical independence, and linear independence of the basis vectors has nothing to do with ICA. Unlike PCA, ICA doesn't require the basis vectors to be orthogonal. Furthermore, it tries to maximize statistical independence of the projections, not the basis vectors.
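(To make the contrast concrete, here is a small scikit-learn sketch — my addition, assuming sklearn and NumPy are available. Two independent non-Gaussian sources are mixed linearly; PCA merely decorrelates the mixtures, while FastICA attempts to recover statistically independent projections.)
```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.RandomState(0)
S = np.c_[rng.laplace(size=5000), rng.uniform(-1, 1, size=5000)]  # independent sources
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])                                        # mixing matrix
X = S @ A.T                                                       # observed mixtures

Y_pca = PCA(n_components=2).fit_transform(X)
Y_ica = FastICA(n_components=2, random_state=0).fit_transform(X)

# Both outputs are uncorrelated, but only the ICA projections tend to line up with
# the original independent sources (up to permutation, sign and scale).
print(np.corrcoef(Y_pca.T)[0, 1], np.corrcoef(Y_ica.T)[0, 1])
print(np.abs(np.corrcoef(S.T, Y_ica.T)[:2, 2:]).round(2))
```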
• Oh man, this was really helpful. I was completely missing the part about statistical independence vs linear independence as it wasn't specifically stated in the notes. Just a follow up: that statistical independence is therefore determined by the correlation coefficients between two points being zero is that right? So in ICA, are we looking for a set of basis vectors that produce approximations where the approximations for point x across basis vector v1 and across basis vector v2 are statistically independent. Mar 26, 2017 at 17:42
• If there were two dimensions/components, then ICA would try to find basis vectors v1 and v2 s.t. the projections of the data onto v1 are maximally independent from the projections onto v2. But, this isn't equivalent to minimizing the correlation coefficient, because zero correlation doesn't imply independence. Ideally, we'd like to maximize the joint entropy. But, that may not be feasible to estimate, so different forms of ICA optimize different surrogate loss functions instead. Mar 26, 2017 at 18:31
• I definitely recommend reading some ICA papers. For example: Hvarinenen and Oja (2000). Independent component analysis: algorithms and applications. Mar 26, 2017 at 18:32
• Okay, but maximally independence implies that cov(v1,v2) = 0 right? So we want a basis of vectors where this is true for all v1,v2,... etc. Mar 26, 2017 at 18:43 | 991 | 4,799 | {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 2.90625 | 3 | CC-MAIN-2022-21 | longest | en | 0.958423 |
https://de.mathworks.com/matlabcentral/cody/problems/2700-simulate-one-complete-step-in-the-biham-middleton-levine-traffic-model/solutions/1508274 | 1,586,407,393,000,000,000 | text/html | crawl-data/CC-MAIN-2020-16/segments/1585371829677.89/warc/CC-MAIN-20200409024535-20200409055035-00505.warc.gz | 414,175,468 | 15,976 | Cody
# Problem 2700. Simulate one complete step in the Biham–Middleton–Levine traffic model
Solution 1508274
Submitted on 29 Apr 2018 by William
### Test Suite
Test Status Code Input and Output
1 Pass
a_in = ... [0 0 0 2 1 1 0 0 0 0 2 0 0 0 0 1]; a_out_correct = ... [0 0 0 0 0 1 1 2 0 0 0 0 1 0 2 0]; assert(isequal(traffic_step(a_in),a_out_correct))
2 Pass
a_in = ... [0 0 2 2 0 0]; a_out_correct = ... [2 0 0 0 0 2]; assert(isequal(traffic_step(a_in),a_out_correct))
3 Pass
a_in = ... [1 0 2 2 0 0]; a_out_correct = ... [2 1 0 0 0 2]; assert(isequal(traffic_step(a_in),a_out_correct))
4 Pass
a_in = ... [0 0 2 1 1 1 2 0 0]; a_out_correct = ... [2 0 2 1 1 1 0 0 0]; assert(isequal(traffic_step(a_in),a_out_correct))
5 Pass
a_in = ... [0 2 2 2 0 0 1 1 0 2 0 0 0 0 0 0 2 0 1 1 0 1 1 2 0 0 1 2 0 0 0 0 0 2 0 1]; a_out_correct = ... [0 2 2 2 0 0 0 1 1 2 0 0 0 0 0 2 2 0 0 1 1 1 1 0 0 0 1 0 0 2 1 0 0 2 0 0]; assert(isequal(traffic_step(a_in),a_out_correct))
6 Pass
a_in = ... [0 1 1 1 0 0 0 0]; a_out_correct = ... [1 0 1 1 0 0 0 0]; assert(isequal(traffic_step(a_in),a_out_correct))
7 Pass
a_in = ... [0 2 2]; a_out_correct = ... [2 0 2]; assert(isequal(traffic_step(a_in),a_out_correct)) | 618 | 1,320 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.109375 | 3 | CC-MAIN-2020-16 | latest | en | 0.401267 |
https://www.mathworks.com/matlabcentral/profile/authors/6958530 | 1,627,473,217,000,000,000 | text/html | crawl-data/CC-MAIN-2021-31/segments/1627046153709.26/warc/CC-MAIN-20210728092200-20210728122200-00607.warc.gz | 917,600,689 | 20,787 | Community Profile
# Marius Mueller
Last seen: 2 days ago Active since 2015
#### Content Feed
Can't read excel file using readtable and detectImportOptions after compiling the App. It works perfectly fine in the test environment
I have the exact same issue. Everything was peachy in R2020b but once i switched to R2021a this error occurs. I could not fi...
7 days ago | 0
Solved
Project Euler: Problem 2, Sum of even Fibonacci
Each new term in the Fibonacci sequence is generated by adding the previous two terms. By starting with 1 and 2, the first 10 te...
10 months ago
Solved
Project Euler: Problem 1, Multiples of 3 and 5
If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23...
10 months ago
Solved
Who Has the Most Change?
You have a matrix for which each row is a person and the columns represent the number of quarters, nickels, dimes, and pennies t...
1 year ago
Solved
Pizza!
Given a circular pizza with radius _z_ and thickness _a_, return the pizza's volume. [ _z_ is first input argument.] Non-scor...
1 year ago
Solved
Read a column of numbers and interpolate missing data
Given an input cell array of strings s, pick out the second column and turn it into a row vector of data. Missing data will be i...
1 year ago
Solved
Interpolator
You have a two vectors, a and b. They are monotonic and the same length. Given a value, va, where va is between a(1) and a(end...
1 year ago
Solved
Create times-tables
At one time or another, we all had to memorize boring times tables. 5 times 5 is 25. 5 times 6 is 30. 12 times 12 is way more th...
1 year ago
Solved
Sum all integers from 1 to 2^n
Given the number x, y must be the summation of all integers from 1 to 2^x. For instance if x=2 then y must be 1+2+3+4=10.
1 year ago
Solved
Magic is simple (for beginners)
Determine for a magic square of order n, the magic sum m. For example m=15 for a magic square of order 3.
1 year ago
Solved
Make a random, non-repeating vector.
This is a basic MATLAB operation. It is for instructional purposes. --- If you want to get a random permutation of integer...
1 year ago
Solved
Roll the Dice!
*Description* Return two random integers between 1 and 6, inclusive, to simulate rolling 2 dice. *Example* [x1,x2] =...
1 year ago
Solved
Number of 1s in a binary string
Find the number of 1s in the given binary string. Example. If the input string is '1100101', the output is 4. If the input stri...
1 year ago
Solved
Return the first and last character of a string
Return the first and last character of a string, concatenated together. If there is only one character in the string, the functi...
1 year ago
Solved
Getting the indices from a vector
This is a basic MATLAB operation. It is for instructional purposes. --- You may already know how to <http://www.mathworks....
1 year ago
Solved
Determine whether a vector is monotonically increasing
Return true if the elements of the input vector increase monotonically (i.e. each element is larger than the previous). Return f...
1 year ago
Solved
Check if number exists in vector
Return 1 if number _a_ exists in vector _b_ otherwise return 0. a = 3; b = [1,2,4]; Returns 0. a = 3; b = [1,...
1 year ago
Solved
Swap the first and last columns
Flip the outermost columns of matrix A, so that the first column becomes the last and the last column becomes the first. All oth...
1 year ago
Solved
Swap the input arguments
Write a two-input, two-output function that swaps its two input arguments. For example: [q,r] = swap(5,10) returns q = ...
1 year ago
Solved
Column Removal
Remove the nth column from input matrix A and return the resulting matrix in output B. So if A = [1 2 3; 4 5 6]; ...
1 year ago
Solved
Reverse the vector
Reverse the vector elements. Example: Input x = [1,2,3,4,5,6,7,8,9] Output y = [9,8,7,6,5,4,3,2,1]
1 year ago
Solved
Length of the hypotenuse
Given short sides of lengths a and b, calculate the length c of the hypotenuse of the right-angled triangle. <<http://upload....
1 year ago
Solved
Triangle Numbers
Triangle numbers are the sums of successive integers. So 6 is a triangle number because 6 = 1 + 2 + 3 which can be displa...
1 year ago
Solved
Generate a vector like 1,2,2,3,3,3,4,4,4,4
Generate a vector like 1,2,2,3,3,3,4,4,4,4 So if n = 3, then return [1 2 2 3 3 3] And if n = 5, then return [1 2 2...
1 year ago
Solved
Finding Perfect Squares
Given a vector of numbers, return true if one of the numbers is a square of one of the other numbers. Otherwise return false. E...
1 year ago
Solved
Return area of square
Side of square=input=a Area=output=b
1 year ago
Solved
Maximum value in a matrix
Find the maximum value in the given matrix. For example, if A = [1 2 3; 4 7 8; 0 9 1]; then the answer is 9.
1 year ago
Solved
Select every other element of a vector
Write a function which returns every other element of the vector passed in. That is, it returns the all odd-numbered elements, s...
1 year ago
Solved
Determine if input is odd
Given the input n, return true if n is odd or false if n is even.
1 year ago
Solved | 1,458 | 5,180 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.078125 | 3 | CC-MAIN-2021-31 | latest | en | 0.828335 |
http://www.weegy.com/?ConversationId=36219EA4 | 1,529,454,322,000,000,000 | text/html | crawl-data/CC-MAIN-2018-26/segments/1529267863259.12/warc/CC-MAIN-20180619232009-20180620012009-00476.warc.gz | 532,275,532 | 8,074 | You have new items in your feed. Click to view.
Q: Solve, if possible, using the substitution method 2x + y = -10 3x – 5y = 11 What is the x-coordinate?
A: 2x + y = -10 3x – 5y = 11; y = -2x - 10; 3x – 5(-2x - 10) = 11; 3x + 10x + 50 = 11; 13x + 50 = 11; 13x = 11 - 50; 13x = -39; x = -39/13; x = -3; 2(-3) + y = -10; -6 + y = -10; y = -10 + 6; y = -4; The x-coordinate is -3.
Question
Updated 3/21/2015 10:14:03 AM
Rating
3
2x + y = -10
3x – 5y = 11;
y = -2x - 10;
3x – 5(-2x - 10) = 11;
3x + 10x + 50 = 11;
13x + 50 = 11;
13x = 11 - 50;
13x = -39;
x = -39/13;
x = -3;
2(-3) + y = -10;
-6 + y = -10;
y = -10 + 6;
y = -4;
The x-coordinate is -3.
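(The same system checked symbolically with SymPy — an addition, assuming SymPy is installed.)
```python
# Solve 2x + y = -10 and 3x - 5y = 11 symbolically.
from sympy import symbols, solve

x, y = symbols('x y')
print(solve([2*x + y + 10, 3*x - 5*y - 11], [x, y]))   # {x: -3, y: -4}
```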
Home | Contact | Blog | About | Terms | Privacy | © Purple Inc. | 728 | 1,800 | {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0} | 3.515625 | 4 | CC-MAIN-2018-26 | latest | en | 0.79631 |
http://aboutscience.net/projectile-motion-examples/ | 1,518,901,797,000,000,000 | text/html | crawl-data/CC-MAIN-2018-09/segments/1518891807825.38/warc/CC-MAIN-20180217204928-20180217224928-00699.warc.gz | 6,664,893 | 9,094 | PROJECTILE MOTION EXAMPLES
Projectile Motion Examples (1 of 2): Projectile Motion at 30°, 45° and 60° Launch Angles
In the first projectile motion example, a steel ball is going to be launched with a projectile launcher at various launch angles including 30, 45 and 60 degrees. What are the flight distance predictions for each of these launch angles?
Given: Before calculating the predicted values, a calibration measurement was made with the projectile launcher. A steel ball was launched at 90° (vertically up), and the maximum height the ball reached was measured as h_max = 3.07 m. The uncertainty of the height measurement is ±0.15 m, which is approximately a 5% error.
Solution: Using the measurement result from the 90° launch, V₀² can be calculated as follows.
$$\text{For } \alpha = 90^{\circ}: \quad h_{max} = \frac{V_0^2}{2g}$$ (Eq-1)
$$h_{max} = 3.07 \pm 0.15 \ \mathrm{m}$$ (Measurement Result)
If the measurement result is substituted into Eq-1, we get
$$V_0^2 = 60.2 \pm 3 \ \mathrm{m^2/s^2}$$.
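For readers who want to reproduce this number, a minimal Python sketch of the same step is shown below; it assumes g = 9.81 m/s² and simply carries the ~5% relative error of h_max over to V₀².

```python
# Back out V0^2 from the vertical (90 degree) launch measurement.
# Assumptions: g = 9.81 m/s^2; the relative error of h_max carries over to V0^2.
g = 9.81          # m/s^2
h_max = 3.07      # m, measured maximum height
dh = 0.15         # m, measurement uncertainty (about 5%)

v0_sq = 2 * g * h_max        # from h_max = V0^2 / (2g)
rel_err = dh / h_max         # ~0.049
print(f"V0^2 = {v0_sq:.1f} +/- {v0_sq * rel_err:.1f} m^2/s^2")
# -> V0^2 = 60.2 +/- 2.9 m^2/s^2 (quoted as +/- 3 in the text)
```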
The flight distance for different launch angles can be predicted using the following formula.
$$\Delta x = \frac{2 V_0^2 \cos\alpha \sin\alpha}{g} = \frac{V_0^2 \sin 2\alpha}{g}$$ (Eq-2)
Since the flight distance is proportional to V₀², it has exactly the same relative uncertainty as V₀², and consequently as h_max (about 5%).
The projectile launcher can be set to a launch angle with an accuracy of ±1°. In other words, the launch angle α has a ±1° uncertainty. Since the flight distance is proportional to sin 2α, sin 2(α ± 1°) has to be calculated for each launch-angle case.
Launch Angle [α] | sin 2(α+1°) | sin 2α | sin 2(α-1°) | Max. Error (%)
30° | sin(62°) = 0.8829 | sin(60°) = 0.8660 | sin(58°) = 0.8480 | 2
45° | sin(92°) = 0.9994 | sin(90°) = 1 | sin(88°) = 0.9994 | 0.06
60° | sin(122°) = 0.8480 | sin(120°) = 0.8660 | sin(118°) = 0.8829 | 2
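The table entries can be regenerated with a few lines of Python (a sketch only; the ±1° angle uncertainty and the three launch angles are taken from the text above):

```python
# Regenerate the angle-uncertainty table: sin 2(alpha +/- 1 deg) and the maximum error.
from math import sin, radians

for alpha in (30, 45, 60):
    s_nom = sin(radians(2 * alpha))
    s_plus = sin(radians(2 * (alpha + 1)))
    s_minus = sin(radians(2 * (alpha - 1)))
    max_err = max(abs(s_plus - s_nom), abs(s_minus - s_nom)) / s_nom * 100
    print(f"{alpha} deg: sin2(a+1)={s_plus:.4f}  sin2a={s_nom:.4f}  "
          f"sin2(a-1)={s_minus:.4f}  max error ~ {max_err:.2f}%")
```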
For the 45° launch angle, the angle-adjustment error is 0.06%, which is negligible compared with the 5% velocity error. For the 30° and 60° launch angles, the angle error is approximately 2%, so it is taken into account in the calculations (giving roughly 7% in total).
The range predictions for the different launch angles are as follows (the short sketch after the list reproduces them):
• Range prediction for 30°: Δx = (60.2 × 0.8660)/9.81 ± 7% = 5.31 ± 0.37 m
• Range prediction for 45°: Δx = (60.2 × 1)/9.81 ± 5% = 6.14 ± 0.31 m
• Range prediction for 60°: Δx = (60.2 × 0.8660)/9.81 ± 7% = 5.31 ± 0.37 m
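The same range predictions, including the combined percentage errors, can be reproduced with the sketch below (assumed inputs: V₀² = 60.2 m²/s² with a 5% relative error, g = 9.81 m/s², and the angle errors from the table above; the two error contributions are added linearly, as in the text).

```python
# Range predictions dx = V0^2 * sin(2*alpha) / g with a simple linear error sum.
from math import sin, radians

g = 9.81                           # m/s^2 (assumed)
v0_sq, v0_rel = 60.2, 0.05         # V0^2 and its ~5% relative error
angle_rel = {30: 0.02, 45: 0.0006, 60: 0.02}   # from the sin 2(alpha +/- 1 deg) table

for alpha, a_rel in angle_rel.items():
    dx = v0_sq * sin(radians(2 * alpha)) / g
    rel = v0_rel + a_rel           # crude linear addition, as in the text
    print(f"{alpha} deg: dx = {dx:.2f} +/- {dx * rel:.2f} m  (~{rel * 100:.0f}%)")
# -> 30 deg: 5.31 +/- 0.37 m, 45 deg: 6.14 +/- 0.31 m, 60 deg: 5.31 +/- 0.37 m
```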
The projectile motions for the 30°, 45° and 60° launch angles are compared below; the initial velocity is the same in all cases. As seen in the animation on the original page, the projectile launched at 60° reaches the highest peak and the one launched at 30° the lowest before falling.
The steel ball launched at the 30° angle reaches the ground first because its vertical velocity component is the lowest (V₀ sin 30°). The steel ball launched at the 60° angle reaches the ground last, since its vertical velocity component is the highest (V₀ sin 60°). Because, as shown above, the 30° and 60° launches travel the same distance, the 60° launch must intuitively have a much longer flight time (checked numerically in the sketch below).
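That flight-time claim can be checked with t = 2·V₀·sin α / g; the sketch below takes V₀ = √60.2 m/s from the measurement above, which is an assumption carried over from this example.

```python
# Compare flight times t = 2 * V0 * sin(alpha) / g for the three launch angles.
from math import sin, radians, sqrt

g = 9.81            # m/s^2 (assumed)
v0 = sqrt(60.2)     # m/s, from the V0^2 value measured above

for alpha in (30, 45, 60):
    t = 2 * v0 * sin(radians(alpha)) / g
    print(f"{alpha} deg: flight time ~ {t:.2f} s")
# -> about 0.79 s (30 deg), 1.12 s (45 deg), 1.37 s (60 deg)
```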
Projectile Motion Examples (2 of 2): Projectile Motion and Free Fall
A man aims a tennis ball at an apple held by a second man who stands on a hill a distance d away, as shown below. At the instant the tennis ball is thrown, the second man releases the apple, hoping to make the first man miss. Show that the second man made the wrong move. The effect of air resistance is ignored.
Solution: The 1st trajectory shown in the figure is the one the ball would follow without gravity, and the 2nd trajectory is the one with gravity. Assume that at a certain moment t₁ the tennis ball would be at point A on the 1st trajectory. With gravity, at the same moment t₁ the ball must be at point B, because x(t₁) is the same for both trajectories: the horizontal velocities are identical, so the position in the x direction does not depend on whether gravity acts or not.
Let's now check the position equation in the y direction.
$$y_t = (V_0 \sin\alpha)\, t - \frac{1}{2} g t^2$$
If there were no gravity, the second term in the above equation would not exist. Therefore the distance between points A and B is 0.5gt₁².
Now let's check the movement of the apple. The apple was released at time t = 0, so in t₁ seconds it will have fallen exactly the distance 0.5gt₁². This means the apple and the tennis ball are at exactly the same location after t₁ seconds, and the apple is hit. If the second man had not released the apple, it would not have been hit.
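A small numerical illustration of this argument (not from the original article; the launch speed, aim angle and t₁ below are arbitrary example values):

```python
# Numerical illustration of the ball-and-apple argument (arbitrary example values).
from math import sin, cos, radians

g = 9.81
v0, alpha = 20.0, radians(30)    # assumed launch speed and aim angle
t1 = 1.2                         # s, moment the ball reaches the apple's horizontal position
                                 # (this implicitly fixes d = v0 * cos(alpha) * t1)

# Ball: with gravity it sits 0.5*g*t1^2 below the straight-line aim point A.
y_aim_line = v0 * sin(alpha) * t1          # point A, on the no-gravity line of sight
y_ball = y_aim_line - 0.5 * g * t1**2      # point B, on the real trajectory

# Apple: released from the line of sight at t = 0, it falls the same 0.5*g*t1^2.
y_apple = y_aim_line - 0.5 * g * t1**2

print(f"ball height at t1:  {y_ball:.3f} m")
print(f"apple height at t1: {y_apple:.3f} m")
print("ball and apple meet:", abs(y_ball - y_apple) < 1e-12)
```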