701
What does 1x1 convolution mean in a neural network?
The main reason I didn't understand 1x1 convolutions is that I didn't understand how any convolutions really worked: the key point is how a convolution over multiple input channels/filters is computed. To understand this, I found this answer useful: https://datascience.stackexchange.com/questions/9175/how-do-subsequent-convolution-layers-work (in particular, Type 2.2 there is the correct description of a convolution). Another helpful answer: https://ai.stackexchange.com/questions/5769/in-a-cnn-does-each-new-filter-have-different-weights-for-each-input-channel-or It explains that there is a separate kernel for each in/out channel combination; after computing each of these, the results are summed over the input-channel axis, leaving one value per output channel. Here is a video that helped me understand how a 1x1 convolution works: https://www.coursera.org/lecture/convolutional-neural-networks/networks-in-networks-and-1x1-convolutions-ZTb8x The main things I got out of it: The input to a 1x1 convolution is usually the output of previous convolutions, of spatial size $m \times n$. If there were $f_1$ filters in the last convolutional layer, you get an $(m, n, f_1)$-shaped tensor. A 1x1 convolution is really a vector of size $f_1$ that is applied at every spatial position of the image, producing one $m \times n$ output map. If you have $f_2$ such 1x1 convolutions, the combined output has size $(m, n, f_2)$. So a 1x1 convolution, assuming $f_2 < f_1$, can be seen as re-representing $f_1$ filters via $f_2$ filters; it lets the network learn how to reduce the channel dimension most efficiently.
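To make the shape bookkeeping concrete, here is a minimal PyTorch sketch (my own, not part of the original answer); the values of m, n, f1, f2 are arbitrary, and the check at the end illustrates the claim that each 1x1 filter is just a length-$f_1$ vector applied at every pixel:

import torch
import torch.nn as nn

m, n = 8, 8        # spatial size of the feature maps
f1, f2 = 64, 16    # number of input and output channels, with f2 < f1

x = torch.randn(1, f1, m, n)   # PyTorch layout is (batch, channels, height, width)

conv1x1 = nn.Conv2d(in_channels=f1, out_channels=f2, kernel_size=1, bias=False)
y = conv1x1(x)
print(y.shape)                 # torch.Size([1, 16, 8, 8]) -> (m, n, f2) in the text's notation

# Each of the f2 filters is a length-f1 vector applied at every pixel, so the layer
# is equivalent to a per-pixel matrix multiply over the channel axis:
w = conv1x1.weight.view(f2, f1)
y_manual = torch.einsum('oc,bchw->bohw', w, x)
print(torch.allclose(y, y_manual, atol=1e-5))   # True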
702
What does 1x1 convolution mean in a neural network?
I will try to explain it more intuitively and briefly. A 1x1 conv (a.k.a. Network-in-Network): say your input is ($n_H$, $n_W$, $n_{c_{prev}}$). You can think of a (1 x 1 x $n_{c_{prev}}$) filter as a single neuron (a fully connected network, which is why it is called Network-in-Network): it takes the $n_{c_{prev}}$ numbers at one position of the input, multiplies them by its (1 x 1 x $n_{c_{prev}}$) weights, sums them, and applies a ReLU; sliding it over all positions gives an ($n_H$, $n_W$) output, and if you have multiple such filters ($n_C$ of them) the output is ($n_H$, $n_W$, $n_C$). So you can use a pooling layer to reduce the spatial dimensions ($n_H$, $n_W$) and a 1x1 conv to reduce $n_{c_{prev}}$ (the number of channels), which saves a lot of computation. The takeaway: you can use a 1x1 convolutional layer to reduce $n_C$ but not $n_H$, $n_W$; you can use a pooling layer to reduce $n_H$, $n_W$ but not $n_C$. In other words, what you are doing with a 1x1 CONV filter is this: you take weights of size 1 x 1 x num_input_channels_of_featureMap and convolve them (elementwise multiply, then sum) over an image/feature map of size W x H x num_input_channels_of_featureMap, and what you get is an output of size W x H. You can then use "#filters" of these 1 x 1 x num_input_channels_of_featureMap kernels and get a volume of W x H x #filters as the final output. More precisely, at each position you multiply the 1x1 weights (one per input channel, e.g. 32 of them) with the corresponding channel slice of the input feature map, apply a ReLU, and get a single output number. A 1x1 CONV helps shrink the number of channels and save computation in some networks (e.g. Inception). And of course, if you want to keep the number of channels the same as the input feature map, that's fine too; the one thing the 1x1 CONV still does is apply a ReLU non-linearity, which lets the network learn more complex functions.
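A short PyTorch sketch of that takeaway (the sizes are illustrative, loosely following an Inception-style 192-channel bottleneck; this snippet is mine, not part of the original answer):

import torch
import torch.nn as nn

x = torch.randn(1, 192, 28, 28)   # (batch, n_C_prev, n_H, n_W)

# Pooling shrinks the spatial dimensions but keeps the channel count.
pool = nn.MaxPool2d(kernel_size=2, stride=2)
print(pool(x).shape)              # torch.Size([1, 192, 14, 14])

# A 1x1 convolution shrinks the channel count but keeps the spatial dimensions.
conv1x1 = nn.Conv2d(in_channels=192, out_channels=32, kernel_size=1)
print(torch.relu(conv1x1(x)).shape)   # torch.Size([1, 32, 28, 28])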
703
What does 1x1 convolution mean in a neural network?
In machine learning terminology, data often has more dimensions than is typically described; e.g. 2-d image data is normally actually 3-d, with dimensions: $w$, the width of the image; $h$, the height of the image; and $k$, the channels of the image (of size $3$ for RGB; in a greyscale image $|k|=1$). When people describe $1\times1$ convolutional layers, this usually implicitly means $1\times1\times k$, where $k$ is the number of channels - i.e. the filter reduces dimensionality across channels by combining the channel values at each pixel into a single number via a learned weighted combination (a plain average over the three RGB channels is the special case where all weights are equal).
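A small PyTorch illustration of that special case (my own sketch, not from the answer): a 1x1 convolution over an RGB image with its weights fixed by hand to 1/3 reproduces a per-pixel channel average, while in practice those weights would be learned.

import torch
import torch.nn as nn

rgb = torch.rand(1, 3, 4, 4)      # a tiny RGB image: (batch, k=3, height, width)

# A 1x1 convolution with one output channel mixes the k channels at each pixel.
to_gray = nn.Conv2d(in_channels=3, out_channels=1, kernel_size=1, bias=False)

# Normally these weights are learned; fixing them all to 1/3 gives a plain channel average.
with torch.no_grad():
    to_gray.weight.fill_(1.0 / 3.0)

print(torch.allclose(to_gray(rgb), rgb.mean(dim=1, keepdim=True), atol=1e-6))   # True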
704
What does 1x1 convolution mean in a neural network?
One more idea about dimensionality reduction in the context of 1x1 filters: Take for example a 4096x8x8 fc7 layer from an FCN. What happens if the next layer (call it fc8) is 2048x8x8 with filter size 1? fc7 is very deep inside the network, so each of its 4096 features is semantically rich, and each neuron has a large receptive field (with, say, a 250x250x3 input image). In other words, if a neuron is very active, we know that somewhere in its receptive field there is a corresponding feature present. Take for example the upper-left neuron in fc8 with a 1x1 filter. It connects to all 4096 neurons/features in the same receptive field only (the upper-left corner of the image), each of which is activated by a single feature. Some of them (let's say 500) are very active. If the resulting neuron is also very active, it means it probably learnt to identify one or more features in this receptive field. After you've done this 2048 times for the upper-left neurons in fc8, quite a few of them (e.g. 250) will be very active, meaning they 'collected' features from the same receptive field through fc7, very likely more than one each. If you keep reducing the dimensionality, a decreasing number of neurons will be learning an increasing number of features from the same receptive field. And since the spatial parameters 8x8 remain the same, we do not change the 'view' of each neuron, and thus do not decrease the spatial coarseness. You may want to have a look at 'Fully Convolutional Networks' by Long, Shelhamer and Darrell.
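For a sense of scale, here is my own back-of-the-envelope count for this hypothetical 4096-to-2048 reduction (biases ignored); it is not taken from the answer or the FCN paper:

# Rough parameter/operation counts for the fc7 (4096x8x8) -> fc8 (2048x8x8) step above.
in_channels, out_channels = 4096, 2048
h, w = 8, 8

weights_1x1 = in_channels * out_channels          # one weight per in/out channel pair
mults_1x1 = weights_1x1 * h * w                   # applied at every spatial position
print(weights_1x1, mults_1x1)                     # 8388608 536870912  (~8.4M weights, ~0.5G multiply-adds)

# The same channel reduction with a 3x3 kernel would need 9x as many weights and multiply-adds:
print(in_channels * out_channels * 3 * 3)         # 75497472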
705
What does 1x1 convolution mean in a neural network?
The mathematical operation of convolution computes, for every possible shift position, the integral (or, in the discrete case, the sum) of the product of two functions. In a 2-dimensional (gray-level) image, a convolution is performed by a sliding-window operation, where the window (the 2-d convolution kernel) is a $v \times v$ matrix. Image-processing applications of neural networks - including convolutional neural networks - have been reviewed in: [M. Egmont-Petersen, D. de Ridder, H. Handels. Image processing with neural networks - a review, Pattern Recognition, Vol. 35, No. 10, pp. 2279-2301, 2002].
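For reference, the discrete 2-d convolution described here can be written in a standard textbook form (this formula is added for illustration and is not taken from the cited review); $I$ is the image and $K$ the $v \times v$ kernel:

$$(I * K)(i, j) \;=\; \sum_{a}\sum_{b} I(i - a,\, j - b)\, K(a, b),$$

where the sums run over the $v \times v$ support of $K$. Most deep-learning libraries actually implement the closely related cross-correlation, $\sum_a \sum_b I(i + a,\, j + b)\, K(a, b)$, but the term "convolution" is commonly used for both.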
706
What does 1x1 convolution mean in a neural network?
3x3 vs 1x1 convolution:

import torch
import torch.nn as nn

image = torch.randn(1, 3, 1280, 1920)

# 3x3 convolution + padding, which keeps the spatial dimensions constant
model = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=3, padding=1)
output = model(image)
print(output.shape)  # torch.Size([1, 2, 1280, 1920])

num_elements = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(num_elements)  # 56 == weights + bias
# weights = in_channels * out_channels * kernel_size * kernel_size
#         => 3 * 2 * 3 * 3 = 54
# bias    = out_channels => 2

# 1x1 convolution
model = nn.Conv2d(in_channels=3, out_channels=2, kernel_size=1, padding=0)
output = model(image)
print(output.shape)  # torch.Size([1, 2, 1280, 1920])

num_elements = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(num_elements)  # 8 == weights + bias
# weights = in_channels * out_channels * kernel_size * kernel_size
#         => 3 * 2 * 1 * 1 = 6
# bias    = out_channels => 2

# 8 << 56 :)
707
What is a data scientist?
There are a few humorous definitions which have not yet been given: Data Scientist: someone who does statistics on a Mac. I like this one, as it plays nicely on the more-hype-than-substance angle. Data Scientist: a statistician who lives in San Francisco. Similarly, this riffs on the West Coast flavour of all this. Personally, I find the discussion (in general, and here) somewhat boring and repetitive. When I was thinking about what I wanted to be, maybe a quarter century or longer ago, I aimed for quantitative analyst. That is still what I do (and love!), and it mostly overlaps with and covers what was given here in various answers. (Note: there is an older source for the second quote, but I can't find it right now.)
708
What is a data scientist?
People define Data Science differently, but I think the common part is: practical knowledge of how to deal with data, and practical programming skills. Contrary to its name, it's rarely "science". That is, in data science the emphasis is on practical results (as in engineering), not on proofs, mathematical purity or the rigor characteristic of academic science. Things need to work, and it makes little difference whether that is based on an academic paper, use of an existing library, your own code or an impromptu hack. A statistician is not necessarily a programmer (they may use pen & paper and dedicated software). Also, some job postings in data science have nothing to do with statistics; e.g. data engineering such as processing big data, even if the most advanced maths there may be calculating an average (personally I wouldn't call this activity "data science", though). Moreover, "data science" is hyped, so tangentially related jobs use this title - to lure applicants or to flatter the egos of current workers. I like the taxonomy from Michael Hochster's answer on Quora: Type A Data Scientist: The A is for Analysis. This type is primarily concerned with making sense of data or working with it in a fairly static way. The Type A Data Scientist is very similar to a statistician (and may be one) but knows all the practical details of working with data that aren't taught in the statistics curriculum: data cleaning, methods for dealing with very large data sets, visualization, deep knowledge of a particular domain, writing well about data, and so on. Type B Data Scientist: The B is for Building. Type B Data Scientists share some statistical background with Type A, but they are also very strong coders and may be trained software engineers. The Type B Data Scientist is mainly interested in using data "in production." They build models which interact with users, often serving recommendations (products, people you may know, ads, movies, search results). In that sense, a Type A Data Scientist is a statistician who can program. But even for the quantitative part, there may be people with a background more in computer science (e.g. machine learning) than in regular statistics, or people focusing e.g. on data visualization. See also The Data Science Venn Diagram (where hacking ~ programming) and alternative Venn diagrams (this and that), or even a tweet which, while humorous, shows a balanced list of typical skills and activities of a data scientist. See also this post: Data scientist - statistician, programmer, consultant and visualizer?.
709
What is a data scientist?
There are a number of surveys of the data science field. I like this one because it attempts to analyze the profiles of people who actually hold data science jobs. Instead of using anecdotal evidence or the author's biases, it uses data science techniques to analyze the data scientist's DNA. It's quite revealing to look at the skills listed by data scientists; notice that the top 20 skills contain a lot of IT skills. In today's world, a data scientist is expected to be a jack of all trades; a self-learner who has a solid quantitative foundation, an aptitude for programming, infinite intellectual curiosity, and great communication skills.
UPDATE: I am a statistician, but am I a data scientist? I work on scientific problems, so I must be a scientist! If you did a PhD, you're most likely a scientist already, especially if you have published papers and active research. You don't need to be a scientist to be a data scientist, though. There are some roles at some firms, like Walmart (see below), where a PhD is required, but usually data scientists have BS and MS degrees, as you can see from the examples below. As you can tell from the chart above, you'll most likely be required to have good programming and data handling skills. Also, data science is often associated with some level, often "deep", of expertise in machine learning. You certainly may call yourself a data scientist if you have a PhD in stats. However, computer science PhDs from top schools may be more competitive than stats graduates, because they may have quite strong applied statistics knowledge supplemented by strong programming skills - a sought-after combination for employers. To counter them you have to acquire strong programming skills, and on balance you'll then be very competitive. What's interesting is that usually all stats PhDs have some programming experience, but in data science the requirement is often much higher than that: employers want advanced skills, knowledge of algorithms and data structures, distributed computing, etc. To me the advantage of having a PhD in stats lies in the problem captured by the rest of the phrase "a jack of all trades" that is usually dropped: "a master of none". It's good to have people who know a little bit of everything, but I always look for folks who also know something deeply; whether it's stats or computer science is not so important. What matters is that the person is capable of getting to the bottom of things - a handy quality when you need it. The survey also lists the top employers of data scientists. Microsoft is at the top, apparently, which was surprising to me. If you want to get a better idea of what they're looking for, searching LinkedIn with "data science" in the Jobs section is helpful. Below are two excerpts from Microsoft and Walmart job postings on LinkedIn to make the point.
Microsoft, Data Scientist:
5+ years of Software Development experience in building Data Processing Systems/Services
Bachelors or higher qualifications in Computer Science, EE, or Math with specialization in Statistics, Data Mining or Machine Learning
Excellent Programming Skills (C#, Java, Python, Etc.) in manipulating large scale data
Working knowledge of Hadoop or other Big Data processing technology
Knowledge of analytics products (e.g. R, SQL AS, SAS, Mahout, etc.) is a plus
Notice how knowing stat packages is just a plus, but excellent programming skills in Java are a requirement.
Walmart, Data Scientist:
PhD in computer science or similar field, or MS with at least 2-5 years of related experience
Good functional coding skills in C++ or Java (Java is highly preferred); must be capable of spending up to 10% of the daily work day writing production code in C++/Java/Hadoop/Hive
Expert-level knowledge of one of the scripting languages, such as Python or Perl
Experience working with large data sets and distributed computing tools a plus (Map/Reduce, Hadoop, Hive, Spark etc.)
Here, a PhD is preferred, but only a computer science major is named. Distributed computing with Hadoop or Spark is probably an unusual skill for a statistician, but some theoretical physicists and applied mathematicians use similar tools.
UPDATE 2: "It’s Already Time to Kill the “Data Scientist” Title", says Thomas Davenport, who co-wrote the 2012 Harvard Business Review article "Data Scientist: The Sexiest Job of the 21st Century" that more or less started the data scientist craze: What does it mean today to say you are—or want to be, or want to hire—a “data scientist?” Not much, unfortunately.
710
What is a data scientist?
Somewhere I've read this (EDIT: it was Josh Wills explaining his tweet): Data scientist is a person who is better at statistics than any programmer and better at programming than any statistician. This quote can be briefly explained by the data science process. A first look at the scheme may prompt "well, where is the programming part?", but if you have tons of data, you have to be able to process it.
711
What is a data scientist?
I've written several answers and each time they got long and I eventually decided I was getting up on a soapbox. But I think that this conversation has not fully explored two important factors:
1. The Science in Data Science. A scientific approach is one in which you try to destroy your own models, theories, features, technique choices, etc, and only when you cannot do so do you accept that your results might be useful. It's a mindset, and many of the best Data Scientists I've met have hard-science backgrounds (chemistry, biology, engineering).
2. Data Science is a broad field. A good Data Science outcome usually involves a small team of Data Scientists, each with their own speciality. For example, one team member is more rigorous and statistical, another is a better programmer with an engineering background, and another is a strong consultant with business savvy. All three are quick to learn the subject matter, and all three are curious and want to find the truth -- however painful -- and to do what's in the best interest of the (internal or external) customer, even if the customer doesn't understand.
The fad over the last few years -- now fading, I think -- is to recruit Computer Scientists who have mastered cluster technologies (Hadoop ecosystem, etc) and say that's the ideal Data Scientist. I think that's what the OP has encountered, and I'd advise the OP to push their strengths in rigor, correctness, and scientific thinking.
712
What is a data scientist?
I think Bitwise covers most of my answer, but I am gonna add my 2c. No, I am sorry, but a statistician is not a data scientist, at least based on how most companies define the role today. Note that the definition has changed over time, and one challenge for practitioners is to make sure they remain relevant. I will share some common reasons why we reject candidates for "Data Scientist" roles:
Expectations about the scope of the job. Typically the DS needs to be able to work independently. That means there's nobody else to create the dataset for him in order to solve the problem he was assigned. So he needs to be able to find the data sources, query them, model a solution and then, often, also create a prototype that solves the problem. Many times that is simply the creation of a dashboard, an alarm, or a live report that constantly updates.
Communication. It seems that many statisticians have a hard time "simplifying" and "selling" their ideas to business people. Can you show just one graph and tell a story from the data in a way that everybody in the room can get it? Note that this is after you make sure you can defend every bit of the analysis if challenged.
Coding skills. We don't need production-level coding skills, since we have developers for that; however, we need her to be able to write a prototype and deploy it as a web service on an AWS EC2 instance. So coding skills don't just mean the ability to write R scripts. I could probably add fluency in Linux somewhere here too. So the bar is simply higher than what most statisticians tend to believe.
SQL and databases. No, he can't pick that up on the job, since we actually need him to adapt the basic SQL he already knows and learn how to query the multiple different DB systems we use across the org, including Redshift, Hive and Presto - each of which uses its own flavour of SQL. Plus, learning SQL on the job means the candidate will create problems for every other analyst until they learn how to write efficient queries.
Machine Learning. Typically they have used logistic regression or a few other techniques to solve a problem based on a given dataset (Kaggle style). However, even though the interview starts from algorithms and methods, it soon focuses on topics such as feature generation (remember, you need to create the dataset; there's nobody else to create it for you), maintainability, scalability and performance, as well as the related trade-offs. For some context you can check out a relevant paper from Google published at NIPS 2015.
Text Analysis. Not a must-have, but some experience in Natural Language Processing is good to have. After all, a big portion of the data is in textual format. As discussed, there's nobody else to make the transformations and clean up the text for you in order to make it consumable by ML or other statistical approaches. Also, note that today even CS grads have already done some project that ticks this box.
Of course for a junior role you can't have all of the above. But how many of these skills can you afford to be missing and pick up on the job? Finally, to clarify, the most common reason for rejecting non-statisticians is exactly the lack of even basic knowledge of stats. And somewhere in there is the difference between a data engineer and a data scientist. Nevertheless, data engineers tend to apply for these roles, since many times they believe that "statistics" is just the average, the variance and the normal distribution.
So, we may add a few relevant but scary statistical buzzwords in job descriptions in order to clarify what we mean by "statistics" and prevent the confusion.
713
What is a data scientist?
Allow me to ignore the hype and buzzwords. I think "Data Scientist" (or whatever you want to call it) is a real thing and that it is distinct from a statistician. There are many types of positions that effectively are data scientists but are not given that name - one example is people working in genomics. The way I see it, a data scientist is someone who has the skills and expertise to design and execute research on large amounts of complex data (e.g. high-dimensional data in which the underlying mechanisms are unknown and complex). This means:
Programming: being able to implement analyses and pipelines, often requiring some level of parallelization and interfacing with databases and high-performance computing resources.
Computer science (algorithms): designing/choosing efficient algorithms such that the chosen analysis is feasible and the error rate is controlled. Sometimes this may also require knowledge of numerical analysis, optimization, etc.
Computer science / statistics (usually with an emphasis on machine learning): designing and implementing a framework in order to ask questions of the data or find "patterns" in it. This includes not only knowledge of different tests/tools/algorithms but also how to design proper holdout sets, cross-validation and so on.
Modelling: often we would like to produce some model that gives a simpler representation of the data, such that we can both make useful predictions and gain insight into the mechanisms underlying the data. Probabilistic models are very popular for this.
Domain-specific expertise: one key aspect of successfully working with complex data is incorporating domain-specific insight. So I would say that it is critical that the data scientist either have expertise in the domain, be able to quickly learn new fields, or be able to interface well with experts in the field who can provide useful insights about how to approach the data.
714
What is a data scientist?
All great answers; however, in my job hunting experience I have noticed that the term "data scientist" has been conflated with "junior data analyst" in the minds of the recruiters I was in contact with. Thus many nice folks with no statistics experience, apart from that introductory one-term course they did a couple of years ago, now call themselves data scientists. As someone with a computer science background and years of experience as a data analyst, who did a PhD in Statistics later in my career thinking it would help me stand out from the crowd, I find myself in an unexpectedly large crowd of "data scientists". I think I might revert to "statistician"!
715
What is a data scientist?
I'm a junior employee, but my job title is "data scientist." I think Bitwise's answer is an apt description of what I was hired to do, but I'd like to add one more point based on my day-to-day experience at work: $$\text{Data Science} \neq \text{Statistics},$$ $$\text{Statistics} \subset \text{Data Science}.$$ Science is a process of inquiry. When data is the means by which that inquiry is made, data science is happening. It doesn't mean that everyone who experiments or does research with data is necessarily a data scientist, in the same way that not everyone who experiments or does research with wiring is necessarily an electrical engineer. But it does mean that one can acquire enough training to become a professional "data inquirer," in the same way that one can acquire enough training to become a professional electrician. That training is more or less comprised of the points in Bitwise's answer, of which statistics is a component but not the entirety. Piotr's answer is also a nice summary of all the things I need to do (or wish I knew how to do) in a given week. My job so far has mostly been helping to undo the damage done by former employees who belonged to the "Danger Zone" component of the Venn diagram.
716
What is a data scientist?
I have also recently become interested in data science as a career, and when I compare what I have learnt about the data science job to the numerous statistics courses that I took (and enjoyed!), I have started to think of data scientists as computer scientists who turned their attention to data. In particular, I noted the following main differences. Note, though, that these differences are not clear-cut; the following just reflects my subjective impressions, and I do not claim generality. Just my impressions! In statistics, you care a lot about distributions, probabilities, and inferential procedures (how to do hypothesis tests, what the underlying distributions are, etc.). From what I understand, data science is more often than not about prediction, and worries about inferential statements are to some extent absorbed by procedures from computer science, such as cross-validation. In statistics courses, I often just created my own data, or used ready-made data available in a rather clean format: a nice rectangular layout, an Excel spreadsheet, or something similar that fits comfortably into RAM. Data cleaning is certainly involved, but I never had to deal with "extracting" data from the web, let alone from databases that had to be set up in order to hold an amount of data that no longer fits into RAM. My impression is that this computational aspect is much more dominant in data science. Maybe this reflects my ignorance about what statisticians do in typical statistical jobs, but before data science I never thought about building models into a larger product. There was an analysis to be done, a statistical problem to be solved, some parameter to be estimated, and that was it. In data science, it seems that often (though not always) predictive models are built into something larger. For instance, you click somewhere, and within milliseconds a predictive algorithm has decided what is shown as a result. So, while in statistics I always wondered "what parameter can we estimate, and how do we do it elegantly", in data science the focus seems to be more on "what can we predict that is potentially useful in a data product". Again, the above does not try to give a general definition. I am just pointing out the major differences that I have perceived myself. I am not in data science yet, but I hope to transition in the next year. In this sense, take my two cents here with a grain of salt.
717
What is a data scientist?
I always like to cut to the essence of the matter: statistics - science + some computer stuff + hype = data science
718
What is a data scientist?
I say a data scientist is a role in which one creates human-readable results for the business, using methods that make the results statistically sound (significant). If any part of this definition is not met, we are talking about either a developer, a true scientist/statistician, or a data engineer.
719
What is a data scientist?
Data science is a multidisciplinary blend of data inference, algorithm development, and technology used to solve analytically complex problems. Because of the dearth of data scientists, a career in data science can open up numerous opportunities. However, organizations are often looking for professionals certified by SAS, the Data Science Council of America (DASCA), Hortonworks, etc. Hope this is good information!
720
How to determine which distribution fits my data best?
First, here are some quick comments:

The $p$-values of a Kolmogorov-Smirnov test (KS test) with estimated parameters can be quite wrong, because the p-value does not take the uncertainty of the estimation into account. So unfortunately, you can't just fit a distribution and then use the estimated parameters in a Kolmogorov-Smirnov test to test your sample. There is a normality test called the Lilliefors test, which is a modified version of the KS test that allows for estimated parameters.

Your sample will never follow a specific distribution exactly. So even if your $p$-values from the KS test were valid and $>0.05$, it would just mean that you can't rule out that your data follow this specific distribution. Another formulation would be that your sample is compatible with a certain distribution. But the answer to the question "Does my data follow the distribution xy exactly?" is always no.

The goal here cannot be to determine with certainty what distribution your sample follows. The goal is what @whuber (in the comments) calls parsimonious approximate descriptions of the data. Having a specific parametric distribution can be useful as a model of the data (just as the model "the earth is a sphere" can be useful).

But let's do some exploration. I will use the excellent fitdistrplus package, which offers some nice functions for distribution fitting. We will use the function descdist to gain some ideas about possible candidate distributions.

library(fitdistrplus)
library(logspline)

x <- c(37.50,46.79,48.30,46.04,43.40,39.25,38.49,49.51,40.38,36.98,40.00,
       38.49,37.74,47.92,44.53,44.91,44.91,40.00,41.51,47.92,36.98,43.40,
       42.26,41.89,38.87,43.02,39.25,40.38,42.64,36.98,44.15,44.91,43.40,
       49.81,38.87,40.00,52.45,53.13,47.92,52.45,44.91,29.54,27.13,35.60,
       45.34,43.37,54.15,42.77,42.88,44.26,27.14,39.31,24.80,16.62,30.30,
       36.39,28.60,28.53,35.84,31.10,34.55,52.65,48.81,43.42,52.49,38.00,
       38.65,34.54,37.70,38.11,43.05,29.95,32.48,24.63,35.33,41.34)

Now let's use descdist:

descdist(x, discrete = FALSE)

The kurtosis and squared skewness of your sample are plotted as a blue point named "Observation". It seems that possible distributions include the Weibull, the lognormal, and possibly the gamma distribution. Let's fit a Weibull distribution and a normal distribution:

fit.weibull <- fitdist(x, "weibull")
fit.norm <- fitdist(x, "norm")

Now inspect the fit for the normal:

plot(fit.norm)

And for the Weibull fit:

plot(fit.weibull)

Both look good, but judged by the Q-Q plot, the Weibull maybe looks a bit better, especially in the tails. Correspondingly, the AIC of the Weibull fit is lower compared with the normal fit:

fit.weibull$aic
[1] 519.8537

fit.norm$aic
[1] 523.3079

Kolmogorov-Smirnov test simulation

I will use @Aksakal's procedure explained here to simulate the KS statistic under the null.
n.sims <- 5e4 stats <- replicate(n.sims, { r <- rweibull(n = length(x) , shape= fit.weibull$estimate["shape"] , scale = fit.weibull$estimate["scale"] ) estfit.weibull <- fitdist(r, "weibull") # added to account for the estimated parameters as.numeric(ks.test(r , "pweibull" , shape= estfit.weibull$estimate["shape"] , scale = estfit.weibull$estimate["scale"])$statistic ) }) The ECDF of the simulated KS-statistics looks as follows: plot(ecdf(stats), las = 1, main = "KS-test statistic simulation (CDF)", col = "darkorange", lwd = 1.7) grid() Finally, our $p$-value using the simulated null distribution of the KS-statistics is: fit <- logspline(stats) 1 - plogspline(ks.test(x , "pweibull" , shape= fit.weibull$estimate["shape"] , scale = fit.weibull$estimate["scale"])$statistic , fit ) [1] 0.4889511 This confirms our graphical conclusion that the sample is compatible with a Weibull distribution. As explained here, we can use bootstrapping to add pointwise confidence intervals to the estimated Weibull PDF or CDF: xs <- seq(10, 65, len=500) true.weibull <- rweibull(1e6, shape= fit.weibull$estimate["shape"] , scale = fit.weibull$estimate["scale"]) boot.pdf <- sapply(1:1000, function(i) { xi <- sample(x, size=length(x), replace=TRUE) MLE.est <- suppressWarnings(fitdist(xi, distr="weibull")) dweibull(xs, shape=MLE.est$estimate["shape"], scale = MLE.est$estimate["scale"]) } ) boot.cdf <- sapply(1:1000, function(i) { xi <- sample(x, size=length(x), replace=TRUE) MLE.est <- suppressWarnings(fitdist(xi, distr="weibull")) pweibull(xs, shape= MLE.est$estimate["shape"], scale = MLE.est$estimate["scale"]) } ) #----------------------------------------------------------------------------- # Plot PDF #----------------------------------------------------------------------------- par(bg="white", las=1, cex=1.2) plot(xs, boot.pdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.pdf), xlab="x", ylab="Probability density") for(i in 2:ncol(boot.pdf)) lines(xs, boot.pdf[, i], col=rgb(.6, .6, .6, .1)) # Add pointwise confidence bands quants <- apply(boot.pdf, 1, quantile, c(0.025, 0.5, 0.975)) min.point <- apply(boot.pdf, 1, min, na.rm=TRUE) max.point <- apply(boot.pdf, 1, max, na.rm=TRUE) lines(xs, quants[1, ], col="red", lwd=1.5, lty=2) lines(xs, quants[3, ], col="red", lwd=1.5, lty=2) lines(xs, quants[2, ], col="darkred", lwd=2) #----------------------------------------------------------------------------- # Plot CDF #----------------------------------------------------------------------------- par(bg="white", las=1, cex=1.2) plot(xs, boot.cdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.cdf), xlab="x", ylab="F(x)") for(i in 2:ncol(boot.cdf)) lines(xs, boot.cdf[, i], col=rgb(.6, .6, .6, .1)) # Add pointwise confidence bands quants <- apply(boot.cdf, 1, quantile, c(0.025, 0.5, 0.975)) min.point <- apply(boot.cdf, 1, min, na.rm=TRUE) max.point <- apply(boot.cdf, 1, max, na.rm=TRUE) lines(xs, quants[1, ], col="red", lwd=1.5, lty=2) lines(xs, quants[3, ], col="red", lwd=1.5, lty=2) lines(xs, quants[2, ], col="darkred", lwd=2) #lines(xs, min.point, col="purple") #lines(xs, max.point, col="purple") Automatic distribution fitting with GAMLSS The gamlss package for R offers the ability to try many different distributions and select the "best" according to the GAIC (the generalized Akaike information criterion). The main function is fitDist. An important option in this function is the type of the distributions that are tried. 
For example, setting type = "realline" will try all implemented distributions defined on the whole real line, whereas type = "realplus" will only try distributions defined on the positive real line. Another important option is the parameter $k$, which is the penalty for the GAIC. In the example below, I set the parameter $k = 2$, which means that the "best" distribution is selected according to the classic AIC. You can set $k$ to anything you like, such as $\log(n)$ for the BIC.

library(gamlss)
library(gamlss.dist)
library(gamlss.add)

x <- c(37.50,46.79,48.30,46.04,43.40,39.25,38.49,49.51,40.38,36.98,40.00,
       38.49,37.74,47.92,44.53,44.91,44.91,40.00,41.51,47.92,36.98,43.40,
       42.26,41.89,38.87,43.02,39.25,40.38,42.64,36.98,44.15,44.91,43.40,
       49.81,38.87,40.00,52.45,53.13,47.92,52.45,44.91,29.54,27.13,35.60,
       45.34,43.37,54.15,42.77,42.88,44.26,27.14,39.31,24.80,16.62,30.30,
       36.39,28.60,28.53,35.84,31.10,34.55,52.65,48.81,43.42,52.49,38.00,
       38.65,34.54,37.70,38.11,43.05,29.95,32.48,24.63,35.33,41.34)

fit <- fitDist(x, k = 2, type = "realplus", trace = FALSE, try.gamlss = TRUE)

summary(fit)
*******************************************************************
Family: c("WEI2", "Weibull type 2")
Call: gamlssML(formula = y, family = DIST[i], data = sys.parent())
Fitting method: "nlminb"

Coefficient(s):
             Estimate  Std. Error  t value   Pr(>|t|)
eta.mu    -24.3468041   2.2141197 -10.9962 < 2.22e-16 ***
eta.sigma   1.8661380   0.0892799  20.9021 < 2.22e-16 ***

According to the AIC, the Weibull distribution (more specifically WEI2, a special parametrization of it) fits the data best. The exact parameterization of the distribution WEI2 is detailed in this document on page 279. Let's inspect the fit by looking at the residuals in a worm plot (basically a de-trended Q-Q plot): we expect the residuals to be close to the middle horizontal line and 95% of them to lie between the upper and lower dotted curves, which act as 95% pointwise confidence intervals. In this case, the worm plot looks fine to me, indicating that the Weibull distribution is an adequate fit.
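The worm plot call itself is not shown above. A minimal sketch, assuming the same data vector x and the WEI2 family chosen by fitDist(), is to refit the selected family with gamlss() and pass the result to wp(); if your gamlss version lets wp() work directly on the object returned by fitDist(), the refit is unnecessary.

library(gamlss)

m <- gamlss(x ~ 1, family = WEI2)   # refit the family selected by fitDist()
wp(m)                               # worm plot of the normalized quantile residuals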
721
How to determine which distribution fits my data best?
Plots are mostly a good way to get a better idea of what your data look like. In your case I would recommend plotting the empirical cumulative distribution function (ECDF) against the theoretical CDFs with the parameters you got from fitdistr(). I did that once for my data and also included the confidence intervals. Here is the picture I got using ggplot2. The black line is the empirical cumulative distribution function and the colored lines are CDFs from different distributions using parameters I obtained with the maximum likelihood method. One can easily see that the exponential and normal distributions are not a good fit to the data, because the lines have a different form than the ECDF and are quite far away from it. Unfortunately, the other distributions are quite close. But I would say that the logNormal line is the closest to the black line. Using a measure of distance (for example MSE) one could validate the assumption. If you only have two competing distributions (for example picking the ones that seem to fit best in the plot) you could use a likelihood-ratio test to test which distribution fits better.
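A minimal sketch of such an ECDF-versus-fitted-CDF plot in base R (the sample x and the two candidate families here are placeholders for illustration):

library(MASS)   # fitdistr()

fit.ln <- fitdistr(x, "lognormal")
fit.n  <- fitdistr(x, "normal")

plot(ecdf(x), main = "ECDF vs. fitted CDFs", xlab = "x", ylab = "F(x)")
curve(plnorm(t, meanlog = fit.ln$estimate["meanlog"], sdlog = fit.ln$estimate["sdlog"]),
      xname = "t", add = TRUE, col = "blue")
curve(pnorm(t, mean = fit.n$estimate["mean"], sd = fit.n$estimate["sd"]),
      xname = "t", add = TRUE, col = "red")
legend("bottomright", c("ECDF", "lognormal", "normal"),
       col = c("black", "blue", "red"), lty = 1)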
722
How to determine which distribution fits my data best?
Apart from the above-mentioned ways, another approach is to fit as many distributions as you can, estimate their parameters, then compare the AIC and select the model that best fits your data. You don't need to do that on your own; there are several packages available in R and Python. In Python, Fitter may be used, and in R, univariateML seems nice to me. You can have a look at some examples of fitting different distributions and finding the best one using univariateML here and here.
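The packages above automate essentially the following loop. A hand-rolled sketch with MASS::fitdistr (the candidate list and the sample x are placeholders for illustration):

library(MASS)   # fitdistr() for maximum likelihood fits

candidates <- c("normal", "lognormal", "gamma", "weibull", "exponential")
fits <- lapply(candidates, function(d) fitdistr(x, d))
names(fits) <- candidates

# Lower AIC = better trade-off between goodness of fit and number of parameters
sort(sapply(fits, AIC))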
723
Why the sudden fascination with tensors?
This is not an answer to your question, but an extended comment on the issue that has been raised here in comments by different people, namely: are machine learning "tensors" the same thing as tensors in mathematics?

Now, according to Cichocki 2014, Era of Big Data Processing: A New Approach via Tensor Networks and Tensor Decompositions, and Cichocki et al. 2014, Tensor Decompositions for Signal Processing Applications,

A higher-order tensor can be interpreted as a multiway array, [...]
A tensor can be thought of as a multi-index numerical array, [...]
Tensors (i.e., multi-way arrays) [...]

So in machine learning / data processing a tensor appears to be simply defined as a multidimensional numerical array. An example of such a 3D tensor would be $1000$ video frames of $640\times 480$ size. A usual $n\times p$ data matrix is an example of a 2D tensor according to this definition.

This is not how tensors are defined in mathematics and physics!

A tensor can be defined as a multidimensional array obeying certain transformation laws under the change of coordinates (see Wikipedia or the first sentence in the MathWorld article). A better but equivalent definition (see Wikipedia) says that a tensor on a vector space $V$ is an element of $V\otimes\ldots\otimes V^*$. Note that this means that, when represented as multidimensional arrays, tensors are of size $p\times p$ or $p\times p\times p$ etc., where $p$ is the dimensionality of $V$. All tensors well known in physics are like that: the inertia tensor in mechanics is $3\times 3$, the electromagnetic tensor in special relativity is $4\times 4$, the Riemann curvature tensor in general relativity is $4\times 4\times 4\times 4$. Curvature and electromagnetic tensors are actually tensor fields, which are sections of tensor bundles (see e.g. here, but it gets technical), but all of that is defined over a vector space $V$.

Of course one can construct a tensor product $V\otimes W$ of a $p$-dimensional $V$ and a $q$-dimensional $W$, but its elements are usually not called "tensors", as stated e.g. here on Wikipedia:

In principle, one could define a "tensor" simply to be an element of any tensor product. However, the mathematics literature usually reserves the term tensor for an element of a tensor product of a single vector space $V$ and its dual, as above.

One example of a real tensor in statistics would be a covariance matrix. It is $p\times p$ and transforms in a particular way when the coordinate system in the $p$-dimensional feature space $V$ is changed. It is a tensor. But an $n\times p$ data matrix $X$ is not.

But can we at least think of $X$ as an element of the tensor product $W\otimes V$, where $W$ is $n$-dimensional and $V$ is $p$-dimensional? For concreteness, let rows in $X$ correspond to people (subjects) and columns to some measurements (features). A change of coordinates in $V$ corresponds to a linear transformation of the features, and this is done in statistics all the time (think of PCA). But a change of coordinates in $W$ does not seem to correspond to anything meaningful (and I urge anybody who has a counter-example to let me know in the comments). So it does not seem that there is anything gained by considering $X$ as an element of $W\otimes V$. And indeed, the common notation is to write $X\in\mathbb R^{n\times p}$, where $\mathbb R^{n\times p}$ is the set of all $n\times p$ matrices (which, by the way, are defined as rectangular arrays of numbers, without any assumed transformation properties).
My conclusion is: (a) machine learning tensors are not math/physics tensors, and (b) it is mostly not useful to see them as elements of tensor products either. Instead, they are multidimensional generalizations of matrices. Unfortunately, there is no established mathematical term for that, so it seems that this new meaning of "tensor" is now here to stay.
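As a small numerical check of the covariance-matrix example above (a sketch with simulated data, purely illustrative): under a change of basis $A$ of the feature space, the sample covariance transforms as $A'CA$, which is exactly the tensorial transformation law being referred to.

set.seed(1)
X <- matrix(rnorm(200 * 3), ncol = 3)      # an n x p data matrix (n = 200, p = 3)
A <- qr.Q(qr(matrix(rnorm(9), 3, 3)))      # an orthonormal change of basis in the feature space V

C_old <- cov(X)                            # covariance in the original coordinates
C_new <- cov(X %*% A)                      # covariance after transforming the features

all.equal(C_new, t(A) %*% C_old %*% A)     # TRUE: the tensorial transformation law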
724
Why the sudden fascination with tensors?
Tensors often offer more natural representations of data, e.g., consider video, which consists of obviously correlated images over time. You can turn this into a matrix, but it's just not natural or intuitive (what does a factorization of some matrix-representation of video mean?). Tensors are trending for several reasons:
- our understanding of multilinear algebra is improving rapidly, specifically in various types of factorizations, which in turn helps us to identify new potential applications (e.g., multiway component analysis)
- software tools are emerging (e.g., Tensorlab) and are being welcomed
- Big Data applications can often be solved using tensors, for example recommender systems, and Big Data itself is hot
- increases in computational power, as some tensor operations can be hefty (this is also one of the major reasons why deep learning is so popular now)
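As a toy illustration of the video point above (dimensions are made up): a clip is naturally a 3-way array, and matricizing it is always possible but hides the spatial structure of each frame.

# A toy "video": 10 frames of 48 x 64 greyscale images, stored as a 3-way array
frames <- array(rnorm(48 * 64 * 10), dim = c(48, 64, 10))

# Flattening it into a matrix (one column per frame) is possible,
# but each column is now just a flat list of 3072 pixels
flat <- matrix(frames, nrow = 48 * 64, ncol = 10)
dim(flat)   # 3072   10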
725
Why the sudden fascination with tensors?
I think your question should be matched with an answer that is equally free-flowing and open-minded as the question itself. So, here are my two analogies.

First, unless you're a pure mathematician, you were probably taught univariate probabilities and statistics first. For instance, most likely your first OLS example was on a model like this: $$y_i=a+bx_i+e_i$$ Most likely, you went through deriving the estimates by actually minimizing the sum of squared errors: $$TSS=\sum_i(y_i-\bar a-\bar b x_i)^2$$ Then you write the FOCs for the parameters and get the solution: $$\frac{\partial TSS}{\partial \bar a}=0$$ Then later you're told that there's an easier way of doing this with vector (matrix) notation: $$y=Xb+e$$ and the TSS becomes: $$TSS=(y-X\bar b)'(y-X\bar b)$$ The FOCs are: $$2X'(y-X\bar b)=0$$ And the solution is $$\bar b=(X'X)^{-1}X'y$$ If you're good at linear algebra, you'll stick to the second approach once you've learned it, because it's actually easier than writing down all the sums in the first approach, especially once you get into multivariate statistics. Hence my analogy is that moving from matrices to tensors is similar to moving from vectors to matrices: if you know tensors, some things will look easier this way.

Second, where do the tensors come from? I'm not sure about the whole history of this thing, but I learned them in theoretical mechanics. Certainly, we had a course on tensors, but I didn't understand what the deal was with all these fancy ways to swap indices in that math course. It all started to make sense in the context of studying tension forces.

So, in physics they also start with a simple example of pressure defined as force per unit area, hence: $$F=p\cdot dS$$ This means you can calculate the force vector $F$ by multiplying the pressure $p$ (a scalar) by the unit of area $dS$ (a normal vector). That is when we have only one infinite plane surface. In this case there's just one perpendicular force. A large balloon would be a good example.

However, if you're studying tension inside materials, you are dealing with all possible directions and surfaces. In this case you have forces on any given surface pulling or pushing in all directions, not only perpendicular ones. Some surfaces are torn apart by tangential forces "sideways", etc. So, your equation becomes: $$F=P\cdot dS$$ The force is still a vector $F$ and the surface area is still represented by its normal vector $dS$, but $P$ is a tensor now, not a scalar. Ok, a scalar and a vector are also tensors :)

Another place where tensors show up naturally is covariance or correlation matrices. Just think of this: how do we transform one correlation matrix $C_0$ into another one $C_1$? You realize we can't just do it this way: $$C_\theta(i,j)=C_0(i,j)+ \theta(C_1(i,j)-C_0(i,j)),$$ where $\theta\in[0,1]$, because we need to keep all $C_\theta$ positive semi-definite. So, we'd have to find the path $\delta C_\theta$ such that $C_1=C_0+\int_\theta\delta C_\theta$, where $\delta C_\theta$ is a small disturbance to a matrix. There are many different paths, and we could search for the shortest ones. That's how we get into Riemannian geometry, manifolds, and... tensors.

UPDATE: what's a tensor, anyway?

@amoeba and others got into a lively discussion of the meaning of tensor and whether it's the same as an array. So, I thought an example is in order. Say, we go to a bazaar to buy groceries, and there are two merchant dudes, $d_1$ and $d_2$.
We noticed that if we pay $x_1$ dollars to $d_1$ and $x_2$ dollars to $d_2$, then $d_1$ sells us $y_1=2x_1-x_2$ pounds of apples, and $d_2$ sells us $y_2=-0.5x_1+2x_2$ pounds of oranges. For instance, if we pay both 1 dollar, i.e. $x_1=x_2=1$, then we must get 1 pound of apples and 1.5 pounds of oranges. We can express this relation in the form of a matrix $P$:

 2    -1
-0.5   2

Then the merchants produce this much in apples and oranges if we pay them $x$ dollars: $$y=Px$$ This works exactly like matrix-by-vector multiplication.

Now, let's say that instead of buying the goods from these merchants separately, we declare that there are two spending bundles we utilize. We either pay both 0.71 dollars, or we pay $d_1$ 0.71 dollars and demand 0.71 dollars from $d_2$ back. Like in the initial case, we go to a bazaar and spend $z_1$ on bundle one and $z_2$ on bundle two.

So, let's look at an example where we spend just $z_1=2$ on bundle 1. In this case, the first merchant gets $x_1=1$ dollars, and the second merchant gets the same $x_2=1$. Hence, we must get the same amounts of produce as in the example above, mustn't we? Maybe, maybe not. You noticed that the $P$ matrix is not diagonal. This indicates that, for some reason, how much one merchant charges for his produce depends also on how much we paid the other merchant. They must get an idea of how much we pay them, maybe through rumors? In this case, if we start buying in bundles they'll know for sure how much we pay each of them, because we declare our bundles to the bazaar. In this case, how do we know that the $P$ matrix should stay the same? Maybe with full information about our payments on the market the pricing formulas would change too! This will change our matrix $P$, and there's no way to say how exactly.

This is where we enter tensors. Essentially, with tensors we say that the calculations do not change when we start trading in bundles instead of dealing directly with each merchant. That's the constraint that will impose transformation rules on $P$, which we'll call a tensor.

In particular, we may notice that we have an orthonormal basis $\bar d_1,\bar d_2$, where $\bar d_i$ means a payment of 1 dollar to merchant $i$ and nothing to the other. We may also notice that the bundles form an orthonormal basis $\bar d_1',\bar d_2'$ too, which is a simple rotation of the first basis by 45 degrees counterclockwise. It's also a PC decomposition of the first basis. Hence, we are saying that switching to the bundles is simply a change of coordinates, and it should not change the calculations. Note that this is an outside constraint that we imposed on the model. It didn't come from pure math properties of matrices.

Now, our shopping can be expressed as a vector $x=x_1 \bar d_1+x_2\bar d_2$. The vectors are tensors too, BTW. The tensor is interesting: it can be represented as $$P=\sum_{ij}p_{ij}\bar d_i\bar d_j,$$ and the groceries as $y=y_1 \bar d_1+y_2 \bar d_2$. With groceries, $y_i$ means pounds of produce from merchant $i$, not dollars paid.

Now, when we change the coordinates to bundles, the tensor equation stays the same: $$y=Pz$$ That's nice, but the payment vectors are now in a different basis: $$z=z_1 \bar d_1'+z_2\bar d_2',$$ while we may keep the produce vectors in the old basis $y=y_1 \bar d_1+y_2 \bar d_2$. The tensor changes too: $$P=\sum_{ij}p_{ij}'\bar d_i'\bar d_j'.$$ It's easy to derive how the tensor must be transformed: it's going to be $PA$, where the rotation matrix is defined as $\bar d'=A\bar d$. In our case it's the coefficient matrix of the bundles.
We can work out the formulas for tensor transformation, and they'll yield the same result as in the examples with $x_1=x_2=1$ and $z_1=0.71,z_2=0$.
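Returning to the first analogy above (OLS in matrix notation): the closed-form solution $\bar b=(X'X)^{-1}X'y$ can be checked directly against lm(). A small sketch with simulated data, purely illustrative:

set.seed(42)
n  <- 100
x1 <- rnorm(n); x2 <- rnorm(n)
y  <- 1 + 2 * x1 - 0.5 * x2 + rnorm(n)

X     <- cbind(1, x1, x2)                       # design matrix with an intercept column
b_hat <- drop(solve(t(X) %*% X, t(X) %*% y))    # (X'X)^{-1} X'y

cbind(closed_form = b_hat, lm_fit = coef(lm(y ~ x1 + x2)))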
726
Why the sudden fascination with tensors?
As someone who studies and builds neural networks and has repeatedly asked this question, I've come to the conclusion that we borrow useful aspects of tensor notation simply because they make derivation a lot easier and keep our gradients in their native shapes. The tensor chain rule is one of the most elegant derivation tools I have ever seen. Further tensor notations encourage computationally efficient simplifications that are simply nightmarish to find when using common extended versions of vector calculus. In Vector/Matrix calculus for instance there are 4 types of matrix products (Hadamard, Kronecker, Ordinary, and Elementwise) but in tensor calculus there is only one type of multiplication yet it covers all matrix multiplications and more. If you want to be generous, interpret tensor to mean multi-dimensional array that we intend to use tensor based calculus to find derivatives for, not that the objects we are manipulating are tensors. In all honesty we probably call our multi-dimensional arrays tensors because most machine learning experts don't care that much about adhering to the definitions of high level math or physics. The reality is we are just borrowing well developed Einstein Summation Conventions and Calculi which are typically used when describing tensors and don't want to say Einstein summation convention based calculus over and over again. Maybe one day we might develop a new set of notations and conventions that steal only what they need from tensor calculus specifically for analyzing neural networks, but as a young field that takes time.
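A small sketch of the "only one type of multiplication" point (illustrative only, not from any particular library): a single contraction over a shared index, C[i,k] = sum_j A[i,j] B[j,k], already reproduces the ordinary matrix product, and inner products or higher-order contractions are the same rule with different index choices.

# One contraction rule, written out as an explicit sum over the shared index j
contract <- function(A, B) {
  C <- matrix(0, nrow(A), ncol(B))
  for (i in seq_len(nrow(A)))
    for (k in seq_len(ncol(B)))
      C[i, k] <- sum(A[i, ] * B[, k])
  C
}

A <- matrix(rnorm(6), 2, 3)
B <- matrix(rnorm(12), 3, 4)
all.equal(contract(A, B), A %*% B)   # TRUE: the ordinary matrix product is one special case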
727
Why the sudden fascination with tensors?
Now I actually agree with most of the content of the other answers. But I'm going to play Devil's advocate on one point. Again, it will be free flowing, so apologies...

Google announced a program called TensorFlow for deep learning. This made me wonder what was 'tensor' about deep learning, as I couldn't make the connection to the definitions I'd seen. Deep learning models are all about transformations of elements from one space to another. E.g. if we consider two layers of some network, you might write co-ordinate $i$ of a transformed variable $y$ as a nonlinear function of the previous layer, using the fancy summation notation: $y_i = \sigma(\beta_i^j x_j)$. The idea is to chain together a bunch of such transformations in order to arrive at a useful representation of the original co-ordinates. So, for example, after the last transformation of an image a simple logistic regression will produce excellent classification accuracy, whereas on the raw image it definitely would not.

Now, the thing that seems to have been lost from sight is the invariance properties sought in a proper tensor, particularly when the dimensions of the transformed variables may differ from layer to layer. [E.g. some of the stuff I've seen on tensors makes no sense for non-square Jacobians - I may be lacking some methods.] What has been retained is the notion of transformations of variables, and that certain representations of a vector may be more useful than others for particular tasks. The analogy is whether it makes more sense to tackle a problem in Cartesian or polar co-ordinates.

EDIT in response to @Aksakal: The vector can't be perfectly preserved because of the changes in the numbers of coordinates. However, in some sense at least the useful information may be preserved under transformation. For example, with PCA we may drop a co-ordinate, so we can't invert the transformation, but the dimensionality reduction may be useful nonetheless. If all the successive transformations were invertible, you could map back from the penultimate layer to input space. As it is, I've only seen probabilistic models which enable that (RBMs), by sampling.
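As a rough sketch of the chained-transformation view in the notation above (my own illustration; the layer sizes are made up), each layer is just a contraction over the repeated index $j$ followed by a nonlinearity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.random.rand(10)            # original co-ordinates, dimension 10
beta1 = np.random.randn(7, 10)    # maps 10 co-ordinates to 7
beta2 = np.random.randn(3, 7)     # maps 7 co-ordinates to 3

# y_i = sigma(beta_i^j x_j): sum over the repeated index j
y = sigmoid(np.einsum('ij,j->i', beta1, x))   # first transformation
z = sigmoid(np.einsum('ij,j->i', beta2, y))   # second transformation
```

Note that the output dimension changes from layer to layer, which is exactly the point about non-square Jacobians made above.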
Why the sudden fascination with tensors?
Here is a lightly edited (for context) excerpt from Non-Negative Tensor Factorization with Applications to Statistics and Computer Vision, A. Shashua and T. Hazan, which gets to the heart of why at least some people are fascinated with tensors.

Any n-dimensional problem can be represented in two-dimensional form by concatenating dimensions. Thus, for example, the problem of finding a non-negative low rank decomposition of a set of images is a 3-NTF (Non-negative Tensor Factorization), with the images forming the slices of a 3D cube, but can also be represented as an NMF (Non-negative Matrix Factorization) problem by vectorizing the images (images forming columns of a matrix). There are two reasons why a matrix representation of a collection of images would not be appropriate:

1. Spatial redundancy (pixels, not necessarily neighboring, having similar values) is lost in the vectorization, thus we would expect a less efficient factorization, and
2. An NMF decomposition is not unique, therefore even if there exists a generative model (of local parts) the NMF would not necessarily move in that direction, which has been verified empirically by Chu, M., Diele, F., Plemmons, R., & Ragni, S., "Optimality, computation and interpretation of nonnegative matrix factorizations", SIAM Journal on Matrix Analysis, 2004. For example, invariant parts on the image set would tend to form ghosts in all the factors and contaminate the sparsity effect.

An NTF is almost always unique, thus we would expect the NTF scheme to move towards the generative model, and specifically not be influenced by invariant parts.
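A minimal sketch of the two views described above (my own illustration; the image stack is synthetic, and the NTF step is only described in comments rather than run):

```python
import numpy as np
from sklearn.decomposition import NMF

images = np.random.rand(100, 28, 28)        # 100 hypothetical 28x28 non-negative images

# NMF view: vectorize each image into a column of a matrix, discarding 2D structure
X = images.reshape(100, -1).T               # shape (784, 100)
nmf = NMF(n_components=10, init='random', random_state=0, max_iter=500)
W = nmf.fit_transform(X)                    # (784, 10) non-negative "parts"
H = nmf.components_                         # (10, 100) non-negative coefficients

# NTF view: keep the images as slices of a 3D cube (100 x 28 x 28) and apply a
# non-negative CP/PARAFAC decomposition to the cube itself (e.g. with a tensor
# library), so the 2D spatial structure of each image is preserved.
cube = images
```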
Why the sudden fascination with tensors?
[EDIT] Just discovered the book by Peter McCullagh, Tensor Methods in Statistics.

Tensors display interesting properties in unknown mixture identification in a signal (or an image), especially around the notion of the Canonical Polyadic (CP) tensor decomposition; see for instance Tensors: a Brief Introduction, P. Comon, 2014. The field is known under the name "blind source separation (BSS)":

Tensor decompositions are at the core of many Blind Source Separation (BSS) algorithms, either explicitly or implicitly. In particular, the Canonical Polyadic (CP) tensor decomposition plays a central role in identification of underdetermined mixtures. Despite some similarities, CP and Singular Value Decomposition (SVD) are quite different. More generally, tensors and matrices enjoy different properties, as pointed out in this brief introduction.

Some uniqueness results have been derived for third-order tensors recently: On the uniqueness of the canonical polyadic decomposition of third-order tensors (part 1, part 2), I. Domanov et al., 2013. Tensor decompositions are nowadays often connected to sparse decompositions, for instance by imposing structure on the decomposition factors (orthogonality, Vandermonde, Hankel) and low rank, to accommodate non-uniqueness. With an increasing need for incomplete data analysis and determination of complex measurements from sensor arrays, tensors are increasingly used for matrix completion, latent variable analysis and source separation.

Additional note: apparently, the Canonical Polyadic decomposition is also equivalent to the Waring decomposition of a homogeneous polynomial as a sum of powers of linear forms, with applications in system identification (block structured, parallel Wiener-Hammerstein or nonlinear state-space models).
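A small sketch (mine, not from the references above) of what a rank-R canonical polyadic (CP) model looks like for a third-order tensor, $T_{ijk} = \sum_r A_{ir} B_{jr} C_{kr}$; the dimensions are arbitrary:

```python
import numpy as np

# Build a third-order tensor of CP rank at most R from its factor matrices
I, J, K, R = 4, 5, 6, 3
A, B, C = np.random.rand(I, R), np.random.rand(J, R), np.random.rand(K, R)

T = np.einsum('ir,jr,kr->ijk', A, B, C)     # sum of R rank-one terms
print(T.shape)                               # (4, 5, 6)
```

Recovering the factors A, B, C from an observed T (under the uniqueness conditions mentioned above) is what the BSS literature exploits; tensor libraries such as TensorLy provide CP/PARAFAC routines for that task.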
Why the sudden fascination with tensors?
May I respectfully recommend my book: Kroonenberg, P.M., Applied Multiway Data Analysis, and Smilde et al., Multiway Analysis: Applications in the Chemical Sciences (both Wiley). Of interest may also be my article: Kroonenberg, P.M. (2014). History of multiway component analysis and three-way correspondence analysis. In Blasius, J. and Greenacre, M.J. (Eds.), Visualization and Verbalization of Data (pp. 77–94). New York: Chapman & Hall/CRC. ISBN 9781466589803. These references talk about multiway data rather than tensors, but refer to the same research area.
Why the sudden fascination with tensors?
What the term tensor means depends on the context it is used in:

Field            | Meaning
Machine learning | Multi-dimensional array (usually numeric)
Maths            | An algebraic object describing a (multilinear) relationship between sets of algebraic objects

The machine learning term is inspired by the fact that in a fixed basis/coordinate system, a tensor can be expressed as a multi-dimensional array. However, there are alternative representations, depending on the sub-field, with varying degrees of abstraction.

"Tensors, also known as multidimensional arrays, are generalizations of matrices to higher orders and are useful data representation architectures." — Tensors in Statistics, Annual Review of Statistics and Its Application (2021)

"Tensor: The primary data structure in TensorFlow programs. Tensors are N-dimensional (where N could be very large) data structures, most commonly scalars, vectors, or matrices. The elements of a Tensor can hold integer, floating-point, or string values."
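To make the machine-learning row of the table concrete, here is a trivial sketch (not part of the original answer): in that sense a "tensor" is nothing more than an n-dimensional numeric array.

```python
import numpy as np

# A mini-batch of 32 RGB images of size 64x64 is a 4-dimensional "tensor"
batch = np.zeros((32, 3, 64, 64), dtype=np.float32)
print(batch.ndim, batch.shape)    # 4 (32, 3, 64, 64)
```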
Why the sudden fascination with tensors?
It is true that people in machine learning do not view tensors with the same care as mathematicians and physicists. Here is a paper that may clarify this discrepancy: Comon, P., "Tensors: a brief introduction", IEEE Sig. Proc. Magazine, 31, May 2014.
What is the difference between a consistent estimator and an unbiased estimator?
To define the two terms without using too much technical language:

An estimator is consistent if, as the sample size increases, the estimates (produced by the estimator) "converge" to the true value of the parameter being estimated. To be slightly more precise - consistency means that, as the sample size increases, the sampling distribution of the estimator becomes increasingly concentrated at the true parameter value.

An estimator is unbiased if, on average, it hits the true parameter value. That is, the mean of the sampling distribution of the estimator is equal to the true parameter value.

The two are not equivalent: Unbiasedness is a statement about the expected value of the sampling distribution of the estimator. Consistency is a statement about "where the sampling distribution of the estimator is going" as the sample size increases. It certainly is possible for one condition to be satisfied but not the other - I will give two examples. For both examples consider a sample $X_1, ..., X_n$ from a $N(\mu, \sigma^2)$ population.

Unbiased but not consistent: Suppose you're estimating $\mu$. Then $X_1$ is an unbiased estimator of $\mu$ since $E(X_1) = \mu$. But, $X_1$ is not consistent since its distribution does not become more concentrated around $\mu$ as the sample size increases - it's always $N(\mu, \sigma^2)$!

Consistent but not unbiased: Suppose you're estimating $\sigma^2$. The maximum likelihood estimator is $$ \hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2 $$ where $\overline{X}$ is the sample mean. It is a fact that $$ E(\hat{\sigma}^2) = \frac{n-1}{n} \sigma^2 $$ which can be derived using the information here. Therefore $\hat{\sigma}^2$ is biased for any finite sample size. We can also easily derive that $${\rm var}(\hat{\sigma}^2) = \frac{2\sigma^4(n-1)}{n^2}$$ From these facts we can informally see that the distribution of $\hat{\sigma}^2$ is becoming more and more concentrated at $\sigma^2$ as the sample size increases, since the mean is converging to $\sigma^2$ and the variance is converging to $0$. (Note: this does constitute a proof of consistency, using the same argument as the one used in the answer here.)
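A small simulation sketch (my own addition, not part of the answer) makes both examples visible; the values $\mu=5$ and $\sigma^2=4$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma2 = 5.0, 4.0
n_rep = 20_000                               # number of simulated samples per n

for n in (10, 100, 1000):
    x = rng.normal(mu, np.sqrt(sigma2), size=(n_rep, n))

    # Unbiased but not consistent: the estimator X_1 (first observation only)
    x1 = x[:, 0]                             # mean stays near mu, spread never shrinks
    # Consistent but not unbiased: the ML estimator of sigma^2 (divides by n)
    s2_ml = x.var(axis=1)                    # numpy default ddof=0 is the 1/n version

    print(n, x1.mean(), x1.var(), s2_ml.mean())
    # x1.mean() ~ 5 and x1.var() ~ 4 for every n;
    # s2_ml.mean() ~ (n-1)/n * 4, approaching 4 as n grows
```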
What is the difference between a consistent estimator and an unbiased estimator?
Consistency of an estimator means that as the sample size gets large the estimate gets closer and closer to the true value of the parameter. Unbiasedness is a finite-sample property that is not affected by increasing sample size. An estimator is unbiased if its expected value equals the true parameter value. This holds for all sample sizes and is exact, whereas consistency is asymptotic and holds only approximately in finite samples. To say that an estimator is unbiased means that if you took many samples of size $n$ and computed the estimate each time, the average of all these estimates would be close to the true parameter value, and would get closer as the number of times you do this increases. The sample mean is both consistent and unbiased. The sample estimate of the standard deviation is biased but consistent.

Update following the discussion in the comments with @cardinal and @Macro: As described below, there are apparently pathological cases where the variance does not have to go to 0 for the estimator to be strongly consistent, and the bias doesn't even have to go to 0 either.
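The sample standard deviation mentioned above can be checked the same way; a brief sketch (mine, with arbitrary $\sigma=2$):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
n_rep = 50_000

for n in (5, 20, 100, 1000):
    x = rng.normal(0.0, sigma, size=(n_rep, n))
    s = x.std(axis=1, ddof=1)                 # usual sample standard deviation
    print(n, s.mean())                        # below sigma=2 (biased), bias shrinks with n
```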
What is the difference between a consistent estimator and an unbiased estimator?
If we take a sample of size $n$ and calculate the difference between the estimator and the true parameter, this gives a random variable for each $n$. If we take the sequence of these random variables as $n$ increases, consistency (in the mean-square sense) means that both the mean and the variance of this difference go to zero as $n$ goes to infinity. Unbiased means that this random variable for a particular $n$ has mean zero.

So one difference is that bias is a property for a particular $n$, while consistency refers to the behavior as $n$ goes to infinity. Another difference is that bias has to do just with the mean (an unbiased estimator can be wildly wrong, as long as the errors cancel out on average), while consistency also says something about the variance.

An estimator can be unbiased for all $n$ but inconsistent if the variance doesn't go to zero, and it can be consistent but biased for all $n$ if the bias for each $n$ is nonzero, but going to zero. For instance, if the bias is $\frac 1 n$, the bias is going to zero, but it isn't ever equal to zero; a sequence can have a limit that it doesn't ever actually equal.
Training on the full dataset after cross-validation?
The way to think of cross-validation is as estimating the performance obtained using a method for building a model, rather than for estimating the performance of a model.

If you use cross-validation to estimate the hyper-parameters of a model (the $\alpha$s) and then use those hyper-parameters to fit a model to the whole dataset, then that is fine, provided that you recognise that the cross-validation estimate of performance is likely to be (possibly substantially) optimistically biased. This is because part of the model (the hyper-parameters) has been selected to minimise the cross-validation performance, so if the cross-validation statistic has a non-zero variance (and it will) there is the possibility of over-fitting the model selection criterion.

If you want to choose the hyper-parameters and estimate the performance of the resulting model then you need to perform a nested cross-validation, where the outer cross-validation is used to assess the performance of the model, and in each fold cross-validation is used to determine the hyper-parameters separately in that fold. You build the final model by using cross-validation on the whole set to choose the hyper-parameters and then build the classifier on the whole dataset using the optimized hyper-parameters. This is of course computationally expensive, but worth it, as the bias introduced by improper performance estimation can be large. See my paper: G. C. Cawley and N. L. C. Talbot, "Over-fitting in model selection and subsequent selection bias in performance evaluation", Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010.

However, it is still possible to have over-fitting in model selection (nested cross-validation just allows you to test for it). A method I have found useful is to add a regularisation term to the cross-validation error that penalises hyper-parameter values likely to result in overly-complex models; see G. C. Cawley and N. L. C. Talbot, "Preventing over-fitting in model selection via Bayesian regularisation of the hyper-parameters", Journal of Machine Learning Research, vol. 8, pp. 841-861, April 2007.

So the answers to your question are (i) yes, you should use the full dataset to produce your final model, as the more data you use the more likely it is to generalise well, but (ii) make sure you obtain an unbiased performance estimate via nested cross-validation and potentially consider penalising the cross-validation statistic to further avoid over-fitting in model selection.
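A minimal nested cross-validation sketch with scikit-learn (my own illustration, not taken from the papers above; the estimator, parameter grid and synthetic data are placeholders):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

inner_cv = KFold(n_splits=5, shuffle=True, random_state=1)   # chooses hyper-parameters
outer_cv = KFold(n_splits=5, shuffle=True, random_state=2)   # assesses the whole procedure

param_grid = {'C': [0.1, 1, 10], 'gamma': [1e-3, 1e-2, 1e-1]}

# Outer CV estimates the performance of "fit + hyper-parameter selection" as a unit
search = GridSearchCV(SVC(), param_grid, cv=inner_cv)
outer_scores = cross_val_score(search, X, y, cv=outer_cv)
print(outer_scores.mean())

# Final model: run the search once on the full dataset; GridSearchCV refits
# on all the data with the chosen hyper-parameters automatically
final_model = GridSearchCV(SVC(), param_grid, cv=inner_cv).fit(X, y)
```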
Training on the full dataset after cross-validation?
Just to add to the answer by @mark999, Max Kuhn's caret package (Classification and Regression Training) is the most comprehensive source in R for model selection based on bootstrap cross-validation or N-fold CV, and some other schemes as well. Not to disregard the greatness of the rms package, but caret lets you fit pretty much every learning method available in R, whereas validate only works with rms methods (I think). The caret package is a single infrastructure to pre-process data, fit and evaluate any popular model, hence it is simple to use for all methods and provides graphical assessment of many performance measures (something that, next to the overfit problem, might influence the model selection considerably as well) over your grid, as well as variable importance. See the package vignettes to get started (it is very simple to use): Data Preprocessing, Variable Selection with caret, Model Building with caret, Variable Importance. You may also view the caret website for more information on the package, and specific implementation examples: Official caret website.
Training on the full dataset after cross-validation?
I believe that Frank Harrell would recommend bootstrap validation rather than cross validation. Bootstrap validation would allow you to validate the model fitted on the full data set, and is more stable than cross validation. You can do it in R using validate in Harrell's rms package. See the book "Regression Modeling Strategies" by Harrell and/or "An Introduction to the Bootstrap" by Efron and Tibshirani for more information.
Training on the full dataset after cross-validation?
I think you have a bunch of different questions here:

"The problem is that, if I use all points in my dataset for training, I can't check if this new learned model $\beta_{\text{full}}$ overfits!"

The thing is, you can use (one) validation step only for one thing: either for parameter optimization, (x)or for estimating generalization performance. So, if you do parameter optimization by cross validation (or any other kind of data-driven parameter determination), you need test samples that are independent of those training and optimization samples. Dikran calls it nested cross validation; another name is double cross validation. Or, of course, an independent test set.

"So here's the question for this post: Is it a good idea to train with the full dataset after k-fold cross-validation? Or is it better instead to stick with one of the models learned in one of the cross-validation splits for $\alpha_{\text{best}}$?"

Using one of the cross validation models usually is worse than training on the full set (at least if your learning curve performance $= f(n_{\text{samples}})$ is still increasing. In practice, it is: if it wasn't, you would probably have set aside an independent test set.) If you observe a large variation between the cross validation models (with the same parameters), then your models are unstable. In that case, aggregating the models can help and actually be better than using the one model trained on the whole data.

Update: This aggregation is the idea behind bagging applied to resampling without replacement (cross validation) instead of to resampling with replacement (bootstrap / out-of-bootstrap validation). Here's a paper where we used this technique: Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6

"Perhaps most importantly, how can I train with all points in my dataset and still fight overfitting?"

By being very conservative with the degrees of freedom allowed for the "best" model, i.e. by taking into account the (random) uncertainty on the optimization cross validation results. If the d.f. are actually appropriate for the cross validation models, chances are good that they are not too many for the larger training set. The pitfall is that parameter optimization is actually multiple testing. You need to guard against accidentally good-looking parameter sets.
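An illustrative sketch of the aggregation idea (mine, not from the paper above): keep the k models fitted on the cross-validation folds and average their predictions, bagging-style.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_new = np.random.default_rng(0).normal(size=(5, 10))   # some new observations

# Fit one model per cross-validation fold (same hyper-parameters in each fold)
fold_models = []
for train_idx, _ in KFold(n_splits=5, shuffle=True, random_state=1).split(X):
    fold_models.append(LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx]))

# Aggregate: average the predicted probabilities of the k fold models
avg_proba = np.mean([m.predict_proba(X_new)[:, 1] for m in fold_models], axis=0)
```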
Training on the full dataset after cross-validation?
What you do is not cross validation, but rather some kind of stochastic optimization. The idea of CV is to simulate performance on unseen data by performing several rounds of building the model on a subset of objects and testing on the remaining ones. The somewhat averaged results of all rounds are an approximation of the performance of a model trained on the whole set. In your case of model selection, you should perform a full CV for each parameter set, and thus get an on-full-set performance approximation for each setup, so seemingly the thing you wanted to have. However, note that it is not at all guaranteed that the model with the best approximated accuracy will in fact be the best -- you may cross-validate the whole model selection procedure to see that there exists some range in parameter space for which the differences in model accuracies are not significant.
Why do we need sigma-algebras to define probability spaces?
To Xi'an's first point: When you're talking about $\sigma$-algebras, you're asking about measurable sets, so unfortunately any answer must focus on measure theory. I'll try to build up to that gently, though.

A theory of probability admitting all subsets of uncountable sets will break mathematics

Consider this example. Suppose you have a unit square in $\mathbb{R}^2$, and you're interested in the probability of randomly selecting a point that is a member of a specific set in the unit square. In lots of circumstances, this can be readily answered based on a comparison of areas of the different sets. For example, we can draw some circles, measure their areas, and then take the probability as the fraction of the square falling in the circle. Very simple.

But what if the area of the set of interest is not well-defined? If the area is not well-defined, then we can reason to two different but completely valid (in some sense) conclusions about what the area is. So we could have $P(A)=1$ on the one hand and $P(A)=0$ on the other hand, which implies $0=1$. This breaks all of math beyond repair. You can now prove $5<0$ and a number of other preposterous things. Clearly this isn't too useful.

$\boldsymbol{\sigma}$-algebras are the patch that fixes math

What is a $\sigma$-algebra, precisely? It's actually not that frightening. It's just a definition of which sets may be considered as events. Elements not in $\mathscr{F}$ simply have no defined probability measure. Basically, $\sigma$-algebras are the "patch" that lets us avoid some pathological behaviors of mathematics, namely non-measurable sets. The three requirements of a $\sigma$-field can be considered as consequences of what we would like to do with probability. A $\sigma$-field is a set that has three properties:

1. Closure under countable unions.
2. Closure under countable intersections.
3. Closure under complements.

The countable unions and countable intersections components are direct consequences of the non-measurable set issue. Closure under complements is a consequence of the Kolmogorov axioms: if $P(A)=2/3$, then $P(A^c)$ ought to be $1/3$. But without (3), it could happen that $P(A^c)$ is undefined. That would be strange. Closure under complements and the Kolmogorov axioms let us say things like $P(A\cup A^c)=P(A)+1-P(A)=1$. Finally, since we are considering events in relation to $\Omega$, we further require that $\Omega\in\mathscr{F}$.

Good news: $\boldsymbol{\sigma}$-algebras are only strictly necessary for uncountable sets

But! There's good news here, also. Or, at least, a way to skirt the issue. We only need $\sigma$-algebras if we're working in a set with uncountable cardinality. If we restrict ourselves to countable sets, then we can take $\mathscr{F}=2^\Omega$, the power set of $\Omega$, and we won't have any of these problems because for countable $\Omega$, $2^\Omega$ consists only of measurable sets. (This is alluded to in Xi'an's second comment.) You'll notice that some textbooks will actually commit a subtle sleight-of-hand here, and only consider countable sets when discussing probability spaces. Additionally, in geometric problems in $\mathbb{R}^n$, it's perfectly sufficient to only consider $\sigma$-algebras composed of sets for which the $\mathcal{L}^n$ measure is defined. To ground this somewhat more firmly, $\mathcal{L}^n$ for $n=1,2,3$ corresponds to the usual notions of length, area and volume.
So what I'm saying in the previous example is that the set needs to have a well-defined area for it to have a geometric probability assigned to it. And the reason is this: if we admit non-measurable sets, then we can end up in situations where we can assign probability 1 to some event based on one proof, and probability 0 to the same event based on some other proof.

But don't let the connection to uncountable sets confuse you! A common misconception is that $\sigma$-algebras are countable sets. In fact, they may be countable or uncountable. Consider this illustration: as before, we have a unit square. Define $$\mathscr{F}=\text{All subsets of the unit square with defined $\mathcal{L}^2$ measure}.$$ You can draw a square $B$ with side length $s$ for all $s \in (0,1)$, and with one corner at $(0,0)$. It should be clear that this square is a subset of the unit square. Moreover, all of these squares have defined area, so these squares are elements of $\mathscr{F}$. But it should also be clear that there are uncountably many such squares $B$, each with defined Lebesgue measure, so $\mathscr{F}$ is uncountable. As a practical matter, simply stating that you only consider Lebesgue-measurable sets is often enough to gain headway on the problem of interest.

But wait, what's a non-measurable set?

I'm afraid I can only shed a little bit of light on this myself. But the Banach-Tarski paradox (sometimes the "sun and pea" paradox) can help us some:

Given a solid ball in 3-dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them, without changing their shape. However, the pieces themselves are not "solids" in the usual sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces. A stronger form of the theorem implies that given any two "reasonable" solid objects (such as a small ball and a huge ball), either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the Sun" and called the "pea and the Sun paradox".

So if you're working with probabilities in $\mathbb{R}^3$ and you're using the geometric probability measure (the ratio of volumes), you want to work out the probability of some event. But you'll struggle to define that probability precisely, because you can rearrange the sets of your space to change volumes! If probability depends on volume, and you can change the volume of the set to be the size of the sun or the size of a pea, then the probability will also change. So no event will have a single probability ascribed to it. Even worse, you can rearrange $S\in\Omega$ such that the volume of $S$ has $V(S)>V(\Omega)$, which implies that the geometric probability measure reports a probability $P(S)>1$, in flagrant violation of the Kolmogorov axioms, which require that probability has measure 1.

To resolve this paradox, one could make one of four concessions:

1. The volume of a set might change when it is rotated.
2. The volume of the union of two disjoint sets might be different from the sum of their volumes.
3. The axioms of Zermelo–Fraenkel set theory with the axiom of choice (ZFC) might have to be altered.
4. Some sets might be tagged "non-measurable", and one would need to check whether a set is "measurable" before talking about its volume.

Option (1) doesn't help us define probabilities, so it's out. Option (2) violates the second Kolmogorov axiom, so it's out. Option (3) seems like a terrible idea because ZFC fixes so many more problems than it creates. But option (4) seems attractive: if we develop a theory of what is and is not measurable, then we will have well-defined probabilities in this problem! This brings us back to measure theory, and our friend the $\sigma$-algebra.
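A small illustration (my own addition, not part of the answer) of the geometric probability measure on the unit square discussed at the start: for a nicely measurable set $A$, $P(A)$ is just its area, which a Monte Carlo estimate reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, size=(1_000_000, 2))     # uniform points in the unit square

# Event A: the point falls in the disc of radius 0.3 centred at (0.5, 0.5)
in_disc = ((pts - 0.5) ** 2).sum(axis=1) <= 0.3 ** 2
print(in_disc.mean(), np.pi * 0.3 ** 2)              # estimate vs exact area ~ 0.2827
```

The whole point of the answer is that this only works because the disc is Lebesgue-measurable; for a non-measurable set no such well-defined "fraction of the square" exists.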
Why do we need sigma-algebras to define probability spaces? To Xi'an's first point: When you're talking about $\sigma$-algebras, you're asking about measurable sets, so unfortunately any answer must focus on measure theory. I'll try to build up to that gently,
742
Why do we need sigma-algebras to define probability spaces?
The underlying idea (in very practical terms) is simple. Suppose you are a statistician working with some survey. Let's suppose the survey has some questions about age, but only asks the respondent to identify his age within some given intervals, like $[0,18), [18, 25), [25,34), \dots$. Let's forget the other questions. This questionnaire defines an "event space", your $(\Omega,F)$. The sigma-algebra $F$ codifies all the information that can be obtained from the questionnaire; so, for the age question (and for now we ignore all other questions), it will contain the interval $[18,25)$ but not other intervals like $[20,30)$, since from the information obtained by the questionnaire we cannot answer questions like: does the respondent's age belong to $[20,30)$ or not? More generally, a set is an event (belongs to $F$) if and only if we can decide whether a sample point belongs to that set or not. Now, let us define random variables with values in a second event space, $(\Omega', F')$. As an example, take this to be the real line with the usual (Borel) sigma-algebra. Then an (uninteresting) function which is not a random variable is $f$: "respondent's age is a prime number", coded as 1 if the age is prime and 0 otherwise. Now, $f^{-1}(1)$ does not belong to $F$, so $f$ is not a random variable. The reason is simple: we cannot decide from the information in the questionnaire whether the respondent's age is prime or not! Now you can make more interesting examples yourself. Why do we require $F$ to be a sigma-algebra? Let us say we want to ask two questions of the data: 'is respondent number 3 18 years or older?', 'is respondent 3 a female?'. Let the questions define two events (sets in $F$) $A$ and $B$, the sets of sample points giving a "yes" answer to each question. Now let us ask the conjunction of the two questions: 'is respondent 3 a female and 18 years or older?'. That question is represented by the set intersection $A \cap B$. In a similar manner, disjunctions are represented by the set union $A \cup B$. Requiring closure under countable intersections and unions lets us ask countable conjunctions or disjunctions. And negating a question is represented by the complementary set. That gives us a sigma-algebra. I first saw this kind of introduction in the very good book by Peter Whittle, "Probability via Expectation" (Springer). EDIT Trying to answer whuber's question in a comment: "I was a little taken aback at the end, though, when I encountered this assertion: 'requiring closedness for countable intersections and unions lets us ask countable conjunctions or disjunctions.' This seems to get at the heart of the issue: why would anyone want to construct such an infinitely complicated event?" Well, why? Restrict ourselves now to discrete probability, let's say, for convenience, coin tossing. Tossing the coin a finite number of times, all events we can describe using the coin can be expressed via events of the type "heads on toss $i$", "tails on toss $i$", and a finite number of "and"s and "or"s. So, in this situation, we do not need $\sigma$-algebras; algebras of sets are enough. So, is there any situation, in this context, where $\sigma$-algebras arise? In practice, even if we can only toss the coin a finite number of times, we develop approximations to probabilities via limit theorems when $n$, the number of tosses, grows without bound. So have a look at the proof of the central limit theorem for this case, the Laplace-de Moivre theorem.
It can be proved via approximations using only algebras; no $\sigma$-algebra is needed. The weak law of large numbers can be proved via Chebyshev's inequality, and for that we only need to compute the variance for finite $n$. But for the strong law of large numbers, the event we prove to have probability one can only be expressed via a countably infinite number of "and"s and "or"s, so for the strong law of large numbers we need $\sigma$-algebras. But do we really need the strong law of large numbers? According to one answer here, maybe not. In a way, this points to a very big conceptual difference between the strong and the weak law of large numbers: the strong law is not directly empirically meaningful, since it is about actual convergence, which can never be empirically verified. The weak law, on the other hand, is about the quality of the approximation increasing with $n$, with numerical bounds for finite $n$, so it is more empirically meaningful. Thus all practical use of discrete probability could do without $\sigma$-algebras. For the continuous case, I am not so sure.
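To make the "information in the questionnaire" idea concrete, here is a small R sketch (the ages and bracket boundaries are made up for illustration): only the age bracket is recorded, so questions expressible in terms of brackets can be answered, while a question like "is the age prime?" cannot.

set.seed(1)
true_age <- sample(15:80, 10, replace = TRUE)    # the exact ages are never recorded
brackets <- cut(true_age, breaks = c(0, 18, 25, 34, 120), right = FALSE)
brackets                                         # this is all the questionnaire stores
brackets == "[18,25)"                            # answerable from the recorded information
# "is the age prime?" would need true_age, which is not part of the recorded data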
743
Why do we need sigma-algebras to define probability spaces?
Why do probabilists need a $\boldsymbol{\sigma}$-algebra? The axioms of $\sigma$-algebras are pretty naturally motivated by probability. You want to be able to measure all Venn diagram regions, e.g., $A \cup B$, $(A\cup B)\cap C$. To quote from this memorable answer: The first axiom is that $\emptyset, X\in \sigma$. Well, you ALWAYS know the probability of nothing happening ($0$) or something happening ($1$). The second axiom is closure under complements. Let me offer a stupid example. Again, consider a coin flip, with $X=\{H,T\}$. Pretend I tell you that the $\sigma$-algebra for this flip is $\{\emptyset,X,\{H\}\}$. That is, I know the probability of NOTHING happening, of SOMETHING happening, and of a heads, but I DON'T know the probability of a tails. You would rightly call me a moron. Because if you know the probability of a heads, you automatically know the probability of a tails! If you know the probability of something happening, you know the probability of it NOT happening (the complement)! The last axiom is closure under countable unions. Let me give you another stupid example. Consider the roll of a die, with $X=\{1,2,3,4,5,6\}$. What if I were to tell you the $\sigma$-algebra for this is $\{\emptyset,X,\{1\},\{2\}\}$? That is, I know the probability of rolling a $1$ and the probability of rolling a $2$, but I don't know the probability of rolling a $1$ or a $2$. Again, you would justifiably call me an idiot (I hope the reason is clear). What happens when the sets are not disjoint, and what happens with uncountable unions, is a little messier, but I hope you can try to think of some examples. Why do you need countable additivity instead of just finite additivity, though? Well, it's not an entirely clean-cut case, but there are some solid reasons why. Why do probabilists need measures? At this point, you already have all the axioms for a measure: $\sigma$-additivity, non-negativity, a null empty set, and a $\sigma$-algebra as the domain. You might as well require $P$ to be a measure. Measure theory is already justified. People bring in Vitali's set and Banach-Tarski to explain why you need measure theory, but I think that's misleading. Vitali's set is only a problem for (non-trivial) measures that are translation-invariant, which probability spaces do not require. And Banach-Tarski requires rotation-invariance. Analysis people care about them, but probabilists actually don't. The raison d'être of measure theory in probability theory is to unify the treatment of discrete and continuous RVs, and moreover, to allow for RVs that are mixed and RVs that are simply neither.
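To see exactly what the closure requirements force in the die example, here is the smallest $\sigma$-algebra containing $\{1\}$ and $\{2\}$ on $X=\{1,2,3,4,5,6\}$ (a routine computation, spelled out here for illustration):

$$\sigma\big(\{\{1\},\{2\}\}\big)=\big\{\ \emptyset,\ \{1\},\ \{2\},\ \{1,2\},\ \{3,4,5,6\},\ \{1,3,4,5,6\},\ \{2,3,4,5,6\},\ X\ \big\}.$$

Closure under unions forces $\{1,2\}$ to be included, and closure under complements forces $\{3,4,5,6\}$ and the remaining sets, so any consistent assignment of probabilities to these events automatically answers "a $1$ or a $2$".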
744
Why do we need sigma-algebras to define probability spaces?
I've always understood the whole story like this: We start off with a space, such as the real line $\mathbb{R}$. We would like to apply our measure to subsets of this space, for example the Lebesgue measure, which measures length. An example would be to measure the length of the subset $[0, 0.5] \cup [0.75, 1]$. For this example the answer is simply $0.5 + 0.25 = 0.75$, which we can obtain fairly easily. We start to wonder whether we can apply the Lebesgue measure to all subsets of the real line. Unfortunately it doesn't work: there are pathological sets that simply break down mathematics. If you apply the Lebesgue measure to these sets, you will get inconsistent results. An example of one of these pathological sets, also known as non-measurable sets because they literally cannot be measured consistently, is the Vitali set. In order to avoid these crazy sets, we define the measure to work only for a smaller collection of subsets, called measurable sets. These are the sets that behave consistently when we apply measures to them. In order to allow us to perform operations with these sets, such as combining them with unions or taking their complements, we require these measurable sets to form a sigma-algebra amongst themselves. By forming a sigma-algebra, we have formed a sort of safe haven for our measures to operate within, while also allowing us to make reasonable manipulations to get what we want, such as taking unions and complements. This is why we need a sigma-algebra: so that we can mark out a region for the measure to work within, while avoiding non-measurable sets. Notice that if it weren't for these pathological subsets, we could easily define the measure to operate on the power set of the space. However, the power set contains all sorts of non-measurable sets, and that's why we have to pick out the measurable ones and make them form a sigma-algebra among themselves. As you can see, since sigma-algebras are used to avoid non-measurable sets, sets that are finite in size don't actually need a sigma-algebra. Let's say you're dealing with a sample space $\Omega = \{1, 2, 3\}$ (this could be all the possible results of a random number generated by a computer). You can see it's pretty much impossible to come up with non-measurable sets in such a sample space. The measure (in this case a probability measure) is well-defined for whatever subset of $\Omega$ you can think of. But we do need to define sigma-algebras for larger sample spaces, such as the real line, so that we can avoid pathological subsets that break down our measures. In order to achieve consistency in the theoretical framework of probability, we require that finite sample spaces also come with sigma-algebras, on which alone the probability measure is defined. For finite sample spaces the sigma-algebra is a technicality; for larger sample spaces such as the real line it is a necessity. One common sigma-algebra we use for the real line is the Borel sigma-algebra. It is formed by starting from all the open sets and then taking complements and countable unions until the three conditions of a sigma-algebra are satisfied. Say you're constructing the Borel sigma-algebra for the interval $[0, 1] \subset \mathbb{R}$: you do that by listing all possible open sets, such as $(0.5, 0.7), (0.03, 0.05), (0.2, 0.7), \dots$ and so on, and as you can imagine there are infinitely many possibilities you can list, and then you take the complements and unions until a sigma-algebra is generated. As you can imagine, this sigma-algebra is a BEAST. It is unimaginably huge. But the lovely thing about it is that it excludes all the crazy pathological sets that broke down mathematics; those crazy sets are not in the Borel sigma-algebra. Also, it is comprehensive enough to include almost every subset that we need; it is hard to think of a subset that is not contained within the Borel sigma-algebra. And so that is the story of why we need sigma-algebras, and why Borel sigma-algebras are a common way to implement this idea.
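For reference, the standard way to write the construction just described is that the Borel $\sigma$-algebra is the one generated by the open intervals,

$$\mathcal{B}(\mathbb{R})=\sigma\big(\{(a,b)\,:\,a<b\}\big),$$

that is, the smallest $\sigma$-algebra containing every open interval; the length measure used above is then the Lebesgue measure $\mathcal{L}^1$ restricted to these Borel sets.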
745
How to summarize data by group in R? [closed]
Here is the plyr one-line variant using ddply:

dt <- data.frame(age=rchisq(20,10), group=sample(1:2,20,rep=T))
ddply(dt, ~group, summarise, mean=mean(age), sd=sd(age))

Here is another one-line variant using the data.table package:

dtf <- data.frame(age=rchisq(100000,10), group=factor(sample(1:10,100000,rep=T)))
dt <- data.table(dtf)
dt[, list(mean=mean(age), sd=sd(age)), by=group]

This one is faster, though this is noticeable only on a table with 100k rows. Timings on my MacBook Pro with a 2.53 GHz Core 2 Duo processor and R 2.11.1:

> system.time(aa <- ddply(dtf, ~group, summarise, mean=mean(age), sd=sd(age)))
   user  system elapsed
  0.513   0.180   0.692
> system.time(aa <- dt[, list(mean=mean(age), sd=sd(age)), by=group])
   user  system elapsed
  0.087   0.018   0.103

Further savings are possible if we use setkey:

> setkey(dt, group)
> system.time(dt[, list(mean=mean(age), sd=sd(age)), by=group])
   user  system elapsed
  0.040   0.007   0.048
746
How to summarize data by group in R? [closed]
One possibility is to use the aggregate function. For instance, aggregate(data$age, by=list(data$group), FUN=mean)[2] gives you the second column of the desired result.
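If you want both the mean and the standard deviation in a single call, aggregate() also accepts a function returning several statistics; here is a small sketch (the data frame and column names mirror those in the question and are otherwise made up):

# toy data with the same shape as the question: an age column and a group column
data <- data.frame(age = rchisq(20, 10), group = sample(1:2, 20, replace = TRUE))
# one call per group; the age entry becomes a two-column matrix (mean, sd)
aggregate(age ~ group, data = data, FUN = function(x) c(mean = mean(x), sd = sd(x)))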
747
How to summarize data by group in R? [closed]
Since you are manipulating a data frame, the dplyr package is probably the fastest way to do it.

library(dplyr)
dt <- data.frame(age=rchisq(20,10), group=sample(1:2,20, rep=T))
grp <- group_by(dt, group)
summarise(grp, mean=mean(age), sd=sd(age))

or equivalently, using the dplyr/magrittr pipe operator:

library(dplyr)
dt <- data.frame(age=rchisq(20,10), group=sample(1:2,20, rep=T))
group_by(dt, group) %>% summarise(mean=mean(age), sd=sd(age))

EDIT: full use of the pipe operator:

library(dplyr)
data.frame(age=rchisq(20,10), group=sample(1:2,20, rep=T)) %>%
  group_by(group) %>%
  summarise(mean=mean(age), sd=sd(age))
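In more recent versions of dplyr (1.0 and later, if memory serves) the same summary can also be written with across(), which scales more easily to several variables; a sketch:

library(dplyr)
data.frame(age = rchisq(20, 10), group = sample(1:2, 20, rep = TRUE)) %>%
  group_by(group) %>%
  summarise(across(age, list(mean = mean, sd = sd)))   # yields age_mean and age_sd columns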
748
How to summarize data by group in R? [closed]
In addition to existing suggestions, you might want to check out the describe.by function in the psych package. It provides a number of descriptive statistics including the mean and standard deviation based on a grouping variable.
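A minimal sketch of the call (newer releases of psych spell the function describeBy, I believe; the toy data here is made up):

library(psych)
dt <- data.frame(age = rchisq(20, 10), group = sample(1:2, 20, rep = TRUE))
describeBy(dt$age, group = dt$group)   # per-group n, mean, sd, median, range, ...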
749
How to summarize data by group in R? [closed]
Great, and thanks to bquast for adding the dplyr solution! It turns out that dplyr and data.table are then very close:

library(plyr)
library(dplyr)
library(data.table)
library(rbenchmark)

dtf <- data.frame(age=rchisq(100000,10), group=factor(sample(1:10,100000,rep=T)))
dt <- data.table(dtf)
setkey(dt, group)

a <- benchmark(ddply(dtf, ~group, plyr:::summarise, mean=mean(age), sd=sd(age)),
               dt[, list(mean=mean(age), sd=sd(age)), by=group],
               group_by(dt, group)  %>% summarise(mean=mean(age), sd=sd(age)),
               group_by(dtf, group) %>% summarise(mean=mean(age), sd=sd(age)))
a[, c(1,3,4)]

data.table is still the fastest, followed very closely by dplyr, which interestingly seems faster on the data.frame than on the data.table:

                                                                  test elapsed relative
1 ddply(dtf, ~group, plyr:::summarise, mean = mean(age), sd = sd(age))   1.689    4.867
2               dt[, list(mean = mean(age), sd = sd(age)), by = group]   0.347    1.000
4   group_by(dtf, group) %>% summarise(mean = mean(age), sd = sd(age))   0.369    1.063
3    group_by(dt, group) %>% summarise(mean = mean(age), sd = sd(age))   0.580    1.671
750
How to summarize data by group in R? [closed]
I have found the function summaryBy in the doBy package to be the most convenient for this:

library(doBy)
age = c(23.0883, 25.8344, 29.4648, 32.7858, 33.6372, 34.935, 35.2115, 35.2115, 5.2115, 36.7803)
group = c(1, 1, 1, 2, 1, 1, 2, 2, 2, 1)
dframe = data.frame(age=age, group=group)
summaryBy(age~group, data=dframe, FUN=c(mean, sd))
#   group age.mean    age.sd
# 1     1 30.62333  5.415439
# 2     2 27.10507 14.640441
751
How to summarize data by group in R? [closed]
Edited according to chl's suggestions. The function you are looking for is called tapply, which applies a function per group specified by a factor.

# create some artificial data
set.seed(42)
groups <- 5
agedat <- c()
groupdat <- c()
for(group in 1:groups){
  agedat <- c(agedat, rnorm(100, mean=0 + group, 1/group))
  groupdat <- c(groupdat, rep(group, 100))
}
dat <- data.frame("age"=agedat, "group"=factor(groupdat))

# calculate mean and stdev of age per group
res <- rbind.data.frame(group=1:5,
                        with(dat, tapply(age, group, function(x) c(mean(x), sd(x)))))
names(res) <- paste("group", 1:5)
row.names(res)[2:3] <- c("mean", "sd")

I really suggest working through a basic R tutorial explaining all the commonly used data structures and methods; otherwise you will get stuck at every step while programming. See this question for a collection of freely available resources.
752
How to summarize data by group in R? [closed]
Use the sqldf package. This allows you to use SQL to summarize the data. Once you have loaded it you can write something like:

sqldf(' select group,avg(age) from data group by group ')
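A self-contained sketch of the idea (note that group is a reserved word in SQL, so the grouping column is renamed grp here; whether a standard-deviation aggregate is available depends on the SQL backend, so only the mean is shown):

library(sqldf)
dat <- data.frame(age = rchisq(20, 10), grp = sample(1:2, 20, rep = TRUE))
sqldf('select grp, avg(age) as mean_age from dat group by grp')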
753
How to summarize data by group in R? [closed]
Here is an example with the function aggregates() I did myself some time ago: # simulates data set.seed(666) ( dat <- data.frame(group=gl(3,6), level=factor(rep(c("A","B","C"), 6)), y=round(rnorm(18,10),1)) ) > dat group level y 1 1 A 10.8 2 1 B 12.0 3 1 C 9.6 4 1 A 12.0 5 1 B 7.8 6 1 C 10.8 7 2 A 8.7 8 2 B 9.2 9 2 C 8.2 10 2 A 10.0 11 2 B 12.2 12 2 C 8.2 13 3 A 10.9 14 3 B 8.3 15 3 C 10.1 16 3 A 9.9 17 3 B 10.9 18 3 C 10.3 # aggregates() function aggregates <- function(formula, data=NULL, FUNS){ if(class(FUNS)=="list"){ f <- function(x) sapply(FUNS, function(fun) fun(x)) }else{f <- FUNS} temp <- aggregate(formula, data, f) out <- data.frame(temp[,-ncol(temp)], temp[,ncol(temp)]) colnames(out)[1] <- colnames(temp)[1] return(out) } # example FUNS <- function(x) c(mean=round(mean(x),0), sd=round(sd(x), 0)) ( ag <- aggregates(y~group:level, data=dat, FUNS=FUNS) ) It gives the following result: > ag group level mean sd 1 1 A 11 1 2 2 A 9 1 3 3 A 10 1 4 1 B 10 3 5 2 B 11 2 6 3 B 10 2 7 1 C 10 1 8 2 C 8 0 9 3 C 10 0 Maybe you can get the same result starting from the R function split(): > with(dat, sapply( split(y, group:level), FUNS ) ) 1:A 1:B 1:C 2:A 2:B 2:C 3:A 3:B 3:C mean 11 10 10 9 11 8 10 10 10 sd 1 3 1 1 2 0 1 2 0 Let me come back to the output of the aggregates function. You can transform it in a beautiful table using reshape(), xtabs() and ftable(): rag <- reshape(ag, varying=list(3:4), direction="long", v.names="y") rag$time <- factor(rag$time) ft <- ftable(xtabs(y~group+level+time, data=rag)) attributes(ft)$col.vars <- list(c("mean","sd")) This gives: > ft mean sd group level 1 A 11 1 B 10 3 C 10 1 2 A 9 1 B 11 2 C 8 0 3 A 10 1 B 10 2 C 10 0 Beautiful, isn't it? You can export this table to a pdf with the textplot() function of the gplots package. See here for others' solutions.
754
How do I get the number of rows of a data.frame in R? [closed]
dataset will be a data frame. As I don't have forR.csv, I'll make up a small data frame for illustration:

set.seed(1)
dataset <- data.frame(A = sample(c(NA, 1:100), 1000, rep = TRUE),
                      B = rnorm(1000))

> head(dataset)
   A           B
1 26  0.07730312
2 37 -0.29686864
3 57 -1.18324224
4 91  0.01129269
5 20  0.99160104
6 90  1.59396745

To get the number of cases, count the number of rows using nrow() or NROW():

> nrow(dataset)
[1] 1000
> NROW(dataset)
[1] 1000

To count the data after omitting the NA, use the same tools, but wrap dataset in na.omit():

> NROW(na.omit(dataset))
[1] 993

The difference between NROW() and NCOL() and their lowercase variants (nrow() and ncol()) is that the lowercase versions will only work for objects that have dimensions (arrays, matrices, data frames). The uppercase versions will work with vectors, which are treated as if they were a one-column matrix, and are robust if you end up subsetting your data such that R drops an empty dimension. Alternatively, use complete.cases() and sum it (complete.cases() returns a logical vector, TRUE or FALSE for each row, indicating whether the row contains no NA values):

> sum(complete.cases(dataset))
[1] 993
755
How do I get the number of rows of a data.frame in R? [closed]
Briefly: run dim(dataset) to retrieve both n and k; you can also use nrow(dataset) and ncol(dataset) (and even NROW() and NCOL() -- the variants are needed for other types too). If you transform, e.g. via dataset <- na.omit(dataset), then the NA cases are gone and are not counted. But if you do e.g. summary(dataset), the NA cases are accounted for.
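A short illustration of the difference (toy data, column names are arbitrary):

dataset <- data.frame(age = c(21, NA, 35, 42), score = c(1.2, 0.4, NA, 2.2))
dim(dataset)                  # 4 2 : n rows, k columns
nrow(dataset)                 # 4   : all rows, including those containing NA
nrow(na.omit(dataset))        # 2   : complete cases only
sum(complete.cases(dataset))  # 2   : same count, without dropping the rows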
756
What is the influence of C in SVMs with linear kernel?
The C parameter tells the SVM optimization how much you want to avoid misclassifying each training example. For large values of C, the optimization will choose a smaller-margin hyperplane if that hyperplane does a better job of getting all the training points classified correctly. Conversely, a very small value of C will cause the optimizer to look for a larger-margin separating hyperplane, even if that hyperplane misclassifies more points. For very tiny values of C, you should get misclassified examples, often even if your training data is linearly separable.
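To see this effect empirically, here is one possible sketch using the e1071 package (its cost argument corresponds to the C discussed here; the simulated data and the exact numbers are illustrative only):

library(e1071)
set.seed(1)
# two well-separated two-dimensional classes
x <- rbind(matrix(rnorm(40, mean = 0), ncol = 2),
           matrix(rnorm(40, mean = 3), ncol = 2))
dat <- data.frame(x = x, y = factor(rep(c(-1, 1), each = 20)))
m_small <- svm(y ~ ., data = dat, kernel = "linear", cost = 0.001, scale = FALSE)
m_large <- svm(y ~ ., data = dat, kernel = "linear", cost = 100,   scale = FALSE)
# with a tiny cost the margin is very wide, so almost every point sits inside it
# and becomes a support vector; with a large cost far fewer points are needed
c(small_C = nrow(m_small$SV), large_C = nrow(m_large$SV))
# training error of each fit
sapply(list(small_C = m_small, large_C = m_large),
       function(m) mean(predict(m, dat) != dat$y))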
757
What is the influence of C in SVMs with linear kernel?
In an SVM you are searching for two things: a hyperplane with the largest minimum margin, and a hyperplane that correctly separates as many instances as possible. The problem is that you will not always be able to get both things. The c parameter determines how great your desire is for the latter. I have drawn a small example below to illustrate this. To the left you have a low c, which gives you a pretty large minimum margin (purple). However, this requires that we neglect the blue circle outlier that we have failed to classify correctly. On the right you have a high c. Now you will not neglect the outlier and thus end up with a much smaller margin. So which of these classifiers is the best? That depends on what the future data you will predict looks like, and most often you don't know that, of course. If the future data looks like this: then the classifier learned using a large c value is best. On the other hand, if the future data looks like this: then the classifier learned using a low c value is best. Depending on your data set, changing c may or may not produce a different hyperplane. If it does produce a different hyperplane, that does not imply that your classifier will output different classes for the particular data you have used it to classify. Weka is a good tool for visualizing data and playing around with different settings for an SVM. It may help you get a better idea of how your data look and why changing the c value does not change the classification error. In general, having few training instances and many attributes makes it easier to make a linear separation of the data. Also, the fact that you are evaluating on your training data and not on new unseen data makes separation easier. What kind of data are you trying to learn a model from? How much data? Can we see it?
758
What is the influence of C in SVMs with linear kernel?
C is essentially a regularisation parameter, which controls the trade-off between achieving a low error on the training data and minimising the norm of the weights. It is analogous to the ridge parameter in ridge regression (in fact, in practice there is little difference in performance or theory between linear SVMs and ridge regression, so I generally use the latter, or kernel ridge regression if there are more attributes than observations). Tuning C correctly is a vital step in best practice in the use of SVMs, as structural risk minimisation (the key principle behind the basic approach) is partly implemented via the tuning of C. The parameter C enforces an upper bound on the norm of the weights, which means that there is a nested set of hypothesis classes indexed by C. As we increase C, we increase the complexity of the hypothesis class (if we increase C slightly, we can still form all of the linear models that we could before, and also some that we couldn't before we increased the upper bound on the allowable norm of the weights). So as well as implementing SRM via maximum margin classification, it is also implemented by limiting the complexity of the hypothesis class via controlling C. Sadly the theory for determining how to set C is not very well developed at the moment, so most people tend to use cross-validation (if they do anything).
What is the influence of C in SVMs with linear kernel?
C is essentially a regularisation parameter, which controls the trade-off between achieving a low error on the training data and minimising the norm of the weights. It is analageous to the ridge para
What is the influence of C in SVMs with linear kernel? C is essentially a regularisation parameter, which controls the trade-off between achieving a low error on the training data and minimising the norm of the weights. It is analageous to the ridge parameter in ridge regression (in fact in practice there is little difference in performance or theory between linear SVMs and ridge regression, so I generally use the latter - or kernel ridge regression if there are more attributes than observations). Tuning C correctly is a vital step in best practice in the use of SVMs, as structural risk minimisation (the key principle behind the basic approach) is party implemented via the tuning of C. The parameter C enforces an upper bound on the norm of the weights, which means that there is a nested set of hypothesis classes indexed by C. As we increase C, we increase the complexity of the hypothesis class (if we increase C slightly, we can still form all of the linear models that we could before and also some that we couldn't before we increased the upper bound on the allowable norm of the weights). So as well as implementing SRM via maximum margin classification, it is also implemented by the limiting the complexity of the hypothesis class via controlling C. Sadly the theory for determining how to set C is not very well developed at the moment, so most people tend to use cross-validation (if they do anything).
What is the influence of C in SVMs with linear kernel? C is essentially a regularisation parameter, which controls the trade-off between achieving a low error on the training data and minimising the norm of the weights. It is analageous to the ridge para
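Since the answer recommends setting C by cross-validation, here is a hedged sketch of that workflow, assuming scikit-learn (not mentioned in the answer) and synthetic data; any cross-validation loop over a grid of C values would do the same job.

```python
# Hedged sketch: tuning the regularisation parameter C by cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=200, n_features=20, random_state=0)

search = GridSearchCV(
    LinearSVC(max_iter=10000),
    param_grid={"C": np.logspace(-3, 3, 7)},  # candidate regularisation strengths
    cv=5,
)
search.fit(X, y)
print("best C:", search.best_params_["C"], "CV accuracy:", round(search.best_score_, 3))
```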
759
What is the influence of C in SVMs with linear kernel?
C is a regularization parameter that controls the trade-off between achieving a low training error and a low testing error, that is, the ability to generalize your classifier to unseen data. Consider the objective function of a linear SVM: $\min \|w\|^2 + C\sum_i \xi_i$. If your C is too large, the optimization algorithm will try to reduce $\sum_i \xi_i$ as much as possible, leading to a hyperplane that tries to classify each training example correctly; doing this will lead to a loss in the generalization properties of the classifier. On the other hand, if your C is too small, then you give your objective function a certain freedom to increase $\sum_i \xi_i$ (the margin violations) a lot, which will lead to a large training error. The pictures below might help you visualize this.
What is the influence of C in SVMs with linear kernel?
C is a regularization parameter that controls the trade off between the achieving a low training error and a low testing error that is the ability to generalize your classifier to unseen data. Consid
What is the influence of C in SVMs with linear kernel? C is a regularization parameter that controls the trade off between the achieving a low training error and a low testing error that is the ability to generalize your classifier to unseen data. Consider the objective function of a linear SVM : min |w|^2+C∑ξ. If your C is too large the optimization algorithm will try to reduce |w| as much as possible leading to a hyperplane which tries to classify each training example correctly. Doing this will lead to loss in generalization properties of the classifier. On the other hand if your C is too small then you give your objective function a certain freedom to increase |w| a lot, which will lead to large training error. The pictures below might help you visualize this.
What is the influence of C in SVMs with linear kernel? C is a regularization parameter that controls the trade off between the achieving a low training error and a low testing error that is the ability to generalize your classifier to unseen data. Consid
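As a rough illustration of the trade-off in the objective above, the sketch below (scikit-learn and NumPy assumed; the blob data is synthetic) fits a linear SVM at several values of C and prints the two competing terms, $\|w\|^2$ and $\sum_i \xi_i$, computed from the fitted decision function.

```python
# Hedged sketch: evaluate the two terms of the soft-margin objective for a few C values.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=2, cluster_std=2.5, random_state=1)
y_pm = np.where(y == 1, 1, -1)                       # labels as +1/-1 for the hinge terms

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    w, b = clf.coef_[0], clf.intercept_[0]
    slack = np.maximum(0.0, 1.0 - y_pm * (X @ w + b))  # xi_i = max(0, 1 - y_i * f(x_i))
    print(f"C={C}: |w|^2 = {w @ w:.3f},  sum of slacks = {slack.sum():.2f}")
```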
760
What is the influence of C in SVMs with linear kernel?
The answers above are excellent. After carefully reading your question, I found there are 2 important facts we might have overlooked. You are using a linear kernel. Your training data is linearly separable, since "There is no error on the training set". Given these 2 facts, if C changes within a reasonable range, the optimal hyperplane will just shift by a small amount within the margin (the gap formed by the support vectors). Intuitively, if the margin on the training data is small, and/or there are no test data points within the margin, the shifting of the optimal hyperplane within the margin will not affect the classification error on the test set. Nonetheless, if you set C=0, then the SVM will ignore the errors and just try to minimise the sum of squares of the weights (w), so you may get different results on the test set.
What is the influence of C in SVMs with linear kernel?
The answers above are excellent. After carefully reading your questions, I found there are 2 important facts we might overlooked. You are using linear kernel Your training data is linearly separable,
What is the influence of C in SVMs with linear kernel? The answers above are excellent. After carefully reading your questions, I found there are 2 important facts we might overlooked. You are using linear kernel Your training data is linearly separable, since "There is no error on the training set". Given the 2 facts, if C values changes within a reasonable range, the optimal hyperplane will just randomly shifting by a small amount within the margin(the gap formed by the support vectors). Intuitively, suppose the margin on training data is small, and/or there is no test data points within the margin too, the shifting of the optimal hyperplane within the margin will not affect classification error of the test set. Nonetheless, if you set C=0, then SVM will ignore the errors, and just try to minimise the sum of squares of the weights(w), perhaps you may get different results on the test set.
What is the influence of C in SVMs with linear kernel? The answers above are excellent. After carefully reading your questions, I found there are 2 important facts we might overlooked. You are using linear kernel Your training data is linearly separable,
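A hedged sketch of this point, assuming scikit-learn and synthetic, well-separated (hence almost surely linearly separable) data: over a reasonable range of C, the train and test accuracies barely move. Note that scikit-learn requires C > 0, so the C = 0 limit discussed above cannot be run directly.

```python
# Hedged sketch: on linearly separable data, varying C over a reasonable range
# leaves the test error essentially unchanged.
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# well-separated clusters, so the training data is linearly separable with high probability
X, y = make_blobs(n_samples=200, centers=[[-3, -3], [3, 3]], cluster_std=0.8, random_state=3)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for C in (0.1, 1.0, 10.0, 1000.0):   # C must be > 0 in scikit-learn
    clf = SVC(kernel="linear", C=C).fit(X_tr, y_tr)
    print(f"C={C}: train acc = {clf.score(X_tr, y_tr):.3f}, test acc = {clf.score(X_te, y_te):.3f}")
```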
761
What is the influence of C in SVMs with linear kernel?
Most of the answers above are quite good, but let me clarify something for someone like me who had to spend 3 days understanding the role of the parameter C in SVMs because of different sources. In the book ISLR (http://faculty.marshall.usc.edu/gareth-james/ISL/), a larger C means more misclassification is allowed, which makes the margin wider, and a smaller C means less misclassification is allowed, which leads to a smaller margin. Whereas in every other resource I have read, and in the Python documentation, it is just the opposite. Actually, in ISLR, C is defined as the upper bound on the sum of all slack variables. But in Python and other sources (https://shuzhanfan.github.io/2018/05/understanding-mathematics-behind-support-vector-machines/#:~:text=In%20terms%20of%20the%20SVM,%2Bb)%E2%88%921%5D.), C is the penalty applied to the slack variables. If we set C to positive infinity, we will get the same result as the hard-margin SVM. On the contrary, if we set C to 0, there will be no penalty anymore, and we will end up with a hyperplane that does not classify anything. The rules of thumb are: small values of C will result in a wider margin, at the cost of some misclassifications; large values of C will give you the hard-margin classifier, which tolerates zero constraint violation.
What is the influence of C in SVMs with linear kernel?
Most of the answers above are quite good, but let me clarify something for someone like me who had to spent 3 days on understanding the role Parameter C in SVM because of diffrent sources. In book ISL
What is the influence of C in SVMs with linear kernel? Most of the answers above are quite good, but let me clarify something for someone like me who had to spent 3 days on understanding the role Parameter C in SVM because of diffrent sources. In book ISLR(http://faculty.marshall.usc.edu/gareth-james/ISL/) larger C means larger misclassification are allowed which makes margin wider and smaller C means less misclassification is allowed which leads to small margin. Whereas every other resource i have read and in python documentation it is just the opposite. Actually is ISLR C is defined as the upper bound of the sum of all slack variables. But in python and other source(https://shuzhanfan.github.io/2018/05/understanding-mathematics-behind-support-vector-machines/#:~:text=In%20terms%20of%20the%20SVM,%2Bb)%E2%88%921%5D.) C is constraints on slack variables.If we set C to positive infinite, we will get the same result as the Hard Margin SVM. On the contrary, if we set C to 0, there will be no constraint anymore, and we will end up with a hyperplane not classifying anything. The rules of thumb are: small values of C will result in a wider margin, at the cost of some misclassifications; large values of C will give you the Hard Margin classifier and tolerates zero constraint violation
What is the influence of C in SVMs with linear kernel? Most of the answers above are quite good, but let me clarify something for someone like me who had to spent 3 days on understanding the role Parameter C in SVM because of diffrent sources. In book ISL
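To see the rule of thumb numerically, here is a small sketch (scikit-learn assumed, synthetic data): a small C gives a wider margin with more support vectors inside it, while a very large C approaches hard-margin behaviour.

```python
# Hedged sketch: margin width and support-vector count as C grows.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=100, centers=[[-2, -2], [2, 2]], cluster_std=1.2, random_state=0)

for C in (0.01, 1.0, 1e6):
    clf = SVC(kernel="linear", C=C).fit(X, y)
    margin = 2.0 / np.linalg.norm(clf.coef_[0])
    print(f"C={C:g}: margin width = {margin:.2f}, support vectors = {int(clf.n_support_.sum())}")
```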
762
What is the influence of C in SVMs with linear kernel?
C Parameter is used for controlling the outliers — low C implies we are allowing more outliers, high C implies we are allowing fewer outliers.
What is the influence of C in SVMs with linear kernel?
C Parameter is used for controlling the outliers — low C implies we are allowing more outliers, high C implies we are allowing fewer outliers.
What is the influence of C in SVMs with linear kernel? C Parameter is used for controlling the outliers — low C implies we are allowing more outliers, high C implies we are allowing fewer outliers.
What is the influence of C in SVMs with linear kernel? C Parameter is used for controlling the outliers — low C implies we are allowing more outliers, high C implies we are allowing fewer outliers.
763
What is the influence of C in SVMs with linear kernel?
High C (cost) means the cost of misclassification is increased. This means a flexible kernel will become more squiggly to avoid misclassifying observations in the training set. If the kernel is too squiggly, the model overfits and won't generalize well when predicting on new data; if the kernel is too straight, the model underfits and also predicts poorly on new data.
What is the influence of C in SVMs with linear kernel?
High C (cost) means the cost of misclassification is increased. This means a flexible kernel will become more squiggly to avoid misclassifying observations in the training set. If the kernel is to squ
What is the influence of C in SVMs with linear kernel? High C (cost) means the cost of misclassification is increased. This means a flexible kernel will become more squiggly to avoid misclassifying observations in the training set. If the kernel is to squiggly the model won't generalize well when predicting on new data. If the kernel is to straight the model won't generalize well when predicting on new data.
What is the influence of C in SVMs with linear kernel? High C (cost) means the cost of misclassification is increased. This means a flexible kernel will become more squiggly to avoid misclassifying observations in the training set. If the kernel is to squ
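A hedged sketch of the over/under-fitting behaviour described above, using an RBF (flexible) kernel on synthetic data; scikit-learn is assumed. A very small C tends to underfit the training set, while a very large C tends to chase every training point and may generalise worse.

```python
# Hedged sketch: train vs. test accuracy of an RBF-kernel SVM at different C values.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=400, noise=0.3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for C in (0.01, 1.0, 10000.0):
    clf = SVC(kernel="rbf", C=C).fit(X_tr, y_tr)
    print(f"C={C}: train acc = {clf.score(X_tr, y_tr):.3f}, test acc = {clf.score(X_te, y_te):.3f}")
```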
764
How does Keras 'Embedding' layer work?
In fact, the output vectors are not computed from the input using any mathematical operation. Instead, each input integer is used as the index to access a table that contains all possible vectors. That is the reason why you need to specify the size of the vocabulary as the first argument (so the table can be initialized). The most common application of this layer is for text processing. Let's see a simple example. Our training set consists only of two phrases: Hope to see you soon Nice to see you again So we can encode these phrases by assigning each word a unique integer number (by order of appearance in our training dataset for example). Then our phrases could be rewritten as: [0, 1, 2, 3, 4] [5, 1, 2, 3, 6] Now imagine we want to train a network whose first layer is an embedding layer. In this case, we should initialize it as follows: Embedding(7, 2, input_length=5) The first argument (7) is the number of distinct words in the training set. The second argument (2) indicates the size of the embedding vectors. The input_length argument, of course, determines the size of each input sequence. Once the network has been trained, we can get the weights of the embedding layer, which in this case will be of size (7, 2) and can be thought as the table used to map integers to embedding vectors: +------------+------------+ | index | Embedding | +------------+------------+ | 0 | [1.2, 3.1] | | 1 | [0.1, 4.2] | | 2 | [1.0, 3.1] | | 3 | [0.3, 2.1] | | 4 | [2.2, 1.4] | | 5 | [0.7, 1.7] | | 6 | [4.1, 2.0] | +------------+------------+ So according to these embeddings, our second training phrase will be represented as: [[0.7, 1.7], [0.1, 4.2], [1.0, 3.1], [0.3, 2.1], [4.1, 2.0]] It might seem counterintuitive at first, but the underlying automatic differentiation engines (e.g., Tensorflow or Theano) manage to optimize these vectors associated with each input integer just like any other parameter of your model. For an intuition of how this table lookup is implemented as a mathematical operation which can be handled by the automatic differentiation engines, consider the embeddings table from the example as a (7, 2) matrix. Then, for a given word, you create a one-hot vector based on its index and multiply it by the embeddings matrix, effectively replicating a lookup. For instance, for the word "soon" the index is 4, and the one-hot vector is [0, 0, 0, 0, 1, 0, 0]. If you multiply this (1, 7) matrix by the (7, 2) embeddings matrix you get the desired two-dimensional embedding, which in this case is [2.2, 1.4]. It is also interesting to use the embeddings learned by other methods/people in different domains (see https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html) as done in [1]. [1] López-Sánchez, D., Herrero, J. R., Arrieta, A. G., & Corchado, J. M. Hybridizing metric learning and case-based reasoning for adaptable clickbait detection. Applied Intelligence, 1-16.
How does Keras 'Embedding' layer work?
In fact, the output vectors are not computed from the input using any mathematical operation. Instead, each input integer is used as the index to access a table that contains all possible vectors. Tha
How does Keras 'Embedding' layer work? In fact, the output vectors are not computed from the input using any mathematical operation. Instead, each input integer is used as the index to access a table that contains all possible vectors. That is the reason why you need to specify the size of the vocabulary as the first argument (so the table can be initialized). The most common application of this layer is for text processing. Let's see a simple example. Our training set consists only of two phrases: Hope to see you soon Nice to see you again So we can encode these phrases by assigning each word a unique integer number (by order of appearance in our training dataset for example). Then our phrases could be rewritten as: [0, 1, 2, 3, 4] [5, 1, 2, 3, 6] Now imagine we want to train a network whose first layer is an embedding layer. In this case, we should initialize it as follows: Embedding(7, 2, input_length=5) The first argument (7) is the number of distinct words in the training set. The second argument (2) indicates the size of the embedding vectors. The input_length argument, of course, determines the size of each input sequence. Once the network has been trained, we can get the weights of the embedding layer, which in this case will be of size (7, 2) and can be thought as the table used to map integers to embedding vectors: +------------+------------+ | index | Embedding | +------------+------------+ | 0 | [1.2, 3.1] | | 1 | [0.1, 4.2] | | 2 | [1.0, 3.1] | | 3 | [0.3, 2.1] | | 4 | [2.2, 1.4] | | 5 | [0.7, 1.7] | | 6 | [4.1, 2.0] | +------------+------------+ So according to these embeddings, our second training phrase will be represented as: [[0.7, 1.7], [0.1, 4.2], [1.0, 3.1], [0.3, 2.1], [4.1, 2.0]] It might seem counterintuitive at first, but the underlying automatic differentiation engines (e.g., Tensorflow or Theano) manage to optimize these vectors associated with each input integer just like any other parameter of your model. For an intuition of how this table lookup is implemented as a mathematical operation which can be handled by the automatic differentiation engines, consider the embeddings table from the example as a (7, 2) matrix. Then, for a given word, you create a one-hot vector based on its index and multiply it by the embeddings matrix, effectively replicating a lookup. For instance, for the word "soon" the index is 4, and the one-hot vector is [0, 0, 0, 0, 1, 0, 0]. If you multiply this (1, 7) matrix by the (7, 2) embeddings matrix you get the desired two-dimensional embedding, which in this case is [2.2, 1.4]. It is also interesting to use the embeddings learned by other methods/people in different domains (see https://blog.keras.io/using-pre-trained-word-embeddings-in-a-keras-model.html) as done in [1]. [1] López-Sánchez, D., Herrero, J. R., Arrieta, A. G., & Corchado, J. M. Hybridizing metric learning and case-based reasoning for adaptable clickbait detection. Applied Intelligence, 1-16.
How does Keras 'Embedding' layer work? In fact, the output vectors are not computed from the input using any mathematical operation. Instead, each input integer is used as the index to access a table that contains all possible vectors. Tha
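As a quick numerical check of the worked example above (NumPy assumed; the table values are the made-up ones from the answer), the lookup of index 4 ("soon") is exactly the product of a one-hot row vector with the (7, 2) embeddings matrix:

```python
# Check that indexing into the embedding table equals a one-hot matrix multiplication.
import numpy as np

embeddings = np.array([
    [1.2, 3.1],  # index 0
    [0.1, 4.2],  # index 1
    [1.0, 3.1],  # index 2
    [0.3, 2.1],  # index 3
    [2.2, 1.4],  # index 4  ("soon")
    [0.7, 1.7],  # index 5
    [4.1, 2.0],  # index 6
])

one_hot = np.zeros(7)
one_hot[4] = 1.0
print(one_hot @ embeddings)         # [2.2 1.4] -- same as embeddings[4]
print(embeddings[[5, 1, 2, 3, 6]])  # the second training phrase, as in the answer
```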
765
How does Keras 'Embedding' layer work?
I also had the same question, and after reading a couple of posts and materials I think I figured out what the embedding layer's role is. I think this post is also helpful for understanding it; however, I really find Daniel's answer convenient to digest. But I also got the idea behind it mainly by understanding word embeddings. I believe it's inaccurate to say embedding layers reduce one-hot encoded input down to fewer inputs. After all, the one-hot vector is one-dimensional data, and it is indeed turned into 2 dimensions in our case. It is better to say that the embedding layer comes up with a relation between the inputs in another dimension, whether that is 2 dimensions or even higher. I also find a very interesting similarity between word embeddings and Principal Component Analysis. Although the name might look complicated, the concept is straightforward. What PCA does is describe a set of data based on some general rules (so-called principal components). So it's like having some data that you want to describe using only 2 components. Which, in this sense, is very similar to word embeddings. They both do a similar job in different contexts. You can find out more here. I hope understanding PCA helps in understanding embedding layers through analogy. To wrap up, the answer to the original question of the post, "how does it calculate the value?", would be: Basically, our neural network captures the underlying structure of the inputs (our sentences) and puts the relations between words in our vocabulary into a higher dimension (let's say 2) by optimization. A deeper understanding would say that the frequency of each word appearing with another word from our vocabulary influences the embeddings (in a very naive approach we can calculate it by hand). The aforementioned frequency could be one of many underlying structures that the NN can capture. You can find the intuition in the YouTube link explaining word embeddings.
How does Keras 'Embedding' layer work?
I also had the same question and after reading a couple of posts and materials I think I figured out what embedding layer role is. I think this post is also helpful to understand, however, I really fi
How does Keras 'Embedding' layer work? I also had the same question and after reading a couple of posts and materials I think I figured out what embedding layer role is. I think this post is also helpful to understand, however, I really find Daniel's answer convenient to digest. But I also got the idea behind it mainly by understanding the embedding words. I believe it's inaccurate to say embedding layers reduce one-hot encoding input down to fewer inputs. After all the one-hot vector is a one-dimensional data and it is indeed turned into 2 dimensions in our case. Better to be said that embedding layer comes up with a relation of the inputs in another dimension Whether it's in 2 dimensions or even higher. I also find a very interesting similarity between word embedding to the Principal Component Analysis. Although the name might look complicated the concept is straightforward. What PCA does is to define a set of data based on some general rules (so-called principle components). So it's like having a data and you want to describe it but using only 2 components. Which in this sense is very similar to word embeddings. They both do the same-alike job in different context. You can find out more here. I hope maybe understanding PCA helps understanding embedding layers through analogy. To wrap up, the answer to the original question of the post that "how does it calculate the value?" would be: Basically, our neural network captures underlying structure of the inputs (our sentences) and puts relation between words in our vocabulary into a higher dimension (let's say 2) by optimization. Deeper understanding would say that the frequency of each word appearing with another word from our vocabulary influences (in a very naive approach we can calculate it by hand) Aforementioned frequency could be one of many underlying structures that NN can capture You can find the intuition on the youtube link explaining the word embeddings
How does Keras 'Embedding' layer work? I also had the same question and after reading a couple of posts and materials I think I figured out what embedding layer role is. I think this post is also helpful to understand, however, I really fi
766
How does Keras 'Embedding' layer work?
If you're more interested in the "mechanics", the embedding layer is basically a matrix which can be considered a transformation from your discrete and sparse 1-hot vector into a continuous and dense latent space. Only, to save computation, you don't actually do the matrix multiplication, as it is redundant in the case of 1-hot vectors. So, say you have a vocabulary size of 5000 as your input dimension, and you want to find a 256-dimensional output representation of it: you will have a (5000, 256) shape matrix, which you "should" multiply your 1-hot vector representation by to get the latent vector. Only, in practice, instead of multiplying you just take the index... Source: Andrew Ng. (One way that helps me think of it in theory is as a Dense layer, only without bias or activation...) The weights of this matrix are learned through training - you could train it as Word2Vec, GloVe, etc. - or on the specific task that you are dealing with. Or you can load pre-trained weights (say GloVe) and continue training on your specific task.
How does Keras 'Embedding' layer work?
If you're more interested in the "mechanics", the embedding layer is basically a matrix which can be considered a transformation from your discrete and sparse 1-hot-vector into a continuous and dense
How does Keras 'Embedding' layer work? If you're more interested in the "mechanics", the embedding layer is basically a matrix which can be considered a transformation from your discrete and sparse 1-hot-vector into a continuous and dense latent space. Only to save the computation, you don't actually do the matrix multiplication, as it is redundant in the case of 1-hot-vectors. So, say you have a vocabulary size of 5000, as your input dimension - and you want to find a 256 dimension output representation of it - you will have a (5000,256) shape matrix, which you "should" multiply your 1-hot vector representation to get the latent vector. Only in practice instead of multiplying you just take the index... Source: Andrew Ng (One way that helps me think of it in theory, is as a Dense layer only without bias or activation... ) The weights of this matrix are learned through training - you could train it as a Word2Vec, GloVe, etc. - or on the specific task that you are dealing with. Or you can load pre-trained weights (say GloVe) and continue training on your specific task.
How does Keras 'Embedding' layer work? If you're more interested in the "mechanics", the embedding layer is basically a matrix which can be considered a transformation from your discrete and sparse 1-hot-vector into a continuous and dense
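A hedged Keras/TensorFlow sketch of that last point, loading a pre-trained matrix into an Embedding layer and choosing whether to keep training it. The shapes and the `pretrained` matrix below are placeholders (random numbers), not a real GloVe file.

```python
# Hedged sketch: initialise an Embedding layer from a pre-trained matrix.
import numpy as np
import tensorflow as tf

vocab_size, dim = 5000, 256
pretrained = np.random.rand(vocab_size, dim).astype("float32")  # stand-in for GloVe rows

layer = tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=dim)
_ = layer(tf.constant([[0]]))         # call once so the layer builds its weight matrix
layer.set_weights([pretrained])       # load the pre-trained matrix
layer.trainable = True                # set False to freeze the pre-trained vectors

vectors = layer(tf.constant([[12, 7, 431]]))  # a lookup, no matrix multiplication needed
print(vectors.shape)                          # (1, 3, 256)
```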
767
Statistics Jokes
A guy is flying in a hot air balloon and he's lost. So he lowers himself over a field and shouts to a guy on the ground: "Can you tell me where I am, and which way I'm headed?" "Sure! You're at 43 degrees, 12 minutes, 21.2 seconds north; 123 degrees, 8 minutes, 12.8 seconds west. You're at 212 meters above sea level. Right now, you're hovering, but on your way in here you were at a speed of 1.83 meters per second at 1.929 radians" "Thanks! By the way, are you a statistician?" "I am! But how did you know?" "Everything you've told me is completely accurate; you gave me more detail than I needed, and you told me in such a way that it's no use to me at all!" "Dang! By the way, are you a principal investigator?" "Geeze! How'd you know that????" "You don't know where you are, you don't know where you're going. You got where you are by blowing hot air, you start asking questions after you get into trouble, and you're in exactly the same spot you were a few minutes ago, but now, somehow, it's my fault!
Statistics Jokes
A guy is flying in a hot air balloon and he's lost. So he lowers himself over a field and shouts to a guy on the ground: "Can you tell me where I am, and which way I'm headed?" "Sure! You're at 43 de
Statistics Jokes A guy is flying in a hot air balloon and he's lost. So he lowers himself over a field and shouts to a guy on the ground: "Can you tell me where I am, and which way I'm headed?" "Sure! You're at 43 degrees, 12 minutes, 21.2 seconds north; 123 degrees, 8 minutes, 12.8 seconds west. You're at 212 meters above sea level. Right now, you're hovering, but on your way in here you were at a speed of 1.83 meters per second at 1.929 radians" "Thanks! By the way, are you a statistician?" "I am! But how did you know?" "Everything you've told me is completely accurate; you gave me more detail than I needed, and you told me in such a way that it's no use to me at all!" "Dang! By the way, are you a principal investigator?" "Geeze! How'd you know that????" "You don't know where you are, you don't know where you're going. You got where you are by blowing hot air, you start asking questions after you get into trouble, and you're in exactly the same spot you were a few minutes ago, but now, somehow, it's my fault!
Statistics Jokes A guy is flying in a hot air balloon and he's lost. So he lowers himself over a field and shouts to a guy on the ground: "Can you tell me where I am, and which way I'm headed?" "Sure! You're at 43 de
768
Statistics Jokes
A statistician's wife had twins. He was delighted. He rang the minister who was also delighted. "Bring them to church on Sunday and we'll baptize them," said the minister. "No," replied the statistician. "Baptize one. We'll keep the other as a control." STATS: The Magazine For Students of Statistics, Winter 1996, Number 15
Statistics Jokes
A statistician's wife had twins. He was delighted. He rang the minister who was also delighted. "Bring them to church on Sunday and we'll baptize them," said the minister. "No," replied the
Statistics Jokes A statistician's wife had twins. He was delighted. He rang the minister who was also delighted. "Bring them to church on Sunday and we'll baptize them," said the minister. "No," replied the statistician. "Baptize one. We'll keep the other as a control." STATS: The Magazine For Students of Statistics, Winter 1996, Number 15
Statistics Jokes A statistician's wife had twins. He was delighted. He rang the minister who was also delighted. "Bring them to church on Sunday and we'll baptize them," said the minister. "No," replied the
769
Statistics Jokes
I saw this posted as a comment on here somewhere: http://xkcd.com/552/ A: I used to think correlation implied causation. Then I took a statistics class. Now I don't. B: Sounds like the class helped. A: Well, maybe. Title text: Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'.
Statistics Jokes
I saw this posted as a comment on here somewhere: http://xkcd.com/552/ A: I used to think correlation implied causation. Then I took a statistics class. Now I don't. B: Sounds like the class helped.
Statistics Jokes I saw this posted as a comment on here somewhere: http://xkcd.com/552/ A: I used to think correlation implied causation. Then I took a statistics class. Now I don't. B: Sounds like the class helped. A: Well, maybe. Title text: Correlation doesn't imply causation, but it does waggle its eyebrows suggestively and gesture furtively while mouthing 'look over there'.
Statistics Jokes I saw this posted as a comment on here somewhere: http://xkcd.com/552/ A: I used to think correlation implied causation. Then I took a statistics class. Now I don't. B: Sounds like the class helped.
770
Statistics Jokes
George Burns said that "If you live to be one hundred, you've got it made. Very few people die past that age."
Statistics Jokes
George Burns said that "If you live to be one hundred, you've got it made. Very few people die past that age."
Statistics Jokes George Burns said that "If you live to be one hundred, you've got it made. Very few people die past that age."
Statistics Jokes George Burns said that "If you live to be one hundred, you've got it made. Very few people die past that age."
771
Statistics Jokes
Two statisticians were traveling in an airplane from LA to New York. About an hour into the flight, the pilot announced that they had lost an engine, but don’t worry, there are three left. However, instead of 5 hours it would take 7 hours to get to New York. A little later, he announced that a second engine failed, and they still had two left, but it would take 10 hours to get to New York. Somewhat later, the pilot again came on the intercom and announced that a third engine had died. Never fear, he announced, because the plane could fly on a single engine. However, it would now take 18 hours to get to New York. At this point, one statistician turned to the other and said, “Gee, I hope we don’t lose that last engine, or we’ll be up here forever!”
Statistics Jokes
Two statisticians were traveling in an airplane from LA to New York. About an hour into the flight, the pilot announced that they had lost an engine, but don’t worry, there are three left. However, in
Statistics Jokes Two statisticians were traveling in an airplane from LA to New York. About an hour into the flight, the pilot announced that they had lost an engine, but don’t worry, there are three left. However, instead of 5 hours it would take 7 hours to get to New York. A little later, he announced that a second engine failed, and they still had two left, but it would take 10 hours to get to New York. Somewhat later, the pilot again came on the intercom and announced that a third engine had died. Never fear, he announced, because the plane could fly on a single engine. However, it would now take 18 hours to get to New York. At this point, one statistician turned to the other and said, “Gee, I hope we don’t lose that last engine, or we’ll be up here forever!”
Statistics Jokes Two statisticians were traveling in an airplane from LA to New York. About an hour into the flight, the pilot announced that they had lost an engine, but don’t worry, there are three left. However, in
772
Statistics Jokes
One passed by Gary Ramseyer: Statistics play an important role in genetics. For instance, statistics prove that numbers of offspring is an inherited trait. If your parent didn't have any kids, odds are you won't either.
Statistics Jokes
One passed by Gary Ramseyer: Statistics play an important role in genetics. For instance, statistics prove that numbers of offspring is an inherited trait. If your parent didn't have any kids, odds ar
Statistics Jokes One passed by Gary Ramseyer: Statistics play an important role in genetics. For instance, statistics prove that numbers of offspring is an inherited trait. If your parent didn't have any kids, odds are you won't either.
Statistics Jokes One passed by Gary Ramseyer: Statistics play an important role in genetics. For instance, statistics prove that numbers of offspring is an inherited trait. If your parent didn't have any kids, odds ar
773
Statistics Jokes
From the CMU protest at G20: There are other pictures from the protest as well.
Statistics Jokes
From the CMU protest at G20: There are other pictures from the protest as well.
Statistics Jokes From the CMU protest at G20: There are other pictures from the protest as well.
Statistics Jokes From the CMU protest at G20: There are other pictures from the protest as well.
774
Statistics Jokes
Statistics may be dull, but it has its moments.
Statistics Jokes
Statistics may be dull, but it has its moments.
Statistics Jokes Statistics may be dull, but it has its moments.
Statistics Jokes Statistics may be dull, but it has its moments.
775
Statistics Jokes
A statistics major was completely hung over the day of his final exam. It was a true/false test, so he decided to flip a coin for the answers. The statistics professor watched the student the entire two hours as he was flipping the coin … writing the answer … flipping the coin … writing the answer. At the end of the two hours, everyone else had left the final except for the one student. The professor walks up to his desk and interrupts the student, saying, “Listen, I have seen that you did not study for this statistics test, you didn’t even open the exam. If you are just flipping a coin for your answer, what is taking you so long?” The student replies bitterly (as he is still flipping the coin), “Shhh! I am checking my answers!” I've posted a few others on my blog.
Statistics Jokes
A statistics major was completely hung over the day of his final exam. It was a true/false test, so he decided to flip a coin for the answers. The statistics professor watched the student the entire t
Statistics Jokes A statistics major was completely hung over the day of his final exam. It was a true/false test, so he decided to flip a coin for the answers. The statistics professor watched the student the entire two hours as he was flipping the coin … writing the answer … flipping the coin … writing the answer. At the end of the two hours, everyone else had left the final except for the one student. The professor walks up to his desk and interrupts the student, saying, “Listen, I have seen that you did not study for this statistics test, you didn’t even open the exam. If you are just flipping a coin for your answer, what is taking you so long?” The student replies bitterly (as he is still flipping the coin), “Shhh! I am checking my answers!” I've posted a few others on my blog.
Statistics Jokes A statistics major was completely hung over the day of his final exam. It was a true/false test, so he decided to flip a coin for the answers. The statistics professor watched the student the entire t
776
Statistics Jokes
A mathematician, a physicist and a statistician went hunting for deer. When they chanced upon one buck lounging about, the mathematician fired first, missing the buck's nose by a few inches. The physicist then tried his hand, and missed the tail by a wee bit. The statistician started jumping up and down saying "We got him! We got him!"
Statistics Jokes
A mathematician, a physicist and a statistician went hunting for deer. When they chanced upon one buck lounging about, the mathematician fired first, missing the buck's nose by a few inches. The physi
Statistics Jokes A mathematician, a physicist and a statistician went hunting for deer. When they chanced upon one buck lounging about, the mathematician fired first, missing the buck's nose by a few inches. The physicist then tried his hand, and missed the tail by a wee bit. The statistician started jumping up and down saying "We got him! We got him!"
Statistics Jokes A mathematician, a physicist and a statistician went hunting for deer. When they chanced upon one buck lounging about, the mathematician fired first, missing the buck's nose by a few inches. The physi
777
Statistics Jokes
"If you torture data enough it will confess" one of my professors
Statistics Jokes
"If you torture data enough it will confess" one of my professors
Statistics Jokes "If you torture data enough it will confess" one of my professors
Statistics Jokes "If you torture data enough it will confess" one of my professors
778
Statistics Jokes
A statistician confidently tried to cross a river that was 1 meter deep on average. He drowned.
Statistics Jokes
A statistician confidently tried to cross a river that was 1 meter deep on average. He drowned.
Statistics Jokes A statistician confidently tried to cross a river that was 1 meter deep on average. He drowned.
Statistics Jokes A statistician confidently tried to cross a river that was 1 meter deep on average. He drowned.
779
Statistics Jokes
Yo momma is so mean, she has no standard deviation!
Statistics Jokes
Yo momma is so mean, she has no standard deviation!
Statistics Jokes Yo momma is so mean, she has no standard deviation!
Statistics Jokes Yo momma is so mean, she has no standard deviation!
780
Statistics Jokes
I once asked out a statistician. She failed to reject me.
Statistics Jokes
I once asked out a statistician. She failed to reject me.
Statistics Jokes I once asked out a statistician. She failed to reject me.
Statistics Jokes I once asked out a statistician. She failed to reject me.
781
Statistics Jokes
If you choose an answer to this question at random, what is the chance you will be correct? A) 25% B) 50% C) 60% D) 25% (was published on ANZSTAT mailing list a couple of days ago).
Statistics Jokes
If you choose an answer to this question at random, what is the chance you will be correct? A) 25% B) 50% C) 60% D) 25% (was published on ANZSTAT mailing list a couple of days ago).
Statistics Jokes If you choose an answer to this question at random, what is the chance you will be correct? A) 25% B) 50% C) 60% D) 25% (was published on ANZSTAT mailing list a couple of days ago).
Statistics Jokes If you choose an answer to this question at random, what is the chance you will be correct? A) 25% B) 50% C) 60% D) 25% (was published on ANZSTAT mailing list a couple of days ago).
782
Statistics Jokes
This is actually a quote that (unintendedly) happens to be a joke: "Every American should have above average income, and my Administration is going to see they get it." (Bill Clinton on campaign trail)
Statistics Jokes
This is actually a quote that (unintendedly) happens to be a joke: "Every American should have above average income, and my Administration is going to see they get it." (Bill Clinton on campaign trail
Statistics Jokes This is actually a quote that (unintendedly) happens to be a joke: "Every American should have above average income, and my Administration is going to see they get it." (Bill Clinton on campaign trail)
Statistics Jokes This is actually a quote that (unintendedly) happens to be a joke: "Every American should have above average income, and my Administration is going to see they get it." (Bill Clinton on campaign trail
783
Statistics Jokes
Why are open source statistical programming languages the best? Because they R.
Statistics Jokes
Why are open source statistical programming languages the best? Because they R.
Statistics Jokes Why are open source statistical programming languages the best? Because they R.
Statistics Jokes Why are open source statistical programming languages the best? Because they R.
784
Statistics Jokes
I thought I'd start the ball rolling with my favourite. "Being a statistician means never having to say you are certain."
Statistics Jokes
I thought I'd start the ball rolling with my favourite. "Being a statistician means never having to say you are certain."
Statistics Jokes I thought I'd start the ball rolling with my favourite. "Being a statistician means never having to say you are certain."
Statistics Jokes I thought I'd start the ball rolling with my favourite. "Being a statistician means never having to say you are certain."
785
Statistics Jokes
One day there was a fire in a wastebasket in the office of the Dean of Sciences. In rushed a physicist, a chemist, and a statistician. The physicist immediately starts to work on how much energy would have to be removed from the fire to stop the combustion. The chemist works on which reagent would have to be added to the fire to prevent oxidation. While they are doing this, the statistician is setting fires to all the other wastebaskets in the office. "What are you doing?" the others demand. The statistician replies, "Well, to solve the problem, you obviously need a larger sample size." Quoted by Steve Simon, www.pmean.com, and attributed to Gary C. Ramseyer's First Internet Gallery of Statistics Jokes at www.ilstu.edu/~gcramsey/Gallery.html.
Statistics Jokes
One day there was a fire in a wastebasket in the office of the Dean of Sciences. In rushed a physicist, a chemist, and a statistician. The physicist immediately starts to work on how much energy would
Statistics Jokes One day there was a fire in a wastebasket in the office of the Dean of Sciences. In rushed a physicist, a chemist, and a statistician. The physicist immediately starts to work on how much energy would have to be removed from the fire to stop the combustion. The chemist works on which reagent would have to be added to the fire to prevent oxidation. While they are doing this, the statistician is setting fires to all the other wastebaskets in the office. "What are you doing?" the others demand. The statistician replies, "Well, to solve the problem, you obviously need a larger sample size." Quoted by Steve Simon, www.pmean.com, and attributed to Gary C. Ramseyer's First Internet Gallery of Statistics Jokes at www.ilstu.edu/~gcramsey/Gallery.html.
Statistics Jokes One day there was a fire in a wastebasket in the office of the Dean of Sciences. In rushed a physicist, a chemist, and a statistician. The physicist immediately starts to work on how much energy would
786
Statistics Jokes
67% of statistics are made up.
Statistics Jokes
67% of statistics are made up.
Statistics Jokes 67% of statistics are made up.
Statistics Jokes 67% of statistics are made up.
787
Statistics Jokes
Here is a list of many fun statistics jokes (link) Here are just a few: Did you hear the one about the statistician? Probably.... It is proven that the celebration of birthdays is healthy. Statistics show that those people who celebrate the most birthdays become the oldest. -- S. den Hartog, PhD Thesis, University of Groningen. A statistician is a person who draws a mathematically precise line from an unwarranted assumption to a foregone conclusion. The average statistician is just plain mean. And there is also the one from a TED talk: "A friend asked my wife what I do. She answered that I model. Model what, she was asked - he models genes, she answered."
Statistics Jokes
Here is a list of many fun statistics jokes (link) Here are just a few: Did you hear the one about the statistician? Probably.... It is proven that the celebration of birthdays is healthy. Statistic
Statistics Jokes Here is a list of many fun statistics jokes (link) Here are just a few: Did you hear the one about the statistician? Probably.... It is proven that the celebration of birthdays is healthy. Statistics show that those people who celebrate the most birthdays become the oldest. -- S. den Hartog, Ph D. Thesis Universtity of Groningen. A statistician is a person who draws a mathematically precise line from an unwarranted assumption to a foregone conclusion. The average statistician is just plain mean. And there is also the one from a TED talk: "A friend asked my wife what I do. She answered that I model. Model what, she was asked - he models genes, she answered."
Statistics Jokes Here is a list of many fun statistics jokes (link) Here are just a few: Did you hear the one about the statistician? Probably.... It is proven that the celebration of birthdays is healthy. Statistic
788
Statistics Jokes
On average, every one of us has one testicle.
Statistics Jokes
On average, every one of us has one testicle.
Statistics Jokes On average, every one of us has one testicle.
Statistics Jokes On average, every one of us has one testicle.
789
Statistics Jokes
What question does the Cauchy distribution hate to be asked? Got a moment?
Statistics Jokes
What question does the Cauchy distribution hate to be asked? Got a moment?
Statistics Jokes What question does the Cauchy distribution hate to be asked? Got a moment?
Statistics Jokes What question does the Cauchy distribution hate to be asked? Got a moment?
790
Statistics Jokes
There are two kinds of people in the world: Those who can extrapolate from incomplete data sets.
Statistics Jokes
There are two kinds of people in the world: Those who can extrapolate from incomplete data sets.
Statistics Jokes There are two kinds of people in the world: Those who can extrapolate from incomplete data sets.
Statistics Jokes There are two kinds of people in the world: Those who can extrapolate from incomplete data sets.
791
Statistics Jokes
I found this list of quotes from Gelman's famous Bayesian Data Analysis book on this link. They are more like witty, stand-up one-liners but I enjoyed them a lot. Just a few below to whet your appetite: 1 "As you know from teaching introductory statistics, 30 is infinity." 2 "Suppose there's someone you want to get to know better, but you have to talk to all her friends too. They're like the nuisance parameters." 3 "People don't go around introducing you to their ex-wives." (on why model improvement doesn't make it into papers)
Statistics Jokes
I found this list of quotes from Gelman's famous Bayesian Data Analysis book on this link. They are more like witty, stand-up one-liners but I enjoyed them a lot. Just a few below to whet your appetit
Statistics Jokes I found this list of quotes from Gelman's famous Bayesian Data Analysis book on this link. They are more like witty, stand-up one-liners but I enjoyed them a lot. Just a few below to whet your appetite: 1 "As you know from teaching introductory statistics, 30 is infinity." 2 "Suppose there's someone you want to get to know better, but you have to talk to all her friends too. They're like the nuisance parameters." 3 People don't go around introducing you to their ex-wives." (on why model improvement doesn't make it into papers)
Statistics Jokes I found this list of quotes from Gelman's famous Bayesian Data Analysis book on this link. They are more like witty, stand-up one-liners but I enjoyed them a lot. Just a few below to whet your appetit
792
Statistics Jokes
Statisticians do it with significance Biostatisticians do it with power Epidemiologists do it with populations Bayesians do it with a posterior
Statistics Jokes
Statisticians do it with significance Biostatisticians do it with power Epidemiologists do it with populations Bayesians do it with a posterior
Statistics Jokes Statisticians do it with significance Biostatisticians do it with power Epidemiologists do it with populations Bayesians do it with a posterior
Statistics Jokes Statisticians do it with significance Biostatisticians do it with power Epidemiologists do it with populations Bayesians do it with a posterior
793
Statistics Jokes
A statistics professor plans to travel to a conference by plane. When he passes the security check, they discover a bomb in his carry-on baggage. Of course, he is hauled off immediately for interrogation. "I don't understand it!" the interrogating officer exclaims. "You're an accomplished professional, a caring family man, a pillar of your parish - and now you want to destroy all that by blowing up an airplane!" "Sorry", the professor interrupts him. "I never intended to blow up the plane." "So, for what other reason did you try to bring a bomb on board?!" "Let me explain. Statistics shows that the probability of a bomb being on an airplane is 1/1000. That's quite high if you think about it - so high that I wouldn't have any peace of mind on a flight." "And what does this have to do with you bringing a bomb on board a plane?" "You see, since the probability of one bomb being on my plane is 1/1000, the chance that there are two bombs is 1/1000000. If I already bring one, the chance of another bomb being around is actually 1/1000000, and I am much safer..."
Statistics Jokes
A statistic professor plans to travel to a conference by plane. When he passes the security check, they discover a bomb in his carry-on-baggage. Of course, he is hauled off immediately for interrogati
Statistics Jokes A statistic professor plans to travel to a conference by plane. When he passes the security check, they discover a bomb in his carry-on-baggage. Of course, he is hauled off immediately for interrogation. "I don't understand it!" the interrogating officer exclaims. "You're an accomplished professional, a caring family man, a pillar of your parish - and now you want to destroy that all by blowing up an airplane!" "Sorry", the professor interrupts him. "I had never intended to blow up the plane." "So, for what reason else did you try to bring a bomb on board?!" "Let me explain. Statistics shows that the probability of a bomb being on an airplane is 1/1000. That's quite high if you think about it - so high that I wouldn't have any peace of mind on a flight." "And what does this have to do with you bringing a bomb on board of a plane?" "You see, since the probability of one bomb being on my plane is 1/1000, the chance that there are two bombs is 1/1000000. If I already bring one, the chance of another bomb being around is actually 1/1000000, and I am much safer..."
Statistics Jokes A statistic professor plans to travel to a conference by plane. When he passes the security check, they discover a bomb in his carry-on-baggage. Of course, he is hauled off immediately for interrogati
794
Statistics Jokes
Did you hear about the General Motors test for autocorrelation? Or the General Mills test for serial correlation?
Statistics Jokes
Did you hear about the General Motors test for autocorrelation? Or the General Mills test for serial correlation?
Statistics Jokes Did you hear about the General Motors test for autocorrelation? Or the General Mills test for serial correlation?
Statistics Jokes Did you hear about the General Motors test for autocorrelation? Or the General Mills test for serial correlation?
795
Statistics Jokes
After enough alcohol all statisticians tend to become Bayesians: we start making inferences from our posterior
Statistics Jokes
After enough alcohol all statisticians tend to become Bayesians: we start making inferences from our posterior
Statistics Jokes After enough alcohol all statisticians tend to become Bayesians: we start making inferences from our posterior
Statistics Jokes After enough alcohol all statisticians tend to become Bayesians: we start making inferences from our posterior
796
Statistics Jokes
Statisticians get paid to make errors.
Statistics Jokes
Statisticians get paid to make errors.
Statistics Jokes Statisticians get paid to make errors.
Statistics Jokes Statisticians get paid to make errors.
797
Can a probability distribution value exceeding 1 be OK?
That Wiki page is abusing language by referring to this number as a probability. You are correct that it is not. It is actually a probability per foot. Specifically, the value of 1.5789 (for a height of 6 feet) implies that the probability of a height between, say, 5.99 and 6.01 feet is close to the following unitless value: $$1.5789\, [1/\text{foot}] \times (6.01 - 5.99)\, [\text{feet}] = 0.0316$$ This value must not exceed 1, as you know. (The small range of heights (0.02 in this example) is a crucial part of the probability apparatus. It is the "differential" of height, which I will abbreviate $d(\text{height})$.) Probabilities per unit of something are called densities by analogy to other densities, like mass per unit volume. Bona fide probability densities can have arbitrarily large values, even infinite ones. This example shows the probability density function for a Gamma distribution (with shape parameter of $3/2$ and scale of $1/5$). Because most of the density is less than $1$, the curve has to rise higher than $1$ in order to have a total area of $1$ as required for all probability distributions. This density (for a beta distribution with parameters $1/2, 1/10$) becomes infinite at $0$ and at $1$. The total area still is finite (and equals $1$)! The value of 1.5789 /foot is obtained in that example by estimating that the heights of males have a normal distribution with mean 5.855 feet and variance 3.50e-2 square feet. (This can be found in a previous table.) The square root of that variance is the standard deviation, 0.18717 feet. We re-express 6 feet as the number of SDs from the mean: $$z = (6 - 5.855) / 0.18717 = 0.7747$$ The division by the standard deviation produces a relation $$dz = d(\text{height})/0.18717$$ The Normal probability density, by definition, equals $$\frac{1}{\sqrt{2 \pi}}\exp(-z^2/2)dz = 0.29544\ d(\text{height}) / 0.18717 = 1.5789\ d(\text{height}).$$ (Actually, I cheated: I simply asked Excel to compute NORMDIST(6, 5.855, 0.18717, FALSE). But then I really did check it against the formula, just to be sure.) When we strip the essential differential $d(\text{height})$ from the formula only the number $1.5789$ remains, like the Cheshire Cat's smile. We, the readers, need to understand that the number has to be multiplied by a small difference in heights in order to produce a probability.
Can a probability distribution value exceeding 1 be OK?
This is a common mistake that comes from not understanding the difference between probability mass functions, where the variable is discrete, and probability density functions, where the variable is continuous. See What is a probability distribution: continuous probability functions are defined for an infinite number of points over a continuous interval, so the probability at a single point is always zero. Probabilities are measured over intervals, not single points; that is, the area under the curve between two distinct points defines the probability for that interval. This means that the height of the probability density function can in fact be greater than one. The property that the integral must equal one is the continuous analogue of the requirement for discrete distributions that the sum of all the probabilities must equal one.
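A brief numerical illustration of this point (a sketch assuming scipy, not part of the original answer): a density can sit well above 1 while every interval probability stays in [0, 1] and the probability of a single point shrinks to 0.

```python
from scipy.stats import norm
from scipy.integrate import quad

# A normal distribution tightly concentrated around 0 (sd = 0.1).
dist = norm(loc=0, scale=0.1)

# The density at the mode is about 3.99: well above 1, and perfectly legal.
print(dist.pdf(0))

# Probabilities come from areas under the density, and those never exceed 1.
print(dist.cdf(0.05) - dist.cdf(-0.05))    # P(-0.05 < X < 0.05), about 0.383
print(quad(dist.pdf, -1, 1)[0])            # total area, about 1.0

# Shrinking the interval toward a single point drives the probability to 0.
for width in (0.1, 0.01, 0.001):
    print(dist.cdf(width / 2) - dist.cdf(-width / 2))
```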
Can a probability distribution value exceeding 1 be OK?
I think that a continuous uniform distribution over an interval $[a,b]$ provides a straightforward example for this question: in a continuous uniform distribution the density is the same at each point. Moreover, because the area below the rectangle must be one (just as the area below the normal curve must be one), that density value must be $1/(b-a)$: any rectangle with base $b-a$ and area $1$ must have height $1/(b-a)$. So the value of the uniform density on the interval $[0,0.5]$ is $1/(0.5-0)=2$, on the interval $[0,0.1]$ it is $10$, and so on.
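A quick numerical check of these values (a sketch assuming scipy; in scipy's parametrization, uniform(loc, scale) lives on the interval [loc, loc + scale]):

```python
from scipy.stats import uniform

u = uniform(loc=0, scale=0.5)        # uniform on [0, 0.5]
print(u.pdf(0.25))                   # 2.0 -- the constant density 1/(b - a)
print(u.cdf(0.5) - u.cdf(0.0))       # 1.0 -- the total probability is still 1

print(uniform(loc=0, scale=0.1).pdf(0.05))   # 10.0 on the interval [0, 0.1]
```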
Can a probability distribution value exceeding 1 be OK?
I don't know whether the Wikipedia article has been edited subsequent to the initial posts in this thread, but it now says "Note that a value greater than 1 is OK here – it is a probability density rather than a probability, because height is a continuous variable.", and at least in this immediate context, P is used for probability and p is used for probability density. Yes, very sloppy, since the article uses p in some places to mean probability and in other places probability density.

Back to the original question "Can a probability distribution value exceeding 1 be OK?" No, but I've seen it done (see my last paragraph below).

Here's how to interpret a probability > 1. First of all, note that people can and do give a 150% effort, as we often hear in sports and sometimes at work https://www.youtube.com/watch?v=br_vSdAOHQQ . If you're sure something will happen, that's a probability of 1. A probability of 1.5 could be interpreted as meaning you're 150% sure the event will happen - kind of like giving a 150% effort. And if you can have a probability > 1, I suppose you can have a probability < 0. Negative probabilities can be interpreted as follows: a probability of 0.001 means there's almost no chance of the event happening; probability = 0 means "no way"; a negative probability, such as -1.2, corresponds to "You gots to be kidding".

When I was a wee lad just out of school 3 decades ago, I witnessed an event more astounding than breaking the sound barrier in aviation, namely, breaking the unity barrier in probability. An analyst with a Ph.D. in Physics had spent 2 years full-time (probably giving 150%) developing a model for calculating the probability of detecting object X, at the end of which his model and analysis successfully completed peer review by several scientists and engineers closely affiliated with the U.S. government. I won't tell you what object X is, but object X, and the probability of detecting it, was and still is of considerable interest to the U.S. government.

The model included a formula for $P_y$ = Prob(event y happens). $P_y$ and some other terms all combined into the final formula, which was Prob(object X is detected). Indeed, computed values of Prob(object X is detected) were within the range [0,1], as is "traditional" in probability in the Kolmogorov tradition. $P_y$ in its original form was always in [0,1] and involved "garden-variety" transcendental functions which were available in standard Fortran or any scientific calculator. However, for a reason known only to the analyst and God (perhaps because he had seen it done in his Physics classes and books, but did not know that he was shown the few cases where it works, not the many more where it does not, and this guy's name and scientific/mathematical judgment did not happen to be that of Dirac), he chose to take a two-term Taylor expansion of $P_y$ (and ignore the remainder term), which will henceforth be referred to as $P_y$. It was this two-term Taylor expansion of $P_y$ which was inserted into the final expression for Prob(object X is detected). What he did not realize, until I pointed it out to him, was that $P_y$ was equal to approximately 1.2 using his base-case values for all parameters. Indeed, it was possible for $P_y$ to go up to about 1.8. And that's how the unity barrier was broken in probability.
But the guy didn't know he had accomplished this pioneering feat until I pointed it out to him, having just performed quick calculations on a battery-powered, credit-card-sized Casio scientific calculator in a darkened conference room (couldn't have done it with a solar-powered calculator). That would be kind of like Chuck Yeager going out for a Sunday spin in his plane, and only being informed months later that he had broken the sound barrier.
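The analyst's actual formula is not given above, so the following is only a hypothetical sketch of the failure mode described: a genuine probability whose truncated Taylor expansion escapes the unit interval. The function p_exact is a stand-in of my own choosing, not the original model.

```python
import numpy as np

# Hypothetical stand-in for P_y: a bona fide probability for every rate >= 0.
def p_exact(rate):
    return 1.0 - np.exp(-rate)

# Its two-term Taylor expansion about rate = 0 is 1 - (1 - rate) = rate.
def p_taylor(rate):
    return rate

for rate in (0.1, 0.5, 1.2, 1.8):
    print(rate, p_exact(rate), p_taylor(rate))
# The exact value never leaves [0, 1), but the truncated expansion exceeds 1
# as soon as rate > 1 -- which is how a "probability" of 1.2 (or 1.8) can appear.
```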