H: Autoencoder for cleaning outliers in a surface I have been looking at autoencoders from the keras blog here: https://blog.keras.io/building-autoencoders-in-keras.html I was wondering what modifications would be necessary in order to be able to give it different surfaces, i.e. 2-dimensional vectors, some of which have large spikes. For example here we see a surface that looks clean: What would a neural network look like if I wanted to remove individual spikes from this surface? Am I right in thinking that a normal fully connected feed-forward network would be sufficient? If so, is there any way to control the threshold for when spikes should be eliminated? Also, would you agree that the training principle would still be the same as shown in the keras blog? Would it work if I simply trained it with many good examples of clean surfaces so that it learns to reconstruct them? AI: A possible approach would be a denoising autoencoder. It is like a normal autoencoder, but instead of training it using the same input and output, you inject noise into the input while keeping the expected output clean. Hence, the autoencoder learns to remove the noise. This kind of autoencoder is also described in the blog post you linked to. In your case, you could train your denoising autoencoder by injecting spikes of the height you expect to be removed into the inputs. About what kind of architecture (e.g. fully connected, convolutional), only actual tests can tell you what is appropriate and what is not.
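A minimal sketch of that idea in Keras, assuming each surface can be flattened to a fixed-length vector (the length, layer sizes, and the add_spikes helper below are illustrative assumptions, not part of the original blog post):

import numpy as np
from keras.models import Model
from keras.layers import Input, Dense

surface_len = 1024                                   # assumed flattened surface size
inp = Input(shape=(surface_len,))
encoded = Dense(128, activation='relu')(inp)
decoded = Dense(surface_len, activation='linear')(encoded)
autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='mse')

def add_spikes(x, n_spikes=5, height=3.0):
    # hypothetical noise model: add a few tall spikes at random positions
    noisy = x.copy()
    for row in noisy:
        idx = np.random.choice(len(row), n_spikes, replace=False)
        row[idx] += height * np.random.randn(n_spikes)
    return noisy

# x_clean: array of clean surfaces with shape (n_samples, surface_len)
# autoencoder.fit(add_spikes(x_clean), x_clean, epochs=50, batch_size=32)

The spike height used when injecting noise acts as the threshold knob: spikes much smaller than what the network saw during training will tend to be left alone.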
H: Difference Between Feature Engineering and Feature Learning I am playing with features (input data) to improve my model's accuracy. If I have a raw time-series dataframe, does feature engineering mean extracting properties or characteristics of my raw data and feeding them in as input? Or will the algorithm learn these from the time-series itself? In other words, should I create a column that is comprised of the moving average, or will the algorithm pick up on the moving average from the raw data? Is feature engineering just the munging of independent variables? Or is it extracting features that are dependent on other raw data? EDIT: Here's another question: If I have a categorical feature, would it be better to have it as a one-hot vector (say, 5 binary inputs), or to have it as one input with range [0,4]? How does one intuitively know the answer to these questions? AI: Feature engineering refers to creating new information that was not there previously, often by using domain-specific knowledge or by creating new features that are transformations of others you already have, such as adding interaction terms or, as you state, moving averages. A model generally cannot 'pick up' on information it doesn't have, and that is where finesse and creativity come into play. Whether you should one-hot encode or leave a feature as categorical depends on the modeling approach. Some, like random forests, will do fine with categorical predictors; others prefer recoding. Intuition on these questions comes with practice and experience. There's no substitute for trying out and comparing toy examples to see how your choices affect outcomes. You should take the time to do that, and intuition will follow.
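A quick illustration of both ideas with pandas (the column names 'price' and 'store' are hypothetical placeholders for your own data):

import pandas as pd

# df is assumed to be a time-indexed DataFrame with a numeric 'price' column
# and a categorical 'store' column
df['price_ma_7'] = df['price'].rolling(window=7).mean()   # engineered feature: 7-step moving average
df = pd.get_dummies(df, columns=['store'])                # one-hot encode the categorical feature

The first line is feature engineering in the sense discussed above: the model is handed the moving average explicitly instead of being expected to infer it from the raw series.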
H: Why do we need shortcut connections to build residual networks? Why do we need shortcut connections to build residual networks, and how do they help to train neural networks for classification and detection? AI: Why do we need shortcut connections to build residual networks? Because otherwise the network would not be a residual network. How [do residual connections] help to train neural networks for classification and detection? They add shortcuts for the gradient. This way the first few layers get updates very quickly, and the vanishing gradient problem is no longer a problem, so you can easily build networks with 1000 layers. See the Residual Networks paper. There is nothing specific about classification/detection here; it is about very deep networks.
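As an illustration, a single residual block in Keras might look like the sketch below (the filter counts and input shape are arbitrary assumptions; the key point is the add() call that implements the shortcut):

from keras.layers import Input, Conv2D, Activation, add
from keras.models import Model

inputs = Input(shape=(32, 32, 64))                        # assumed feature-map shape
x = Conv2D(64, (3, 3), padding='same', activation='relu')(inputs)
x = Conv2D(64, (3, 3), padding='same')(x)
x = add([x, inputs])                                      # shortcut connection: output = F(x) + x
x = Activation('relu')(x)
block = Model(inputs, x)

Because the block computes F(x) + x, the gradient can flow back through the identity path even when the gradient through the convolutions becomes very small.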
H: Nested IF Else in R - SAT/ACT test I have the following data set df <- data.frame(student=c(1,2,3,4,5,6,7,8,9), sat=c(365,0,545,630,385,410,0,655,0), act=c(28,20,0,0,16,17,35,29,21)) student sat act 1 365 28 2 0 20 3 545 0 4 630 0 5 385 16 6 410 17 7 0 35 8 655 29 9 0 21 and I'd like to create a new field with the following conditions If there is an SAT score > 0 use SAT score If SAT=0, then convert the ACT to an SAT score using the rubric here. (When there was a range in the SAT score, I just used the median. ACT SAT 8 200 9 210 10 220 11 225 12 250 13 285 14 325 15 360 16 385 17 410 18 440 19 465 20 485 21 505 22 525 23 545 24 560 25 575 26 595 27 615 28 635 29 655 30 675 31 700 32 725 33 750 34 775 35 790 36 800 This is one heck of an ifelse statement. I've tried this: df$newgrade=-ifelse(ACT=8,200, ifelse (ACT=9,210, ifelse(ACT=10,220, ifelse (ACT=11,225, ACT=12,250, ifelse(ACT=13,285, ifelse (ACT=14,325, ACT=15,D, ifelse(ACT=16,C, ifelse (ACT=17,B, ACT=18,D, ifelse(ACT=19,C, ifelse (ACT=20,B, ACT=21,D, ifelse(ACT=22,C, ifelse (ACT=23,B, ACT=24,D, ifelse(ACT=25,C, ifelse (ACT=26,B, ACT=27,D, ifelse(ACT=28,C, ifelse (ACT=29,B, ACT=30,D, ifelse(ACT=31,C, ifelse (ACT=32,B, ACT=33,D, ifelse(ACT=34,C, ifelse (ACT=35,B, ACT=36,D)))))))))))))))))))) I tried to follow the example at the bottom of this page but it didn't work. Someone else on another board suggested: df$newgrade<-ifelse(df$sat == 0, conversion$SAT[match(df$act, conversion$ACT)], df$sat) but then a new issue presented itself: If there is neither an ACT nor a SAT score. How can it put a 0 in for that group?? Thank you for any assistance you may bring. AI: You can indeed use the conversion table: conversion <- read.table(text = "ACT SAT 8 200 9 210 10 220 11 225 12 250 13 285 14 325 15 360 16 385 17 410 18 440 19 465 20 485 21 505 22 525 23 545 24 560 25 575 26 595 27 615 28 635 29 655 30 675 31 700 32 725 33 750 34 775 35 790 36 800", header = TRUE) With the help of this table and mathematical/logical operators, you can create the values: transform(df, newgrade = (sat | act) * (conversion$SAT[match(df$act, conversion$ACT)] ^ as.logical(act) * !sat) + sat) The result: student sat act newgrade 1 1 365 28 365 2 2 0 20 485 3 3 545 0 545 4 4 630 0 630 5 5 385 16 385 6 6 410 17 410 7 7 0 35 790 8 8 655 29 655 9 9 0 21 505 The value in newgrade will also be 0 if both sat and act are 0.
H: Categorise sentences based on their semantic similarity I have a set of unique sentences. For each sentence I calculate a semantic similarity score (between 0 and 1) with the remaining sentences, as shown in the example below. E.g., Dataset = {sen1, sen2, sen3, sen4,..., senN} For sen1 I calculate pairwise semantic similarity scores as follows. sen1 and sen2 = 0.3 sen1 and sen3 = 0.7 sen1 and sen4 = 0.9 ... ... ... sen1 and senN = 1.0 Likewise for all the sentences I calculate pairwise semantic similarity scores. Since I am getting a pairwise value, is it possible to cluster these sentences? Also, what is the most appropriate clustering technique in my situation? (I want to cluster sentences based on the similarity values I have, and I consider values above 0.5 as semantically similar sentences.) AI: There are several techniques that you could apply in order to cluster data if your input is a matrix of pairwise distances between elements. As usual, the best option depends on your specific data, so it is hard to answer the question of which is the best, but you could try any of the following: The k-medoids algorithm is similar to the well-known k-means algorithm. After randomly choosing k of your sentences as initial cluster centers (initial medoids) and assigning each sentence to the closest medoid, you randomly reassign sentences to different clusters as long as the value of the cost function decreases. Hierarchical clustering is another example of a clustering algorithm whose input is a matrix of pairwise distances between sentences. In this case the output is a dendrogram. Another option is to apply multidimensional scaling, a dimensionality reduction technique whose input is a matrix of pairwise distances between sentences, to project your sentences onto a 2D plane. Once you do that, you can apply any clustering algorithm you can think of, for instance k-means. As I said, there are many other options, but these are the simplest ones I can think of, and the ones I would start from.
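For example, hierarchical clustering can be run directly on your pairwise scores with scipy after turning similarities into distances (the sim matrix below is an assumed symmetric n-by-n array of your similarity scores):

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

dist = 1.0 - sim                                   # convert similarity in [0, 1] into a distance
condensed = squareform(dist, checks=False)         # condensed vector form expected by linkage
Z = linkage(condensed, method='average')
# cut the dendrogram at distance 0.5, so sentences with similarity above 0.5
# end up in the same cluster
labels = fcluster(Z, t=0.5, criterion='distance')

The 0.5 cut-off matches the threshold you mention; other linkage methods ('complete', 'single') behave differently near that threshold and are worth comparing.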
H: How to use Embedding() with 3D tensor in Keras? I have a list of stock price sequences with 20 timesteps each. That's a 2D array of shape (total_seq, 20). I can reshape it into (total_seq, 20, 1) for concatenation to other features. I also have news title with 10 words for each timestep. So I have 3D array of shape (total_seq, 20, 10) of the news' tokens from Tokenizer.texts_to_sequences() and sequence.pad_sequences(). I want to concatenate the news embedding to the stock price and make predictions. My idea is that the news embedding should return tensor of shape (total_seq, 20, embed_size) so that I can concatenate it with the stock price of shape (total_seq, 20, 1) then connect it to LSTM layers. To do that, I should convert news embedding of shape (total_seq, 20, 10) to (total_seq, 20, 10, embed_size) by using Embedding() function. But in Keras, the Embedding() function takes a 2D tensor instead of 3D tensor. How do I get around with this problem? Assume that Embedding() accepts 3D tensor, then after I get 4D tensor as output, I would remove the 3rd dimension by using LSTM to return last word's embedding only, so output of shape (total_seq, 20, 10, embed_size) would be converted to (total_seq, 20, embed_size) But I would encounter another problem again, LSTM accepts 3D tensor not 4D so How do I get around with Embedding and LSTM not accepting my inputs? AI: I'm not entirely sure if this is the cleanest solution but I stitched everything together. Each of the 10 word positions get their own input but that shouldn't be too much of a problem. The idea is to make an Embedding layer and use it multiple times. First we will generate some data: n_samples = 1000 time_series_length = 50 news_words = 10 news_embedding_dim = 16 word_cardinality = 50 x_time_series = np.random.rand(n_samples, time_series_length, 1) x_news_words = np.random.choice(np.arange(50), replace=True, size=(n_samples, time_series_length, news_words)) x_news_words = [x_news_words[:, :, i] for i in range(news_words)] y = np.random.randint(2, size=(n_samples)) Now we will define the layers: ## Input of normal time series time_series_input = Input(shape=(50, 1, ), name='time_series') ## For every word we have it's own input news_word_inputs = [Input(shape=(50, ), name='news_word_' + str(i + 1)) for i in range(news_words)] ## Shared embedding layer news_word_embedding = Embedding(word_cardinality, news_embedding_dim, input_length=time_series_length) ## Repeat this for every word position news_words_embeddings = [news_word_embedding(inp) for inp in news_word_inputs] ## Concatenate the time series input and the embedding outputs concatenated_inputs = concatenate([time_series_input] + news_words_embeddings, axis=-1) ## Feed into LSTM lstm = LSTM(16)(concatenated_inputs) ## Output, in this case single classification output = Dense(1, activation='sigmoid')(lstm) After compiling the model we can just fit it like this: model.fit([x_time_series] + x_news_words, y) EDIT: After what you mentioned in the comments, you can add a dense layer that summarizes the news, and adds that to your time series (stock prices): ## Summarize the news: news_words_concat = concatenate(news_words_embeddings, axis=-1) news_words_transformation = TimeDistributed(Dense(combined_news_embedding))(news_words_concat) ## New concat concatenated_inputs = concatenate([time_series_input, news_words_transformation], axis=-1)
H: Heat map and visualization I want to create a heat map to visualize some production data, but without geolocation. I am finishing some experiments in a greenhouse, divided in different sectors. The idea is to make a heat map to watch in which areas we are harvesting more fruits, or less, in each season. Which tool, software or language do you think is better for this purpose? I was looking for information, but the main tools are for geolocation. AI: I believe that the simplest is to use Seaborn. Check out the example below: import numpy as np import seaborn as sns sns.set() np.random.seed(0) uniform_data = np.random.rand(10, 12) ax = sns.heatmap(uniform_data) And the output looks as follows: This is taken from the seaborn documentation
H: Confusion in backpropagation algorithm I have been trying to understand the backpropagation for a while now. I have came across two variants of it. In the Andrew Ng class the derivatives of the weights of hidden layers are calculated using the error signal that is distributed back to the hidden node. In Geoffrey Hinton class the derivatives of the weights of hidden layers are calculated using the derivatives of the next layer that are already computed and from my knowledge of calculus that makes more sense. Can someone explain how the first variant works? AI: The first variant is the second variant, or more accurately there is only one type of backpropagation, and that works with the gradients of a loss function with respect to parameters of the network. This is not an uncommon point to have questions about though, the main issue that I see causing confusion is when the loss function has been cleverly constructed so that it works with the output layer activation function, and the derivative term is numerically $\hat{y} - y$, which looks the same as taking the linear error directly. People studying the code implementing a network like this can easily come to the conclusion that the initial gradient is in fact an initial error (and whilst these are numerically equal, they are different concepts, and in a generic neural network they don't have to be equal) This situation applies for the following network architectures: Mean squared error $\frac{1}{2N}\sum_{i=1}^N(\hat{y}_i - y_i)^2$ and linear output layer - note the multiplier $\frac{1}{2}$ is there to deliberately simplify the derivative. Binary cross-entropy $\frac{-1}{N}\sum_{i=1}^Ny_i\text{log}(\hat{y}_i) + (1-y_i)\text{log}(1-\hat{y}_i)$ and sigmoid output layer. The derivative of the loss neatly cancels out the derivative of the sigmoid, leaving you with gradient at the pre-transform stage of $\hat{y} - y$. Multi-class logloss with one-hot encoding of true classes $\frac{-1}{N}\sum_{i=1}^N\mathbf{y}_i\cdot\text{log}(\hat{\mathbf{y}}_i)$ and softmax output layer. Again the derivative of the loss neatly cancels out, leaving you with gradient at the pre-transform stage of $\hat{y} - y$ for the true class. So when you are told that backpropagation processes an "error signal" or "the error" backwards through the network, just mentally add "the gradient of" to the start of the phrase. Some people will say it knowingly as shorthand, others might be honestly confused. The same applies to deeper layers, although then there is no other source for the confused "this is the error being distributed" other than as shorthand for "this is the [gradient of the] error being distributed".
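To make the "numerically $\hat{y} - y$" point concrete, here is the short derivation for the binary cross-entropy / sigmoid case for a single example, where $z$ is the pre-activation and $\hat{y} = \sigma(z)$:

$$L = -\left(y\,\text{log}(\hat{y}) + (1-y)\,\text{log}(1-\hat{y})\right), \qquad \frac{\partial L}{\partial \hat{y}} = -\frac{y}{\hat{y}} + \frac{1-y}{1-\hat{y}}, \qquad \frac{\partial \hat{y}}{\partial z} = \hat{y}(1-\hat{y})$$

$$\frac{\partial L}{\partial z} = \left(-\frac{y}{\hat{y}} + \frac{1-y}{1-\hat{y}}\right)\hat{y}(1-\hat{y}) = -y(1-\hat{y}) + (1-y)\hat{y} = \hat{y} - y$$

So the quantity propagated backwards is a gradient that merely happens to equal the prediction error at this layer; with a different pairing of loss and output activation the two would not coincide.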
H: What is the difference between Pytorch's DataParallel and DistributedDataParallel? I am going through this imagenet example. And, in line 88, the module DistributedDataParallel is used. When I searched for it in the docs, I couldn't find anything. However, I found the documentation for DataParallel. So, I would like to know the difference between the DataParallel and DistributedDataParallel modules. AI: As the distributed GPUs functionality is only a couple of days old [in the v0.2 release of PyTorch], there is still no documentation regarding it. So, I had to go through the source code's docstrings to figure out the difference. The docstring of the DistributedDataParallel module is as follows: Implements distributed data parallelism at the module level. This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. The module is replicated on each machine and each device, and each such replica handles a portion of the input. During the backwards pass, gradients from each node are averaged. The batch size should be larger than the number of GPUs used locally. It should also be an integer multiple of the number of GPUs so that each chunk is the same size (so that each GPU processes the same number of samples). And the docstring for DataParallel is as follows: Implements data parallelism at the module level. This container parallelizes the application of the given module by splitting the input across the specified devices by chunking in the batch dimension. In the forward pass, the module is replicated on each device, and each replica handles a portion of the input. During the backwards pass, gradients from each replica are summed into the original module. The batch size should be larger than the number of GPUs used. It should also be an integer multiple of the number of GPUs so that each chunk is the same size (so that each GPU processes the same number of samples). This reply in the PyTorch forums was also helpful in understanding the difference between the two.
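In practice the single-machine case looks roughly like this hedged sketch (the module and sizes are placeholders); DistributedDataParallel additionally needs a process group to be initialised and one process launched per node:

import torch
import torch.nn as nn

model = nn.Linear(10, 2)                      # any module stands in here
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)            # single process, multiple GPUs on one machine
model = model.cuda()

# For DistributedDataParallel, each process would first call something like
# torch.distributed.init_process_group(backend='gloo', ...) before wrapping the model.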
H: What is "Policy Collapse" and what are the causes? I saw the term "policy collapse" on the comments of a tutorial for reinforcement learning. I'm guessing that it's referred to as a policy collapse when the policy worsens over training due to a bad hyper-parameter, be it the learning rate, batch size, etc., but I couldn't find anything explaining it in clearly and in detail. AI: A web search for "policy collapse" "reinforcement learning" finds this question, a related one in stats.stackexchange.com and the comments section where you found the phrase. There are two other results on unrelated subjects where the words happen to appear next to each other. Then that's it - 5 results total from Google. A google books ngrams search for policy collapse finds no references at all. It is hard to prove a negative, but I think this is not a widely used term. However, the comment does appear to be referring to a real phenomenon. That is where a reinforcement agent, instead of converging on the value functions for an optimal policy as it gains experience, actually diverges (and the parameters of the approximator will diverge too). This can happen when using non-linear function approximators to estimate action-values. More generally, it tends to happen when you have the following traits in your problem: Using a function approximator, especially a non-linear one (although even linear function approximators can diverge) Using a bootstrap method, e.g. Temporal Difference (TD) Learning (including SARSA and Q-learning), where values are updated from the same value estimator applied to successive steps. Off-policy training. Attempting to learn the optimal policy whilst not behaving optimally (as in Q-Learning). In Sutton and Barto's book this is called the "deadly triad". If you do a web search for "deadly triad" "reinforcement learning" you will find many more results. It is an ongoing area of research how best to combat the effect. In the paper that introduced the DQN model learning to play Atari games, the researchers applied two things that help stabilise against the effect: Experience replay, where transitions are not learned from immediately, but put into a pool from which mini-batches are sampled to train the approximator. Bootstrap estimates are made from a "frozen" copy of the learning network, updated every N training steps - i.e. when calculating the TD target $R + \gamma \hat{q}(S', A', \theta)$, use this old copy of the network. From the comment section you linked, it appears even applying these things is not a guaranteed fix and takes some judgement. In that case it was increasing the mini-batch size for experience replay that helped to stabilise an agent playing a variant of the video game Pong.
H: How to predict user next purchase items I have an e-commerce website where customers can purchase items directly from the site. I have training data which includes order id, user id, order number, days since prior order, product id, add to cart order, reordered... I am trying to predict, for each user, what items he will purchase on his next order. I tried to use Naive Bayes, average purchase items per user and the following equation: posterior ~ Bayes Factor x prior but the prediction outcome is not good and has many false positives and/or negatives. Maybe I can try to first train on the number of items a user will purchase then train on the specific items he will get but not sure will it get better results. I think this can go in the multi label classification but has not used multi labels in classification before. I am using python with sklearn, pandas... Any better models I can use and how to train and predict variable multi labels and whether I can do it in sklearn? Keep in mind that the data is large and predicting using some of the classification algorithms in sklearn unfortunately takes huge amounts of memory so, any ideas on how to reduce memory consumption would also be useful. AI: First of all you have to realize these kind of problems have large amounts of noise compared to signal, because predicting what someone will buy based on a very small window of information is difficult. That said, you are throwing away a lot of information with your current approach. Temporal aspects include a ton of information, for example the sequence in which items were bought etcetera. While this is a lot more complicated than what you are describing now, you could look into recurrent neural networks where you feed history up to the point of prediction as a sequence and predict the item they will buy next as softmax classification. This will depend on the amount of products that you offer whether this is feasible or not. Another advantage is that so-called 'out-of-core' training is relatively easy with neural networks due to the iterative training of batches. Multi-label is also clean, you can just add a number of labels at the end of your graph if necessary.
H: Find points on a map close to given points I have a locus L of points (lat, long). And I would like to find N=10 points (let's call them warehouses) such that: $$loss = \sum_{l \in L} maximum_{w \in W}(distance(l, w))^2 $$ is minimized. Is there a documented algorithm or approach that solves this problem? Right now I am thinking Excel may be able to handle this task. However I have too much data for Excel and will need to implement this in Python / Pandas. AI: I can tell you how I would do it, but there is almost certainly a faster implementation. Assuming you start with, for each point in $L$, the distances to each warehouse, $w \in W$. These distances should be calculated by the haversine formula. You can find the distance to the $N$th closest point in $w$ by using the quickselect algorithm. This is very similar to the quicksort algorithm but only sorting the parts that you care about. The average case for quickselect is $O(N)$ but you'll need to repeat for each $l \in L$. Note that, since the square is monotonic for positive distances, you only need to minimise $$\sum_{l \in L} maximum_{w \in W}(distance(l, w)) $$ I found a handy implementation of the quickselect algorithm on KoderDojo
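As a rough sketch with numpy (the coordinate arrays and the candidate warehouse set W are assumed inputs; this only evaluates the loss for a given W, so the outer search over warehouse locations still has to be done, e.g. with a k-medoids-style swap heuristic):

import numpy as np

def haversine(lat1, lon1, lat2, lon2, r=6371.0):
    # great-circle distance in km between points given in degrees
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

# L_lat, L_lon: coordinates of the points in L; W_lat, W_lon: the 10 candidate warehouses
dist = haversine(L_lat[:, None], L_lon[:, None], W_lat[None, :], W_lon[None, :])
loss = np.sum(dist.max(axis=1) ** 2)          # the loss exactly as written in the question
# np.partition is numpy's built-in quickselect if you need the N-th closest instead:
# nth_closest = np.partition(dist, N - 1, axis=1)[:, N - 1]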
H: Calculating the Standard Deviation by category using Python I have a dataset with Scores and Categories and I would like to calculate the Standard Deviation of these scores, per category. The data look something like this: Category Score AAAA 1 AAAA 3 AAAA 1 BBBB 1 BBBB 100 BBBB 159 CCCC -10 CCCC 9 What I would then like is the Standard Deviation of each Category. I know that with numpy I can use the following: numpy.std(a) But the examples I can find only apply this to a list and not to a range of different categories in a DataFrame. AI: You can easily do this using pandas: import pandas as pd import numpy as np df = pd.DataFrame([["AA", 1], ["AA", 3], ["BB", 3], ["CC", 5], ["BB", 2], ["AA", -1]]) df.columns = ["Category", "Score"] print(df.groupby("Category")["Score"].apply(np.std))
H: Explanation of the F beta formula The F beta formula according to the wikipedia is "The weighted harmonic mean of precision and recall". I can not understand why in the left part of equation there is beta and in the right one is beta^2: To my mind if I claim that Precision is 5 times important than Recall: F beta = (1+beta)/(beta/P+1/R)=(1+beta)PR/(beta*R+P), where beta=0.2. Is this right? AI: That's a great question, because on its face it seems like the weight should be $\beta$ alone, and, it should be in front of recall. The answer is in the text from which that reference is taken, on page 133: http://www.dcs.gla.ac.uk/Keith/pdf/Chapter7.pdf The definition is designed to make the metric indifferent to a change in precision or recall when $P/R = \beta$. That is, $F_\beta$ increases by the same amount when either precision or recall increases, at the point where precision is already $\beta$ times bigger than recall. The definition does indeed weight recall more highly as you can verify. Honestly on re-reading the text above, I was confused, because I don't see how it makes sense to think of "equilibrium" as the point where precision is much bigger, if recall matters more. I plugged in the formula to Wolfram Alpha, and: Hm. These are only equal if $R/P = \beta$! I think the paper may have misstated this then, or else I've really missed something. It's a formula whose value changes at the same rate with respect to precision or recall, when recall is already $\beta$ times larger, and in that sense it corresponds to treating recall as $\beta$ time more important.
H: what is the loss function in char recognition using Tensorflow? I have code in Tensorflow using a convolutional neural network to recognize the characters in Street View Text (SVT) data. Since the label type is string, what should I use instead of tf.nn.sparse_softmax_cross_entropy_with_logits() in the loss function? I cannot use tf.nn.sparse_softmax_cross_entropy_with_logits() because the labels here must be of an int dtype. AI: The loss function is correct, you just need to convert the categorical variables into numerical representations using one-hot vector encoding. Please take a look at this.
H: What is the difference between a hashing vectorizer and a tfidf vectorizer I'm converting a corpus of text documents into word vectors for each document. I've tried this using a TfidfVectorizer and a HashingVectorizer I understand that a HashingVectorizer does not take into consideration the IDF scores like a TfidfVectorizer does. The reason I'm still working with a HashingVectorizer is the flexibility it gives while dealing with huge datasets, as explained here and here. (My original dataset has 30 million documents) Currently, I am working with a sample of 45339 documents, so, I have the ability to work with a TfidfVectorizer also. When I use these two vectorizers on the same 45339 documents, the matrices that I get are different. hashing = HashingVectorizer() with LSM('corpus.db')) as corpus: hashing_matrix = hashing.fit_transform(corpus) print(hashing_matrix.shape) hashing matrix shape (45339, 1048576) tfidf = TfidfVectorizer() with LSM('corpus.db')) as corpus: tfidf_matrix = tfidf.fit_transform(corpus) print(tfidf_matrix.shape) tfidf matrix shape (45339, 663307) I want to understand better the differences between a HashingVectorizer and a TfidfVectorizer, and the reason why these matrices are in different sizes - particularly in the number of words/terms. AI: The main difference is that HashingVectorizer applies a hashing function to term frequency counts in each document, where TfidfVectorizer scales those term frequency counts in each document by penalising terms that appear more widely across the corpus. There’s a great summary here. Hash functions are an efficient way of mapping terms to features; it doesn’t necessarily need to be applied only to term frequencies but that’s how HashingVectorizer is employed here. Along with the 45339 documents, I suspect the feature vector is of length 1048576 because it’s the default 2^20 n_features; you could reduce this and make it less expensive to process but with an increased risk of collision, where the function maps different terms to the same feature. Depending on the use case for the word vectors, it may be possible to reduce the length of the hash feature vector (and thus complexity) significantly with acceptable loss to accuracy/effectiveness (due to increased collision). Scikit-learn has some hashing parameters that can assist, for example alternate_sign. If the hashing matrix is wider than the dictionary, it will mean that many of the column entries in the hashing matrix will be empty, and not just because a given document doesn't contain a specific term but because they're empty across the whole matrix. If it is not, it might send multiple terms to the same feature hash - this is the 'collision' we've been talking about. HashingVectorizer has a setting that works to mitigate this called alternate_sign that's on by default, described here. ‘Term frequency - inverse document frequency’ takes term frequencies in each document and weights them by penalising words that appear more frequently across the whole corpus. The intuition is that terms found situationally are more likely to be representative of a specific document’s topic. This is different to a hashing function in that it is necessary to have a full dictionary of words in the corpus in order to calculate the inverse document frequency. I expect your tf.idf matrix dimensions are 45339 documents by 663307 words in the corpus; Manning et al provide more detail and examples of calculation. 
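If it helps, the two knobs mentioned above look like this in scikit-learn (the toy corpus is made up; alternate_sign requires a reasonably recent scikit-learn release, in older versions the related option was called non_negative):

from sklearn.feature_extraction.text import HashingVectorizer, TfidfVectorizer

docs = ["the cat sat", "the dog sat", "the cat ran"]      # toy corpus

hashing = HashingVectorizer(n_features=2**18, alternate_sign=True)
X_hash = hashing.transform(docs)       # stateless: no fit, fixed width of 2**18 columns

tfidf = TfidfVectorizer()
X_tfidf = tfidf.fit_transform(docs)    # width = size of the vocabulary learned from the corpus

Lowering n_features shrinks memory use at the cost of more hash collisions, which is exactly the trade-off described above.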
‘Mining of Massive Datasets’ by Leskovec et al has a ton of detail on both feature hashing and tf.idf, the authors made the pdf available here.
H: Summary statistics by category using Python I have a datset with Scores and Categories and I would like to calculate the summary statistics for each of these categories. The data look something like this: Category Score AAAA 1 AAAA 3 AAAA 1 BBBB 1 BBBB 100 BBBB 159 CCCC -10 CCCC 9 What I would then like would be something like this Category Count Mean Std Min 25% 50% 75% Max AAAA AAAA AAAA BBBB BBBB BBBB CCCC CCCC I have been looking at using pandas with a combination of both .groupby() and .describe() like this df.groupby('Category')['Score'].describe() and this almost looks like what I want but when I come to view this as a Dataset, all of the stats are in the index. I would like the data to be in the form of a table so I can output it and create a visualization off of the back of it. Any ideas? Thanks AI: IIUC: In [80]: df.groupby("Category")['Score'].describe().reset_index() Out[80]: Category count mean std min 25% 50% 75% max 0 AAAA 3.0 1.666667 1.154701 1.0 1.00 1.0 2.00 3.0 1 BBBB 3.0 86.666667 79.839422 1.0 50.50 100.0 129.50 159.0 2 CCCC 2.0 -0.500000 13.435029 -10.0 -5.25 -0.5 4.25 9.0
H: Neural Network Learning Rate vs Q-Learning Learning Rate I'm just getting into machine learning--mostly Reinforcement Learning--using a neural network trained on Q-values. However, in looking at the hyper-parameters, there are two that seem redundant: the learning rate for the neural network, $\eta$, and the learning rate for Q-learning, $\alpha$. They both seem to change the rate at which the neural net takes new conclusions over old ones. So are these two parameters redundant? Do I need to worry about even having $\alpha$ as anything other than 1 if I'm already tuning $\eta$, or do they have ultimately different effects? AI: There is usually only one learning rate active when using a neural network as function approximator in reinforcement learning. The different names $\eta$ and $\alpha$ are just different conventions for the same basic concept. When you use a function approximator, other than a linear one, in reinforcement learning, then typically you would not use TD error based update like this: $$\mathbf{w} \leftarrow \mathbf{w} + \alpha[R+\gamma\hat{q}(S', A',\mathbf{w}) - \hat{q}(S, A,\mathbf{w})]\nabla \hat{q}(S, A,\mathbf{w})$$ But you would train your estimator in a supervised learning manner on sampled TD target (which would use $\eta$ param in a neural network): $$\mathbf{x} = \phi(S, A), y = R+\gamma\hat{q}(S', A',\mathbf{w})$$ You can actually do either if your library supports it - it is certainly possible to calculate $\nabla \hat{q}(S, A,\mathbf{w})$ for a neural network for instance, instead of using an explicit training loss function. However, the two approaches are equivalent ways of expressing the same thing, there is no reason to use both. There are other parameters used in reinforcement learning that may affect rates of convergence and other properties of learning agents. For example with differential semi-gradient TD learning - which might be an algorithm you would look at for a continuous task - Sutton and Barto present $\beta$ as a separate learning rate for the average reward, distinct from the learning rate of the estimator. So are these two parameters redundant? Do I need to worry about even having $\alpha$ as anything other than 1 if I'm already tuning $eta$, or do they have ultimately different effects? They are essentially the same parameter with a different name. If you are trying to pass an error value like $R+\gamma\hat{q}(S', A',\mathbf{w}) - \hat{q}(S, A,\mathbf{w})$ (whether multiplied by $\alpha$ or not) into the neural network as a target, then you have got the wrong idea. Instead you want your neural network to learn the target $R+\gamma\hat{q}(S', A',\mathbf{w})$. That's because the subtraction of current prediction and multiplication by a learning rate is built into the neural network training.
H: Combine two sets of clusters I have two sets of topics obtained from two different sets of newspaper articles. In other words, Cluster_1 = $\{x_1, x_2, ..., x_n\}$ includes the main topics of the 'X' newspaper set and Cluster_2 = $\{y_1, y_2, ..., y_n\}$ includes the main topics of the 'Y' newspaper set. Now I want to find clusters in the two sets that are similar/related by considering the cluster attributes as given in the example below. Example 1: **X1 in Cluster_1** is mostly similar/related to **Y2 in Cluster_2** **X2 in Cluster_1** is mostly similar/related to **Yn in cluster_2** and so on. Example 2: News about Yet in Cluster_1 is mostly similar/related to News about Science in Cluster_2 News about Floods in Cluster_1 is mostly similar/related to News about Rains in Cluster_2 Since I am dealing with two separate sets of clusters, what would be a suitable measurement/method I can use to connect the clusters in the two different sets? AI: To compare two LDA topics, you're really trying to compute the distance between two probability distributions. One such measure that's commonly used in these circumstances is the Hellinger Distance. To find the closest match for $x_1$ in the topics for $y$, you would calculate the Hellinger Distance between $x_1$ and each $y$ topic, then take the lowest one. Keep in mind that there's no guarantee whatsoever that the "most similar" topic in this sense would be remotely, subjectively similar.
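For two discrete topic distributions $p$ and $q$ over the same vocabulary, the Hellinger distance is $H(p, q) = \frac{1}{\sqrt{2}}\lVert\sqrt{p}-\sqrt{q}\rVert_2$, which is easy to compute directly (topics_x and topics_y below are assumed arrays of shape (n_topics, vocab_size) with rows summing to 1):

import numpy as np

def hellinger(p, q):
    # Hellinger distance between two discrete probability distributions
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# index of the closest topic in Y for every topic in X
matches = [int(np.argmin([hellinger(x, y) for y in topics_y])) for x in topics_x]

It is bounded between 0 (identical distributions) and 1, so the raw values are also easy to threshold if you only want to report strong matches.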
H: Why are Machine Learning models called black boxes? I was reading this blog post titled: The Financial World Wants to Open AI’s Black Boxes, where the author repeatedly refers to ML models as "black boxes". Similar terminology has been used in several places when referring to ML models. Why is it so? It is not like the ML engineers don't know what goes on inside a neural net. Every layer is selected by the ML engineer knowing what activation function to use, what that type of layer does, how the error is back propagated, etc. AI: The black box thing has nothing to do with the level of expertise of the audience (as long as the audience is human), but with the explainability of the function modelled by the machine learning algorithm. In logistic regression, there is a very simple relationship between inputs and outputs. You can sometimes understand why a certain sample was incorrectly classified (e.g. because the value of a certain component of the input vector was too low). The same applies to decision trees: you can follow the logic applied by the tree and understand why a certain element was assigned to one class or the other. However, deep neural networks are the paradigmatic example of black box algorithms. No one, not even the most expert person in the world, grasps the function that is actually modeled by training a neural network. An insight into this is provided by adversarial examples: a slight (and unnoticeable to a human) change in an input sample can lead the network to assign it to a totally different class. There are techniques to create adversarial examples, and techniques to improve robustness against them. But given that no one actually knows all the relevant properties of the function being modeled by the network, it is always possible to find a novel way to create them. Humans are also black boxes and we are also susceptible to adversarial examples.
H: The effect of all zero value as the input of SVM I am running a set of input parameters in SVM. One of the inputs contains all zero values. I know that this kind of input should be omitted, but I don't know the reason why. Can anyone help me? Thanks! AI: Your question is not entirely clear, but I'm assuming you're trying to say that one of the features you used for SVM contains only zero values, and you are asking how this affects the SVM's decision. Think about it in a Euclidean space (three dimensions for better visualization): if a feature contains all zeros, every point sits at zero along that feature's axis, so the feature carries no information that could help separate the classes and effectively you're not using that feature at all.
H: What is the different between Fine-tuning and Transfer-learning? Usually the neural network training has at least 2 steps: first trained on a large set of some standard data (ImageNet, ...) and then the resulting weights are trained on a small set of my data (in this step we can train all layers or only one last layer) What is the same of 2-nd step, is it Fine-tuning or Transfer-learning? And what is the different between Fine-tuning and Transfer-learning? AI: Generally, I would refer to this as transfer learning or network adaptation. That is, taking a network that has learned useful features from one domain and adapting that network and its developed features to another domain. That said, there appear to be many sources that closely conflate fine tuning with transfer learning. Therefore, I would say the difference in terminology is primarily opinion-based and suggest closure of this question on those grounds.
H: Can HDF5 be reliably written to and read from simultaneously by separate python processes? I'm writing a script to record live data over time into a single HDF5 file which includes my whole dataset for this project. I'm working with Python 3.6 and decided to create a command line tool using click to gather the data. My concern is what will happen if the data gathering script is writing to the HDF5 file and the yet-to-be ML application tries to read data from the same file? I took a look at The HDF Group's documentation about HDF5 parallel I/O, but that didn't really clear things up for me. AI: HDF5 parallel I/O will not solve this problem. That technology is primarily intended for performance, not for collision avoidance. What you want is know as SWMR (single-writer/multiple-reader): Data acquisition and computer modeling systems often need to analyze and visualize data while it is being written. It is not unusual, for example, for an application to produce results in the middle of a run that suggest some basic parameters be changed, sensors be adjusted, or the run be scrapped entirely. To enable users to check on such systems, we have been developing a concurrent read/write file access pattern we call SWMR (pronounced swimmer). SWMR is short for single-writer/multiple-reader. SWMR functionality allows a writer process to add data to a file while multiple reader processes read from the file. SWMR was first included in HDF5 version 1.10.0 released on 2016-03-30 Concurrent Access to HDF5 Files - Single Writer/ Multple Reader (SWMR) The Single Writer/ Multiple Reader or SWMR feature enables users to read data concurrently while writing it. Communications between the processes and file locking are not required. The processes can run on the same or on different platforms as long as they share a common file system that is POSIX compliant.
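In h5py this pattern looks roughly like the sketch below (file and dataset names are placeholders; it assumes HDF5 1.10+ and an h5py build with SWMR support):

import h5py

# writer process (the data-gathering CLI tool)
f = h5py.File('live.h5', 'w', libver='latest')
dset = f.create_dataset('data', shape=(0,), maxshape=(None,), dtype='f8')
f.swmr_mode = True              # from this point on, readers may open the file
# ... append by resizing dset, writing, and calling dset.flush() after each write ...

# reader process (e.g. the ML application)
r = h5py.File('live.h5', 'r', libver='latest', swmr=True)
data = r['data']                # call data.refresh() to pick up newly flushed rows

Note that SWMR covers exactly one writer; if several processes need to write, you would still have to funnel the writes through a single process or use separate files.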
H: When is centering and scaling needed before doing hierarchical clustering? I am working on a clustering project where we have collected protein data from over 100 patients samples. This data is normalized and log transformed. The goal is to cluster samples based upon their similarities, I am using hierarchal clustering and trying out combinations of distance metrics and clustering algorithms. (We haven't made a decision on distance method or clustering algorithms) My question is related to the centering and scaling, Is it absolutely necessary to both scale and center the data?, even in scenarios where all the data is coming from the same platform and with same units of measurement. Appreciate your input on this one. Thanks AI: My question is related to the centering and scaling, Is it absolutely necessary to both scale and center the data?, even in scenarios where all the data is coming from the same platform and with same units of measurement. It depends on the type of data you have. For some types of well defined data, there may be no need to scale and center. A good example is geolocation data (longitudes and latitudes). If you were seeking to cluster towns, you wouldn't need to scale and center their locations. For data that is of different physical measurements or units, its probably a good idea to scale and center. For example, when clustering vehicles, the data may contain attributes such as number of wheels, number of doors, miles per gallon, horsepower etc. In this case it may be a better idea to scale and center since you are unsure of the relationship between each attribute. The intuition behind that is that since many clustering algorithms require some definition of distance, if you do not scale and center your data, you may give attributes which have larger magnitudes more importance. In the context of your problem, I would scale and center the data if it contains attributes like patient height, weight, age etc. This answer on a similar question has more.
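If you do decide to scale and center, a one-liner with scikit-learn before computing the distance matrix is enough (X is the assumed samples-by-proteins matrix of log-transformed values):

from sklearn.preprocessing import StandardScaler

X_scaled = StandardScaler().fit_transform(X)   # each protein column gets mean 0 and unit variance
# pass X_scaled, instead of X, to the distance / linkage computation

Comparing the resulting dendrograms with and without scaling is a cheap way to see how much the decision actually matters for your data.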
H: Reinforcement Learning algorithm for Optimized Trade Execution My question deals with the algorithm described in the paper: Reinforcement Learning for Optimized Trade Execution This paper uses reinforcement learning technique to deal with the problem of optimized trade execution. They divide the data into episodes, and then apply (on page 4 in the link) the following update rule (to the cost function) and algorithm to find an optimal policy: (T is the total time units, I is the volume, L is the possible number of actions, x represents the state, and c represents the cost function, and c_im is the immediate reward at a certain state and a certain action. n is the number of times that the state-action pair were visited) Here are my questions: If I understand correctly, the algorithm is basically a dynamic programming, when we move backwards in time. Why do we need n in the cost function update rule. Aren't we visiting each state exactly once? If I understand correctly, we should run this algorithm on every episode (in the experiment in the paper they had 45000 episodes). In such case, how do we combine the results from all the episodes? That is, each episode provides an optimal policy. How do we combine all these policies to one final policy? AI: Why do we need n in the cost function update rule. Aren't we visiting each state exactly once? The update is assuming a static distribution and estimating the average value. As each estimate is made available, it is weighted less of the total each time. The formula means that the first sample is weighted $1$, second $\frac{1}{2}$, third $\frac{1}{3}$ which is what you need to get the mean value when you apply the changes due to the samples serially whilst maintaining the best estimate of the mean at each step. This is a little odd in my experience of RL, because it assumes the bootstrap values (the max over next step) come from a final distribution to weight everything equally like this. But I think it is OK due to working back from final step, hence each bootstrap value should be fully estimated before going backwards to previous time step. If I understand correctly, we should run this algorithm on every episode (in the experiment in the paper they had 45000 episodes) This looks like an algorithm that you run on the whole data set, where each episode is the same length $T$. So you run each timestep (starting with the end time step and working backwards since the ultimate reward is established at the end of the episode, so this is more efficient), and sample from every episode at that timestep in the While (not end of data) loop. The values are therefore combined inside the loop at that stage, and there is no need to add anything to the algorithm to combine episodes.
H: Are there databases specializing in scientific data I am currently comparing file formats (HDF5 , etc.) to DBMS systems for a scientific data repository. I know of proprietary solutions such as Oracle extensions, but are there open source/free systems for scientific data? I would define that as a system that would have the MKSA unit system integrated, and an extensive library of scientific conversions/operations. AI: SciDB is an open-source DBMS for scientific data.
H: how can I solve label shape problem in tensorflow when using one-hot encoding? I used tensorflow to recognize text from natural images by using convolutional neural network; there is no specific number of characters in the text. To make a successful training I should convert the categorical labels into binary using one-hot encoding. So, for each label, I used integer encoding for each character and stored them in one numpy array in order to create TFRecords. For example: alphabet = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ ' TrainLabel = ["CNN in Tensorflow"] # define a mapping of chars to integers char_to_int = dict((c, i) for i, c in enumerate(alphabet)) integer_encoded = [char_to_int[char] for char in TrainLabel[0]] if (len(TrainLabel[0])) < 51: for j in xrange(51- (len(TrainLabel[0]))): integer_encoded.append(52) # one hot encode onehot_encoded = [] for value in integer_encoded: letter = [0 for _ in range(len(alphabet))] letter[value] = 1 onehot_encoded.append(letter) label = np.array(onehot_encoded, np.float32) 51 is the maximum number of character in the text, so if the text has less than 51 characters, pad it to 51 characters with spaces. If we print the label, it will be like this:: array([[ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], [ 0., 0., 0., ..., 0., 0., 0.], ..., [ 0., 0., 0., ..., 0., 0., 1.], [ 0., 0., 0., ..., 0., 0., 1.], [ 0., 0., 0., ..., 0., 0., 1.]], dtype=float32) after creating the batch queue, the label has shape [batch_size, 2703]. 2703 is come from 51*53 which 53 is the number of classes My problem is in loss function:: the label shape in tf.nn.sparse_softmax_cross_entropy_with_logits() must be [batch_size], but the label that I used here has this shape [batch_size, 53] because I used one-hot encoding? How can I deal with that?? This is the problem:: (labels_static_shape.ndims, logits.get_shape().ndims)) ValueError: Rank mismatch: Rank of labels (received 2) should equal rank of logits minus 1 (received 2). AI: Since your data are already in a one-hot encoding, you can use tf.nn.softmax_cross_entropy_with_logits(), which expects an input of shape [batch_size, num_classes] for the labels. (The tf.nn.sparse_softmax_cross_entropy_with_logits() op expects the labels as a batch of integers, where each integer corresponds to the class ID for each example.)
H: Feature selection on n different values I have a .csv file with data in the following form: moment_1;moment_2;moment_3;force_x;force_y;force_z;... -0,02131267;-1,6032766088;5,9906811787;5,40010285;0,0203;86,44227467;... 2599;-1,70091039344;-1,3044809;-0,0406673590;-2,60896180797;43,2334;... The file is very large and I need to put it in an interactive visualization, that's why I need to reduce the data points without changing the overall structure too much. Many data points are very close to each other as seen in the following image: My approach was to define a threshold and filter all points which have a distance to the previous point lower than the threshold. But I think that's not an optimal solution because, when I remove one index, I need to remove it from the other data array too, otherwise the structure is changed. Are there better approaches? AI: Instead of filtering single points I would suggest that you smooth your data using established techniques, e.g. Savitzky–Golay filter. Another option would be to employ Kernel Density Estimation, where you can then visualize the curves using a reduced, regular set of supporting points.
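A minimal illustration with scipy, assuming force_z is one of the 1-D columns read from the csv (the window length, polynomial order, and downsampling step are placeholder values to tune):

import numpy as np
from scipy.signal import savgol_filter

smoothed = savgol_filter(force_z, window_length=51, polyorder=3)   # window_length must be odd
k = 10
reduced = smoothed[::k]          # keep every k-th point for the interactive view

Because the smoothing is applied to each column over the same index positions, downsampling every column by the same stride keeps them aligned, which avoids the index-bookkeeping problem mentioned in the question.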
H: Python implementation of cost function in logistic regression: why dot multiplication in one expression but element-wise multiplication in another I have a very basic question which relates to Python, numpy and multiplication of matrices in the setting of logistic regression. First, let me apologise for not using math notation. I am confused about the use of matrix dot multiplication versus element-wise multiplication. The cost function is given by: $J = - {1\over m} \sum_{i=1}^m \left[ y^{(i)}\log(a^{(i)})+(1 - y^{(i)})\log(1-a^{(i)}) \right]$ And in python I have written this as cost = -1/m * np.sum(Y * np.log(A) + (1-Y) * (np.log(1-A))) But for example this expression (the first one - the derivative of J with respect to w) ${\partial J \over{\partial w}} = {1 \over{m}} X(A-Y)^T$ ${\partial J\over{\partial b}} = {1\over{m}} \sum \limits_{i = 1}^m (a^{(i)}-y^{(i)})$ is dw = 1/m * np.dot(X, dz.T) I don't understand why it is correct to use dot multiplication in the above, but use element-wise multiplication in the cost function i.e why not: cost = -1/m * np.sum(np.dot(Y,np.log(A)) + np.dot(1-Y, np.log(1-A))) I fully get that this is not elaborately explained but I am guessing that the question is so simple that anyone with even basic logistic regression experience will understand my problem. AI: In this case, the two math formulae show you the correct type of multiplication: $y_i$ and $\text{log}(a_i)$ in the cost function are scalar values. Composing the scalar values into a given sum over each example does not change this, and you never combine one example's values with another in this sum. So each element of $y$ only interacts with its matching element in $a$, which is basically the definition of element-wise. The terms in the gradient calculation are matrices, and if you see two matrices $A$ and $B$ multiplied using notation like $C = AB$, then you can write this out as a more complex sum: $C_{ik} = \sum_j A_{ij}B_{jk}$. It is this inner sum across multiple terms that np.dot is performing. In part your confusion stems from the vectorisation that has been applied to equations in the course materials, which are looking forward to more complex scenarios. You could in fact use cost = -1/m * np.sum( np.multiply(np.log(A), Y) + np.multiply(np.log(1-A), (1-Y))) or cost = -1/m * np.sum( np.dot(np.log(A), Y.T) + np.dot(np.log(1-A), (1-Y.T))) whilst Y and A have shape (m,1) and it should give the same result. NB the np.sum is just flattening a single value in that, so you could drop it and instead have [0,0] on the end. However, this does not generalize to other output shapes (m,n_outputs) so the course does not use it.
H: Converting a string to dummy encoded variables Here's the data PlayerID, Characters, Win or Lose I can make it look like this 8PYPY0LLQ,valkyrie5 , chr_witch4 , hog_rider5 , zapMachine1 , mega_minion3 , baby_dragon2 , bomber7 , skeleton_horde1, 0 Or like this 2GRG822L9,"barbarians8, valkyrie5, chr_balloon3, fire_spirits8, minion8, firespirit_hut6, rage4, skeleton_horde3,",1 The second column is an 8 character combination from 70+ n characters. I need to encode the variables to be dummy variables, so each character gets its own column. Is there a way to do this in python/R? I'm assuming you have to leave the second column as a string rather than outputting a csv file that looks like this. 2GRG822L9,barbarians8, valkyrie5, chr_balloon3, fire_spirits8, minion8, firespirit_hut6, rage4, skeleton_horde3,1 8PYPY0LLQ,valkyrie5 , chr_witch4 , hog_rider5 , zapMachine1 , mega_minion3 , baby_dragon2 , bomber7 , skeleton_horde1,0 It should probably look like this before dummy encoding (I can get rid of commas within the string) 2GRG822L9,"barbarians8, valkyrie5, chr_balloon3, fire_spirits8, minion8, firespirit_hut6, rage4, skeleton_horde3,",1 8PYPY0LLQ,"valkyrie5 , chr_witch4 , hog_rider5 , zapMachine1 , mega_minion3 , baby_dragon2 , bomber7 , skeleton_horde1,",0 AI: It is simple with python: from pandas import DataFrame data = [('2GRG822L9',"barbarians8,valkyrie5,chr_balloon3,fire_spirits8,minion8,firespirit_hut6,rage4,skeleton_horde3",1), ('8PYPY0LLQ',"valkyrie5,chr_witch4,hog_rider5,zapMachine1,mega_minion3,baby_dragon2,bomber7,skeleton_horde1",0)] df = DataFrame.from_records(data,columns=('PlayerID', 'Characters', 'Result')) df = df.drop('Characters', 1).join(df.Characters.str.get_dummies(',')) The result: PlayerID Result baby_dragon2 barbarians8 bomber7 chr_balloon3 \ 0 2GRG822L9 1 0 1 0 1 1 8PYPY0LLQ 0 1 0 1 0 chr_witch4 fire_spirits8 firespirit_hut6 hog_rider5 mega_minion3 \ 0 0 1 1 0 0 1 1 0 0 1 1 minion8 rage4 skeleton_horde1 skeleton_horde3 valkyrie5 zapMachine1 0 1 1 0 1 1 0 1 0 0 1 0 1 1
H: Install Spark and Hadoop in the same machine I'm trying to install Apache Spark and Hadoop on the same machine. Spark will be used to process data, and HDFS from Hadoop will be used to store data. I started by installing Spark and it worked perfectly. But when I installed Hadoop and set the JAVA_HOME environment variable, HDFS worked, but Spark broke down and showed "Files was unexpected at this time." when I launched it. When I deleted JAVA_HOME, Spark worked again, but then HDFS did not. What should I do in this case? AI: The Hadoop stack is difficult to set up, and people complain that you can't trust any answers to problems over 6-12 months old. I would recommend getting a pre-configured Hadoop/Spark setup from Cloudera or HortonWorks. Both have free community editions.
H: Is there any measure to find how much classifiers are confident? Assume that we have two classifiers (C1, C2) and two classes (A, B). These classifiers give us the belonging probability for each class for each instance. Suppose that we have an instance X which is actually should be classified as A. C1 classification result is (1, 0) and C2 classification result is (0.9, 0.1) which means they both classified X correctly as A. Obviously C1 is more confident. Is there any measure that I can use to compare my classifiers based on that? AI: There are many measures which implicitly take into account the confidence of a prediction. One very common one is Log Loss (also called Cross Entropy). $-log \space P(y_t|y_p) = -(y_t \space log(y_p) + (1-y_t)\space log(1-y_p))$ Using this metric, confident correct classifications are rewarded more than relatively less confident correct classifications, and confident misclassifications are heavily punished. Any proper scoring rule meets this criteria when applied to a classification problem. Some others examples are: Surprisal Brier score (as mentioned by darXider in his comment) Spherical Scoring Rule Logarithmic Scoring Rule And an infinite number of other, related functions.
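As a tiny illustration with scikit-learn's log_loss, using the two classifiers from the question (class A is encoded as 1 and B as 0 here, which is an arbitrary choice):

from sklearn.metrics import log_loss

y_true = [1]                                  # X truly belongs to class A
c1_probs = [[0.0, 1.0]]                       # C1: probabilities for (B, A)
c2_probs = [[0.1, 0.9]]                       # C2: probabilities for (B, A)

print(log_loss(y_true, c1_probs, labels=[0, 1]))   # ~0.0
print(log_loss(y_true, c2_probs, labels=[0, 1]))   # ~0.105

Both classifiers are correct, but C2 pays a small penalty for its lower confidence; a confident wrong answer (say [[0.9, 0.1]]) would be penalised far more heavily.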
H: Advanced Activation Layers in Deep Neural Networks I'm wondering about the benefits of advanced activation layers such as LeakyReLU, Parametric ReLU, and Exponential Linear Unit (ELU). What are the differences between them and how do they benefit training? AI: ReLU Simply rectifies the input, meaning positive inputs are retained but negatives give an output of zero. (Hahnloser et al. 2010) $$ f(x) = max(0,x) $$ Pros: Eliminates the vanishing/exploding gradient problem. (true for all following as well) Sparse activation. (true for all following as well) Noise-robust deactivation state (i.e. does not attempt to encode the degree of absence). Cons: Dying ReLU problem (many neurons end up in a state where they are inactive for most or all inputs). Not differentiable. (true for all following as well) No negative values means mean unit activation is often far from zero. This slows down learning. Leaky ReLUs Adds a small coefficient ($<1$) for negative values. (Maas, Hannun, & Ng 2013) $$ f(x) = \begin{cases} x & \text{if } x \geq 0 \\ 0.1 x & \text{otherwise} \end{cases} $$ Pros: Alleviates dying ReLU problem. (true for all following) Negative activations push mean unit activation closer to zero and thus speeds up learning. (true for all following) Cons: Deactivation state is not noise-robust (i.e. noise in deactivation results in different levels of absence). PReLUs Just like Leaky ReLUs but with a learnable coefficient. (Note that in the below equation a different $a$ can be learned for different channels.) (He et al. 2015) $$ f(x) = \begin{cases} x & \text{if } x \geq 0 \\ a x & \text{otherwise} \end{cases} $$ Pros: Improved performance (lower error rate on benchmark tasks) compared to Leaky ReLUs. Cons: Deactivation state is not noise-robust (i.e. noise in deactivation results in different levels of absence). ELUs $$ f(x) = \begin{cases} x & \text{if } x \geq 0 \\ \alpha(exp(x)-1) & \text{otherwise} \end{cases} $$ Replaces the small linear gradient of Leaky ReLUs and PReLUs with a vanishing gradient. (Clevert, Unterthiner, Hochreiter 2016) Pros: Improved performance (lower error and faster learning) compared to ReLUs. Deactivation state is noise-robust.
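In Keras these all exist as standalone layers that you insert after a layer with no (i.e. linear) activation; a hedged sketch with arbitrary sizes:

from keras.models import Sequential
from keras.layers import Dense, LeakyReLU, PReLU, ELU

model = Sequential()
model.add(Dense(64, input_dim=20))        # linear output ...
model.add(LeakyReLU(alpha=0.1))           # ... activation applied as its own layer
model.add(Dense(64))
model.add(PReLU())                        # negative slope is learned
model.add(Dense(64))
model.add(ELU(alpha=1.0))
model.add(Dense(1, activation='sigmoid'))

(Depending on the Keras version these may need to be imported from keras.layers.advanced_activations instead.)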
H: In Python, why is subsetting with or without square brackets different? Suppose I have a data frame called quotesDF quotesDF volume shares 2017-01-03 2934300 100 2017-01-04 3381400 120 2017-01-05 2682300 140 2017-01-06 2945500 160 2017-01-09 3189900 180 2017-01-10 4118700 200 If I do, > quotesDF.loc[1, 'shares'] 120 > quotesDF.loc[1, ['shares']] shares 120 Name: 1, dtype: object Why does the first one return 120, while the second one returns shares 120? In my mind, they are the same thing, except I put the second one in a vector. But the first one is a vector that stands by itself; it's just that I didn't put the square brackets on it. Why does Python give me such a confusing time? AI: Assuming you have a pandas DataFrame, .loc is strictly label based. When you pass a scalar label for both the row and the column, as in quotesDF.loc[1, 'shares'], pandas returns the single scalar value 120. When you wrap the column label in square brackets, as in quotesDF.loc[1, ['shares']], you are passing a list of labels, so pandas returns a Series holding that column's value, which is why the output is printed as shares 120 with a name and dtype. Read the documentation for a better explanation. Here is another link that has answers similar to your question.
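Here is a small sketch of the indexing rule (scalar label gives a scalar, a list of labels gives a Series, lists for both axes give a DataFrame); the toy frame below uses a default integer index rather than the dates in the question:
import pandas as pd

df = pd.DataFrame({"volume": [2934300, 3381400], "shares": [100, 120]})

scalar = df.loc[1, "shares"]      # scalar row label + scalar column label -> scalar value
series = df.loc[1, ["shares"]]    # scalar row label + list of columns -> Series
frame  = df.loc[[1], ["shares"]]  # list of rows + list of columns -> DataFrame

print(type(scalar), type(series), type(frame))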
H: Displaying date in SAS I am trying to display a date in my result after running the below program in SAS. It runs properly, but in the SAS data table under the DOB column I don't get anything except a period . Below is my code; what am I doing wrong? data sample; Input ID name $ Dob; Format DOB mmddyy10. ; datalines; 1 abc 22jan1996 2 xyz 25aug1996 ; run; Proc print data = sample; run; Also I would like to know what a period means at the end of this line before the semicolon: Format DOB mmddyy10. ; AI: Try the following: data sample; Input ID name $ Dob; informat DOB date9. ; format DOB mmddyy10. ; datalines; 1 abc 22jan1996 2 xyz 25aug1996 ; run; Proc print data = sample; run; I have added the informat line. The problem with the original code is that you have not told SAS what format DOB will come in. In this case, 22jan1996 is of date9. format, so I added the informat telling SAS that the data will come in this way. The format DOB line tells SAS to display the data as mmddyy10., which makes 22jan1996 look like 01/22/1996. Lastly, the . before the ; in the format/informat lines is part of the format name's syntax and marks it as a format rather than a variable. There are character formats such as $8. and numeric formats such as Best12. etc...
H: What ML/DL approach better suits this problem? We have a huge dataset with us that looks like below Factor -|- ... -|- Rank1 -|- Rank2 -|- Calls A11-----|- ... -|-0.1234--|-3.2345--|- Cat A A11-----|- ... -|-1.1234--|-0.2345--|- Cat B A12-----|- ... -|-2.1234--|-3.2345--|- Cat C A12-----|- ... -|-2.1234--|-3.2345--|- Cat C ... A13-----|- ... -|-0.1234--|-3.2345--|- Cat A A13-----|- ... -|-3.1234--|-0.1345--|- Cat B A13-----|- ... -|-2.1234--|-2.2345--|- Cat C A14-----|- ... -|-4.1234--|-4.2345--|- Cat C and we have about 10 million of such data points. We also have a test set with about .2 million data points where we need to accurately call them out into different classes. At this point, we are trying a mix of K-means & Random Forest approach coded in (python2.7-sklearn) which gives us about 90% accuracy at classification but we wanna reach for more. I am interested in applying some kind of deep learning approach to this and that's why I'm learning about TensorFlow. But every link I've gone through in deep learning talks (tf documentation + youtube videos) about CNN and image recognition (MNIST .. etc) only and I'm not finding any idea how to start solving this. I'm looking for your suggestions and guidance on how to approach this kind of problem / What neurons to stack or How to build models for this kind of data? I'm willing to burn my share of the midnight oil on any links and suggestions offered on this but I need to know if I'm even thinking right by trying to solve it with DL or if there's any other approach for this kind of data which may work better, or to solve this what should I study or learn? Edit 1: So, I managed to go through the TensorFlow documentation for MNIST and perhaps I can see some correlation. In the sense if I pass the rank1 & rank2 as input np.array for the 2 neuron input layer and each call as a one_hot np.array, where Cat A = [1,0,0], Cat B = [0,1,0], Cat C = [0,0,1] and put 3 neurons on the outermost layer each predicting one Cat (category) through a softmax function, will that work? Even if so, what should I use as internal/hidden layers? Can I use the other factors as well in the input layers? Do I need to convert them into numeric (int/float)? Should I pass these rank 1 and rank 2 values to the input layer just as they are or should I engineer them beforehand? AI: Yes, you can use a DL network to solve this problem. It is a straightforward multi-class classification task, and a simple fully connected network will do the job. To construct the network I recommend using Keras, which is easy to use. Before training the network, it is better to preprocess the data (standardizing the numeric columns, and embedding or one-hot encoding the string columns). EDIT: For the data normalisation you can reference this blog: Neural Network Data Normalization and Encoding, which shows the basic methods and code.
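For example, a minimal fully connected Keras sketch for this kind of three-class problem could look like the following; the data here is random stand-in data and the layer sizes are illustrative assumptions, not tuned values:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

X = np.random.rand(1000, 2)                    # stand-in for standardized Rank1, Rank2
y = np.eye(3)[np.random.randint(0, 3, 1000)]   # stand-in for one-hot Cat A/B/C labels

model = Sequential()
model.add(Dense(32, activation='relu', input_shape=(2,)))
model.add(Dense(32, activation='relu'))
model.add(Dense(3, activation='softmax'))      # one output unit per category

model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=64, verbose=0)
The other factors can be added as extra input columns once they are converted to numeric form (one-hot encoded or embedded).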
H: How can I get semantic word embeddings for compound terms? I need to build a semantic word embedding representation of compound terms like "electronic engineer" or "microsoft excel". One approach would be to use a standard pretrained model and average the words but, since I have a corpus of my domain, is there possibly a better approach? To be more precise: The data I have is a corpus of millions of documents. Each document is ~ half a page and contains these compound terms. However there may be compound terms not included in the corpus. Thanks AI: If you want an exact answer, please provide a precise question i.e. define what data you have, and what exactly you want. This said, generally speaking, you need a dataset of texts that contain these compound terms. How to treat compound terms is a whole scientific field in itself, but since you're talking about semantic word embeddings, I suggest you take a look at the article Distributed Representations of Words and Phrases and their Compositionality. The same guys who introduced word2vec describe here a simple method to go from word representations to phrase representations, giving, by the way, a way to merge compound terms into single tokens. The words "microsoft excel" become "microsoft_excel" and get their own unique embedding. If you want a python implementation for that, take a look at the gensim.models.phrases module. This does the same work as presented in the previous article.
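A hedged sketch of that workflow with gensim (the corpus variable and the thresholds are placeholders; in gensim 4+ the Word2Vec argument is vector_size instead of size):
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases, Phraser

sentences = [doc.lower().split() for doc in corpus]   # corpus: your list of documents

bigram = Phraser(Phrases(sentences, min_count=5, threshold=10.0))
merged = [bigram[s] for s in sentences]               # "microsoft excel" -> "microsoft_excel"

model = Word2Vec(merged, size=100, window=5, min_count=5)
vector = model.wv["microsoft_excel"]                  # embedding of the compound term
For compound terms that never appear in the corpus you can still fall back to averaging the embeddings of the individual words.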
H: Transformation of categorical variables I have data with continuous variables and categorical variables. I am using Random Forest and have made my continuous variables Gaussian by transformation and have standardized them. Should the categorical variables be treated the same way? AI: Once you encode the categorical variables (for example with one-hot encoding) you end up with several columns whose values are either true/false (or 0/1), so making them Gaussian would not help. Note also that tree-based models such as Random Forests are insensitive to monotonic transformations of the inputs, so the transformation and standardization of the continuous variables is not strictly required by the model either.
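For example, a minimal pandas sketch of the usual one-hot encoding (the column names here are made up):
import pandas as pd

df = pd.DataFrame({"colour": ["red", "blue", "red"], "weight": [1.2, 0.7, 1.5]})

# one-hot encode the categorical column; the resulting 0/1 indicator columns
# need no further scaling or Gaussian transformation
encoded = pd.get_dummies(df, columns=["colour"])
print(encoded)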
H: Understanding Logistic Regression Cost function Linear Regression cost function: $$J(\theta) = \frac{1}{2 m} \sum_{i=1}^m (h_{\theta}(x^{(i)}) - y^{(i)})^2$$ where: $$h_{\theta}(x) = \theta_0 + \theta_1 x_1$$ Logistic Regression cost function $$J(\theta) = -\frac{1}{m} \sum_{i=1}^m \left( y^{(i)} \log(h_{\theta}(x^{(i)})) + (1-y^{(i)}) \log(1 - h_{\theta}(x^{(i)})) \right)$$ where: $$h_{\theta}(x) = g(\theta_0 + \theta_1 x_1 + \theta_2 x_2)$$ Intuitively, linear regression is easy to understand as it optimizes the average squared distance between the hypothesis and the training data. But in the case of logistic regression, I fail to understand the cost function. What does the logistic regression cost function represent? AI: The way my intuition works for logistic regression is simple: say you are trying to classify whether a picture contains a dog; if it does, you output 1 (true), if not, you output 0 (false). The cost function essentially measures how far your model's predicted probability is from the true label. Strictly, it is not the raw difference: if the picture contains a dog and your model outputs 0.75 instead of 1, the cost for that example is $-\log(0.75) \approx 0.29$, and the cost grows very quickly (towards infinity) the more confidently wrong the prediction is. Now, this is not a very mathematical way to put it, but at least it has helped me a lot to understand logistic regression.
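To see how the per-example term behaves, here is a tiny NumPy sketch for the dog example with $y=1$:
import numpy as np

def example_cost(y, h):
    # the term inside the logistic regression sum for a single example
    return -(y * np.log(h) + (1 - y) * np.log(1 - h))

print(example_cost(1, 0.75))   # ~0.29: fairly confident, correct prediction, small cost
print(example_cost(1, 0.05))   # ~3.0:  confident but wrong prediction, large cost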
H: Logic in sentence : tree representation I have sentences telling me who a shop is open to: "cats, dogs or birds" (1) "young dogs with collar" (2) "old cats or yellow birds" (3) etc... I would like to design an algorithm that will change these sentences into a tree representation of the logic in them: (1) = (cat) or ((dog) or (bird)) (2) = (young) and (dog) and (collar) (3) = ((cat) and (old)) or ((bird) and (yellow)) What do you think will work best? LSTM maybe? How can I get this representation as a result? AI: The SyntaxNet parser could surely help you in parsing the sentences and in tree representation. If you plan on solving this with RNNs, I believe a Tree LSTM will be a better choice than a plain LSTM, as it also preserves dependency information. Full paper. Use a Tree LSTM if you need a vector embedding for the whole sentence, for use cases like classification or sentiment analysis. It works, and there is a good probability that the vector captures all the information, but you cannot be certain, and that's why it is still a black box. But if your use case is a clear representation of the tree structure and the logic among the terms (which is what you wanted), it is better to go with parsers like SyntaxNet and to try rule-based models on top of the parse.
H: Predict Two Variables vs Predict One Variable with Two Models Is there any difference if we predict two variables vs predict one variable with two models (so we have two variables predicted)? AI: I think OP meant a multi-class model that predicts an outcome variable with multiple classes versus building multiple separate binary classification models for each class. Indeed these two modeling techniques are different, and should be used differently according to the problem. Multi-class Classification Problems These are problems where you have to assign cases to a dependent variable with multiple categories/classes/outcomes. Classes are mutually exclusive, meaning that observations can be assigned to only one category at a time (the estimated probability for every category of $Y$ adds up to one). Many algorithms exist for this type of problem. For example Random Forest, Multinomial Logistic Regression, Boosted Trees, Linear Discriminant Analysis, etc. Each with their own set of assumptions. In the multi-class case, multinomial logistic regression actually picks a "pivot" class and runs a binary logistic regression for each class regressing on the pivot class. Multi-label Classification Problems These are problems where each case can be assigned to more than one category. The dependent variable is still multi-class, but we cannot use a multi-class classification model because the categories are not mutually exclusive. There exist several ways to deal with multi-label problems, one of which is to transform a multi-label problem into multiple distinct binary classification problems. This is known as a binary relevance transformation. This can be done by simply treating each class of the dependent variable as a binary outcome (in that specific class or not), and running a binary classification method (like binary logistic regression) on each of them. This is similar to the multinomial logistic regression case because multiple binary logistic regressions are run, but it is also different because each model in this case is independent and does not depend on a chosen pivot class. Multi-class or Multi-label? To answer your question: "Multi-class to predict a multi-category dependent or several binary models to predict each class?". It really depends on your problem. Do you want to assign multiple classes to each observation? If so, you have a multi-label problem and the binary relevance transformation is a good way to model it. Are the categories in your dependent variable instead mutually exclusive? In this case, use multi-class classification models. Note: The downside of binary relevance is that it treats each category of the dependent variable as independent, thus ignoring any dependencies across different classes. A label powerset transformation is a good alternative. This transforms the dependent variable into a multi-class variable with each class representing the occurrence of a combination of the original classes. A multi-class model is then run on these new combination classes. This takes into account the co-occurrence of classes, not just single occurrences.
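As a hedged scikit-learn sketch of the binary relevance transformation (one binary logistic regression per label; X and Y here are random placeholders, and Y is a binary indicator matrix with one column per label):
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

X = np.random.rand(100, 5)
Y = np.random.randint(0, 2, size=(100, 3))   # 3 non-exclusive labels per sample

clf = OneVsRestClassifier(LogisticRegression())
clf.fit(X, Y)
print(clf.predict(X[:5]))                    # each row may contain several 1s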
H: Criterion for Firing a perceptron The criterion for firing a perceptron is as follows. Why is it that when $w \cdot x + b = 0$ the output is zero as well? Why couldn't it have been set to 1? If one were to simulate the behavior of the perceptron with a sigmoid function, $w \cdot x + b$ can be multiplied by any arbitrary constant $C>0$ (as $C$ tends to infinity) as long as $w \cdot x+b$ is not equal to 0. This is because at the 0 condition, the firing always gives an output of 1 instead of the zero it should have given. However, changing the firing rule overcomes this problem. Then, is there any reason why 0 should be the output when $w \cdot x+b= 0$? AI: The perceptron algorithm was invented in 1957 at the Cornell Aeronautical Laboratory by Frank Rosenblatt (Wikipedia EN). It was the first neural-network-like architecture. It was thought that (a combination of) perceptrons could learn anything. There was a lot of controversy about this, and after a while Marvin Minsky showed that a single perceptron could not solve the XOR problem. This was the end of the first neural network hype. After a couple of decades the multilayer perceptron was invented, and with it the backpropagation algorithm. The backpropagation algorithm can learn weights that are not directly connected to the output unit. However, it needs a differentiable activation function. Hence, the step activation function was not sufficient anymore, and others like the sigmoid and tanh were used instead. Besides, the perceptron was first implemented using electronic boards and not with the software available to us today. Binary thresholds made this complicated process a lot easier. So, in short, Rosenblatt could have chosen any version of the Heaviside step function, but he chose the one where $w \cdot x + b = 0$ leads to a zero output. If you think in binary values this makes sense: a zero input (i.e. False) leads to a zero output.
H: Adding hand-crafted features to a convolutional neural network (CNN) in TensorFlow Let's say I want to add a few hand-crafted features to a convolutional neural network (CNN) in TensorFlow. The CNN can be a simple one as described here. Naturally I'd like to add these features right after the second pooling and right before the first fully-connected layer (FC1 in the example). Is it easy to express my method in code? I'd have to append my features to the h_pool2_flat vector/tensor: h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) AI: I figured it out. If we denote the additional features as x_feat, I changed the lines from h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) to h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64]) h_pool2_flat = tf.concat( [h_pool2_flat, x_feat ], 1 ) h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1) Note that W_fc1 must then be created with a first dimension of 7*7*64 plus the number of additional features, otherwise the matmul shapes will not match.
H: Is it possible to use grayscale images with an existing model? Using TensorFlow's object recognition (R-CNN), I'm re-training the existing model with new categories: the types of clothes (jeans, pants, blouse, and so on). Since we don't need colors to determine the type of clothes a user is wearing, I want to re-train it with grayscale images. Is it possible to use grayscale images to train an existing model (which was trained with color images)? I'm concerned because they trained their model with color images. Does the model just treat the grayscale image as a color image? And does it still work? :) p.s I'm generating XML and csv files to hold the data for training and testing. AI: It depends on the model. You'll have to dig into the model definition and see. It's not uncommon for models to convert images to grayscale in pre-processing. In that case, you will be fine. However, many recent deep convolutional models operate on RGB images as 3D tensors (4D with the batch dimension). In this case, you should consider modifying the model so that it operates on a single color channel before training it. Multi-channel convolution is discussed in the convolutional networks chapter of Goodfellow's Deep Learning.
H: Why does changing one element of a vector change all variables? Today I encountered this strange behavior in Python doing data manipulation. Why does changing a affect b below: >>> a = ['Hello', 1, 2] >>> b = a >>> a[0] = 5 >>> b [5, 1, 2] I only asked a to change, why is b changing? But the following is fine, >>> a = 3 >>> b = 4 >>> a 3 >>> b 4 My guess is that I am doing passing by reference(?). But if both cases are passing by reference, what is going on here? AI: In your first case, b = a does not copy the list; it simply binds the name b to the same list object that a already refers to, so mutating the list through a is visible through b as well. To avoid that, make a copy. For nested structures you should use deepcopy in Python, so that edits to the copy are not reflected in the original. For example, import copy a = ['Hello', 1, 2] b = copy.deepcopy(a) a[0] = 5 print(b) # This will give the original list >> ['Hello', 1, 2] This is the documentation that will be helpful. Another way to get around it, as @Sophie mentioned in the comment, is slicing, which makes a shallow copy and is enough for a flat list like this one: b = a[:] a[0] = 5 print(a) >> [5, 1, 2] print(b) >> ['Hello', 1, 2] In your second case, a = 3 and b = 4 bind the two names to different objects, so changing one has no effect on the other.
H: Slice rows in R based on column value I have multi-touch attribution data like: medium conversion 1 organic 0 2 (none) > referral > referral > (none) > (none) > referral 0,0,0,0,0,0 3 (none) 0 4 organic > referral > referral 0,1,0 5 referral > referral > referral > referral 0,0,1,0 6 organic > referral > referral > (none) > referral 0,1,0,1,0 I'd like to remove rows with no conversions (like rows 1, 2 and 3) and I tried grepl but couldn't make it work. For the remaining rows, how do I split the rows so each ends with a conversion, e.g. row 4 will be organic > referral 0,1 and row 6 will split into organic > referral 0,1 and organic > referral > referral > (none) 0,1,0,1 AI: This can be done with packages from the tidyverse made by Hadley Wickham, most notably dplyr. The stringr package (also made by Hadley) is really helpful in working with vectors of strings. Another package, purrr, is helpful for applying functions to lists. First, let's import the libraries and create data: library(tidyverse) library(stringr) df = tibble( medium = c("organic", "(none) > referral > referral > (none) > (none) > referral", "(none)", "organic > referral > referral", "referral > referral > referral > referral", "organic > referral > referral > (none) > referral"), conversion = c("0", "0,0,0,0,0,0", "0", "0,1,0", "0,0,1,0", "0,1,0,1,0") ) You probably have the data as strings, while a list representation would be much easier to work with. The following code converts the strings to lists by splitting on " > " or "," (resp. for medium and conversion). Also read up on the %>% operator, which is really handy for working with data frames. After converting the strings to lists the conversion column can be made boolean by mapping the == operator on each row. df <- df %>% mutate(medium = str_split(medium, " > "), conversion = str_split(conversion, ","), conversion = map(conversion, `==`, "1")) Removing the items after the first conversion is now a matter of simple indexing. A little magic is done with dplyr::lag and cumsum to get a boolean list that indicates all the steps before and during conversion. Then map2 is used to get (using the [ operator) all the mediums by boolean indexing with no_conversion_yet. df <- df %>% mutate(no_conversion_yet = map(conversion, function(x) dplyr::lag(cumsum(x) < 1, default=TRUE))) %>% mutate(medium = map2(medium, no_conversion_yet, `[`)) Filtering the rows that do not have a conversion is now easy. Simply remove all the rows that do not have any TRUE value in the conversion column. df <- df %>% mutate(any_conversion = map_lgl(conversion, any)) %>% filter(any_conversion) Voila, life made easy by the superb packages of Hadley Wickham! (Also check his paper.)
H: How to determine threshold in Sigmoid function Context: I picked up a data-set from here and tried to run Logistic Regression on it. Since I am not very much aware of MATLAB, I converted "Strings" to "Numbers" on my own using the "Numbers" software. What I want to achieve: After running the LR algorithm, when I tried to predict the value of existing data points, I am getting values ranging between 0-1 (as it should be), but since my job is to predict whether it is either 0 (yes) or 1 (no), that means I need to find a cut-off line (threshold) in my prediction (this could probably be done by comparing the actual values with the predicted values). Question: How can I figure out the threshold for the predicted result such that the result is assumed to be 1 if the predicted value > threshold, otherwise 0? Predicted values can be found here. I am assuming the predicted values are correct as the cost curve is showing asymptotic nature. I have pushed my work here, you may want to cross-validate and provide me a few more key points. AI: As per Andrew Ng's Course, if you use the sigmoid activation, the outputs represent the probability of either outcome 0 or outcome 1. So the decision boundary is 0.5: if prediction > 0.5, the prediction is 1; if prediction <= 0.5, the prediction is 0. Here's a screenshot from Andrew Ng's slides: I have gone through your code and results. There seems to be something wrong with the implementation as none of your predictions give a value greater than 0.5. I couldn't pin down the problem. Some debugging will be needed on your side. What's paradoxical is that your loss is reducing. I suspect this might be because your data set is unbalanced i.e., you have 221 0s and about 30 1s. This could be the reason for other problems as well. Consider the wikipedia example, where the values are correctly matching. If you manage to find the error or the solution conclusively, please post it here so that we can all learn. Hope this helps!
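Applying the 0.5 rule to your vector of predictions is a one-liner; a small NumPy sketch (with predicted standing in for your predicted values):
import numpy as np

probs = np.array(predicted)          # your vector of sigmoid outputs in [0, 1]
preds = (probs > 0.5).astype(int)    # decision rule described above

# with 221 zeros and ~30 ones the data is unbalanced, so also check how many
# positives the model actually produces at this threshold
print(preds.sum(), "predicted ones out of", len(preds))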
H: Generation/Synthesis of Data with CNNs Is it possible to use a trained CNN to generate data? After training on data of X and corresponding Ys, given a new Y to generate X? Or do I have to use GANs or RNNs? I'm still pretty new to the subject so I would be happy to have some leads, if I have to go in this direction. Currently I am learning TensorFlow and trying to implement the "Convolutional Sequence to Sequence Learning" paper. Clarification: I want to use a CNN to generate sequences of text. I have a really big database of descriptions and also one of 1000 classes, and wanted to try it out with CNNs (since I'm coming from image classification) AI: In general, if you want to generate data in X, or X, Y pairs, then you should start by training a generative model - as opposed to a discriminative model, which is what most NN classifiers are. There are many types of generative model. Variational Autoencoders (VAE) and Generative Adversarial Networks (GAN) have been demonstrated recently with interesting results on images - although both take a lot of training data and time, and are limited to relatively small image dimensions (e.g. 128x128). There are many sub-types of these two designs, including a combined VAEGAN which attempts to combine strengths of both. With a trained CNN, you can generate data - sort of. What you can do is start with some arbitrary value of X and Y, then use back propagation to calculate gradients of a cost function. But instead of using the gradients to update weights, you back propagate all the way to the input, and use the gradients at the input to alter X, repeating the process multiple times. This is essentially how Deep Dream and Style Transfer work (although in general these don't use a Y value, but selected activation values within layers). There is a major caveat to this approach - your generated X will not be sampled evenly from any distribution of X that the network has been trained with. Instead you will generate a "super stimulus" X for the given Y. You mention RNNs. One way these can be used to generate X is by sampling from their output and feeding this back into the input. For text sequences this tends to generate grammatically correct nonsense. I am not sure if this would be considered a strictly generative model, since it is not clear to me whether the input X is being sampled evenly. It is likely that you could use the approach to generate images too, although you would have to take care defining what the sequence is (just a sequence of pixels line-by-line will probably not produce any recognisable image).
H: Identifying important interactions between features using machine learning Let's say I have a set of features: a, b, c, d, e, f. I'm now interested in identifying possible interactions between these features that best predict an outcome. For example, it could be that the features a, f, and the interactions a:b:g, d:f and c:e are the 5 most important factors that predict the outcome. It is not only important for an algorithm to account for feature interactions, but I also want to be able to identify these interactions. How could I approach this problem with machine learning? AI: I think that it is important, in this situation, to ask yourself why you are using machine learning to detect the interactions. It feels a bit like data dredging. Using domain knowledge to think about what interactions would be likely or feasible given the phenomenon you are studying may well serve you well. That said, you can fit a tree model using, for instance, R. This will allow you to detect complex and high order interactions. You can see this as an example on page 30 here. Note that I would probably use partykit for fitting the tree model because it allows you to use non-normal distributions. This method, though, works best if you have a very large data set and you may struggle to fit them in a linear model once you have found them (because they can only occur on one side of the tree). An alternative method that addresses some of the limitations above is to use a technique called Additive Groves. These work on the principle of observing the relative performance of differently restricted tree models. Because interaction effects are not additive, this technique can allow for the identification of interactions.
H: Orange 3 "Find Informative Projections" and target variables? Having issues with the "find informative projections" feature in Orange 3. In order to be able to use this feature in the scatterplots, you have to select a target variable in the "select columns". So I choose the value that I'm most interested in finding correlations to. However, whatever I choose as a target variable doesn't show up in the informative projections list as being correlated to other features in my data set. Why not? I have a lot of features in my data and was under the impression that the "find informative projections" option finds pairs of features that seem to have a linear correlation and sorts them by which pairs are most correlated. If that's the case, why do you need a target variable? How do I choose an appropriate target variable? Am I misunderstanding the purpose of the "find informative projections" feature? I'm attaching a pic of my work flow. AI: As you said, "find informative projections" gives you the best pair of features ("score plots") to explain the target variable. The two best features will be on the x- and y-axis, while your target variable will be the color (hue for numeric, categorical otherwise). Here is an example for the Iris dataset. Where "iris" is the target --> color, and petal length and petal width are the most informative features, followed by petal width and sepal width, and so on.
H: Is it valuable to normalize/rescale labels in neural network regression? Have there been any papers, or does anyone have any specific experience to know whether normalizing labels in a regression problem is likely to improve the performance of a neural network? I have labels that are in the range (0,1000) applying square loss in a ConvNet. I want to know if it might be useful to normalize these to a (0,1) range, or whether that's known not to matter. AI: Yes, you should do this. Given the initialization schemes and normalized inputs, the expected values for the outputs are 0. This means that you will not be too far off from the start, which helps convergence. If your target is 1000, your mean squared error will be huge, which means your gradients will also be huge, which can lead to numerical instability.
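A minimal sketch of that idea (scikit-learn's MinMaxScaler; X_train, y_train, X_test and model are placeholders for your own data and ConvNet): scale the labels to [0, 1] before fitting and invert the scaling on the predictions.
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
y_train_scaled = scaler.fit_transform(y_train.reshape(-1, 1))   # (0, 1000) -> (0, 1)

model.fit(X_train, y_train_scaled)                              # train on scaled labels
preds = scaler.inverse_transform(model.predict(X_test).reshape(-1, 1))  # back to the original range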
H: Sales Forecasting - Random Forest - Which features should I use for out of sample forecasting? Sorry for the bad title, I can't find a good one. So I will try to explain what I'm looking for. I'm doing sales forecasting with a Regression Forest. (Spark - Scala for the technology) I've worked on some test data and I did my forecast using training data. But some of the features which I have used can't be employed to forecast the future as they would not be known to me at any given time. For example the number of customers in a day, their categories, what kind of advantage they have etc. Do I have to find other features that will be as useful as these ones, or do I need to perform prediction on these features before my sales forecasting and use the predictions? Are there any other solutions? Also, what kind of algorithms should I use for the "features forecasting"? AI: The question is poorly phrased, I've tried to edit it to the best of my abilities. However here are the problems you've stated: Some of the features which are being used now can't be used later because they might not be known. Will it affect the model? If they can be used, what type of algorithm can be chosen? For the first problem: check the accuracy without those features first; if the remaining features give good enough accuracy, then there is no need to look for new ones. For the second problem: to predict the future values of the features you are using now, try classification algorithms if they are discrete in nature (whichever fits the data best), or something along the lines of regression if they are continuous. Then use these predicted values along with your existing model and check how the accuracy varies.
H: What is the meaning of hand crafted features in computer vision problems? Are these features that are manually labelled by humans, or is there a technique for obtaining these features? Is this related to learned features? AI: "Hand Crafted" features refer to properties derived, using various algorithms, from the information present in the image itself. For example, two simple features that can be extracted from images are edges and corners. A basic edge detector algorithm works by finding areas where the image intensity "suddenly" changes. To understand that we need to remember that a typical image is nothing but a 2D matrix (or multiple matrices or a tensor or n-dimensional array, when you have multiple channels like Red, Green, Blue, etc). In the case of an 8-bit gray-scale image (or a "black and white" image, although this latter definition is not quite accurate) the image is typically a 2D matrix with values ranging from 0 to 255, with 0 being completely black and 255 being completely white. Now imagine an image of a blackboard set against a totally white wall. As we move left-to-right in the image the values in one of the rows of the matrix might look like 255-255-255... since we will be "moving" along the wall. However, when we are about to hit the blackboard in the image it might look like 255-255-0-0-0... As you might have guessed, the blackboard "begins" in the image where the zeros start. In other words, the "intensity" of the image along the "x" axis has dropped rather suddenly (a very large negative gradient along x), which means a typical edge detector will consider it to be a good candidate for an edge. The algorithm that we just saw is only the most basic of algorithms, and others like Harris corner detectors and HOG (Histogram of Oriented Gradients) descriptors use slightly more "sophisticated" approaches. Actually, even the Canny edge detector does a lot more than what I just described, but that is beside the point. The point is that once you understand that an image is nothing more than a data matrix, or an n-dimensional array, the other algorithms are not that difficult to understand either. As regards your last question: Is this related to learned features? The "handcrafted features" were commonly used with "traditional" machine learning approaches for object recognition and computer vision like Support Vector Machines, for instance. However, "newer" approaches like convolutional neural networks typically do not have to be supplied with such hand-crafted features, as they are able to "learn" the features from the image data. Or to paraphrase Geoff Hinton, such feature extraction techniques were "what was common in image recognition before the field became silly".
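A minimal NumPy sketch of the blackboard example above (the threshold of 100 is an arbitrary choice):
import numpy as np

# toy gray-scale row: white wall, then the blackboard begins
row = np.array([255, 255, 255, 255, 0, 0, 0], dtype=float)

gradient = np.diff(row)          # horizontal intensity change
edges = np.abs(gradient) > 100   # large magnitude -> edge candidate
print(gradient)                  # [   0.    0.    0. -255.    0.    0.]
print(edges)                     # [False False False  True False False]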
H: How to model to predict hotel booking abnormality? We are trying to build a model by gathering booking data for specific hotels and trying to find the pattern of how the hotel is booked: how long guests stay, what type of people stay, and how many bookings there are on average per day. The bookings might vary from weekdays to weekends, from winter to summer, and between normal days and vacation periods. All these factors have to be accounted for. Then, as time passes, we want to know if the bookings become abnormal, e.g. normally young couples book the hotel quite a lot, and all of a sudden a bunch of business people check in for a couple of days. Since in this case we don't have samples/labels for normality and abnormality, we started thinking of using unsupervised learning, like clustering, for a start. Say we construct a sample (booking features for every week) going back all the way to the beginning of the year. Then we cluster them, and every week we compute the features for the current week and see whether it belongs to any cluster, or whether it is an abnormal point that stands out and requires attention. Is this a reasonable approach or are there better ways? AI: Look at the distribution of the features you want to consider for anomalies (e.g., user attributes) conditioned on the date, so you do not trigger a warning for normal seasonality. An anomaly then is when the current conditional distribution is significantly different from the historical average. For more information look into contextual anomaly detection. Welcome to the site and good luck!
H: Do Clustering algorithms need feature scaling in the pre-processing stage? Is feature scaling useful for clustering algorithms? What type of features, I mean numeric, categorical etc., are most efficient for clustering? AI: Clustering algorithms are certainly affected by feature scaling. Example: Let's say that you have two features: weight (in Lbs) height (in Feet) ... and we are using these to predict whether a person needs an 'S' or 'L' size shirt. We are using weight+height for that, and in our training set let's say we have two people already in clusters: Adam (175Lbs+5.9ft) in 'L' Lucy (115Lbs+5.2ft) in 'S'. We have a new person - Alan (140Lbs+6.1ft.), and your clustering algorithm will put him in the cluster which is nearest. So, if we don't scale the features here, height has barely any effect (its numeric range is tiny compared to weight) and Alan will be assigned to the 'S' cluster. So, we need to scale it. Scikit Learn provides many functions for scaling. One you can use is sklearn.preprocessing.MinMaxScaler.
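Continuing the example, a minimal scikit-learn sketch:
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# weight (Lbs), height (ft) for Adam, Lucy and Alan
X = np.array([[175, 5.9],
              [115, 5.2],
              [140, 6.1]])

X_scaled = MinMaxScaler().fit_transform(X)
print(X_scaled)   # both features now live in [0, 1], so height matters as much as weight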
H: Using TF-IDF with other features in scikit-learn What is the best/correct way to combine text analysis with other features? For example, I have a dataset with some text but also other features/categories. scikit-learn's TF-IDF vectorizer transforms text data into sparse matrices. I can use these sparse matrices directly with a Naive Bayes classifier for example. But what's the way to also take into account the other features? Should I de-sparsify the tf-idf representation of the text and combine the features and the text into one DataFrame? Or can I keep the sparse matrix as a separate column for example? What's the correct way to do this? AI: scikit-learn's FeatureUnion concatenates features from different vectorizers. An example of combining heterogeneous data, including text, can be found here.
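If you prefer to skip FeatureUnion's custom transformers, a simpler hedged sketch is to keep the TF-IDF output sparse and stack the extra numeric features next to it with scipy.sparse.hstack (the column names are made up; LogisticRegression is used here because MultinomialNB would require the extra features to be non-negative):
import scipy.sparse as sp
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(df["text"])                       # sparse TF-IDF matrix
X_other = sp.csr_matrix(df[["feature_a", "feature_b"]].values) # other numeric features

X = sp.hstack([X_text, X_other]).tocsr()                       # no densifying needed
clf = LogisticRegression().fit(X, df["label"])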
H: Understanding predict_proba from MultiOutputClassifier I'm following this example on the scikit-learn website to perform a multioutput classification with a Random Forest model. from sklearn.datasets import make_classification from sklearn.multioutput import MultiOutputClassifier from sklearn.ensemble import RandomForestClassifier from sklearn.utils import shuffle import numpy as np X, y1 = make_classification(n_samples=5, n_features=5, n_informative=2, n_classes=2, random_state=1) y2 = shuffle(y1, random_state=1) Y = np.vstack((y1, y2)).T forest = RandomForestClassifier(n_estimators=10, random_state=1) multi_target_forest = MultiOutputClassifier(forest, n_jobs=-1) multi_target_forest.fit(X, Y).predict(X) print(multi_target_forest.predict_proba(X)) From this predict_proba I get two 5x2 arrays: [array([[ 0.8, 0.2], [ 0.4, 0.6], [ 0.8, 0.2], [ 0.9, 0.1], [ 0.4, 0.6]]), array([[ 0.6, 0.4], [ 0.1, 0.9], [ 0.2, 0.8], [ 0.9, 0.1], [ 0.9, 0.1]])] I was really expecting a n_sample by n_classes matrix. I'm struggling to understand how this relates to the probability of the classes present. The docs for predict_proba state: array of shape = [n_samples, n_classes], or a list of n_outputs such arrays if n_outputs > 1. The class probabilities of the input samples. The order of the classes corresponds to that in the attribute classes_. I'm guessing I have the latter in the description, but I'm still struggling to understand how this relates to my class probabilities. Furthermore, when I attempt to access the classes_ attribute for the forest model I get an AttributeError and this attribute does not exist on the MultiOutputClassifier. How can I relate the classes to the output? print(forest.classes_) AttributeError: 'RandomForestClassifier' object has no attribute 'classes_' AI: Assuming your target classes are (0,1), each underlying classifier outputs a probability matrix of dimension (N,2): the first column is the probability that the sample belongs to class 0, the second column is the probability that it belongs to class 1, and the two columns sum to 1 for every row. Because MultiOutputClassifier fits one classifier per output column of Y, predict_proba returns a list with one such (N,2) array per output: the first array in your list is for y1 and the second for y2. So to get the probability of class 1 for the first output you would use: probability_class_1 = multi_target_forest.predict_proba(X)[0][:, 1] If an output had k classes, its array would be (N,k) and the columns would follow the order given by that estimator's classes_ attribute. The AttributeError occurs because forest itself is never fitted; MultiOutputClassifier fits a clone of it for each output.
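A small sketch of how to recover the class ordering, assuming the code from the question has already run (estimators_ holds the fitted per-output clones):
# the wrapped forest object is never fitted itself; MultiOutputClassifier fits one
# clone per output column and stores them in estimators_, each with its own classes_
print(multi_target_forest.estimators_[0].classes_)   # class order for the first output (y1)
print(multi_target_forest.estimators_[1].classes_)   # class order for the second output (y2)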
H: Data Visualization Tool recommendations I need a tool that lets me create a solution that fulfills the following requirements: Needs to be interactive and user friendly: Possibility to apply filters, change dimensions, etc.. Needs to be updated online: Must be able to process data in real time. Needs to be able to handle huge amounts of data and support big data technologies. Bonus points for Python or R libraries but it's definitely not a must. I was thinking of getting started with D3py or Google Charts but I don't know if they are the best options. What would you recommend and why? AI: Some tools/software are listed below. Most of these can do what you want but require input in other areas such as in the development of a UI, or the tool may have a learning curve to overcome. Google Charts: I guess this sets the benchmark. Really user friendly, has a large chart gallery and can process data in real time. Tableau: Has two variants, "server" and "cloud version". Can handle big data but may require a licence depending on your domain. D3: A JS library for data viz and is lightweight. User friendliness and interactivity depend on how you design the UI. Can also display data in real time. Fusion Charts: Another JS library for web and mobile devices. Deals with data in XML or JSON format. Requires a licence. Qlikview: One of Tableau's competitors. Highly customisable, so there may be a learning curve to get accustomed to the tool. Microsoft Power BI: Surprisingly good. Ticks all your boxes.
H: Deep Neural Network using Keras/Tensorflow solves Spiral Dataset Classification. But Accuracy is stuck around 50% I have created a deep neural network that solves the spiral dataset classification problem. However, when measuring the performance, the accuracy goes up and down but always stays at around 50% - which is of course very bad. The image below shows loss and accuracy of 100 epochs of training. How can I fix this? I have done research and I don't see where the error is in my code. Is the error in the architecture of my network? My code: # Make sure we have the required libraries loaded library(keras) library(tensorflow) library(ggplot2) # Load the data spiralData = read.table("spiral.data", header=TRUE) # Visualize the data qplot(x, y, data = spiralData, colour = label) # Store the data in features and labels. x<-c(spiralData$x) y<-c(spiralData$y) features <- matrix(c(x,y),nrow=length(x)) labels <- matrix(spiralData$label) # Create model. model <- keras_model_sequential() # Add layers and compile the model. # Our model consists of 4 hidden layers, each with 6 neurons. model %>% layer_dense(units = 6, activation = 'tanh', input_shape = c(2)) %>% layer_dense(units = 6, activation = 'tanh') %>% layer_dense(units = 6, activation = 'tanh') %>% layer_dense(units = 6, activation = 'tanh') %>% layer_dense(units = 1, activation = 'sigmoid') %>% compile( optimizer = 'rmsprop', loss = 'binary_crossentropy', metrics = c('accuracy') ) # Train the model, iterating on the data in batches of 32 samples. # Also, visualize the training process. model %>% fit(features, labels, epochs=100, batch_size=32) # Evalute the model score = model %>% evaluate(features, labels, batch_size=32) print(score) My "spiral.data" dataset: x y label 1 0 1 -1 0 0 0.971354 0.209317 1 -0.971354 -0.209317 0 0.906112 0.406602 1 -0.906112 -0.406602 0 0.807485 0.584507 1 -0.807485 -0.584507 0 0.679909 0.736572 1 -0.679909 -0.736572 0 0.528858 0.857455 1 -0.528858 -0.857455 0 0.360603 0.943128 1 -0.360603 -0.943128 0 0.181957 0.991002 1 -0.181957 -0.991002 0 -3.07692e-06 1 1 3.07692e-06 -1 0 -0.178211 0.970568 1 0.178211 -0.970568 0 -0.345891 0.90463 1 0.345891 -0.90463 0 -0.496812 0.805483 1 0.496812 -0.805483 0 -0.625522 0.67764 1 0.625522 -0.67764 0 -0.727538 0.52663 1 0.727538 -0.52663 0 -0.799514 0.35876 1 0.799514 -0.35876 0 -0.839328 0.180858 1 0.839328 -0.180858 0 -0.846154 -6.66667e-06 1 0.846154 6.66667e-06 0 -0.820463 -0.176808 1 0.820463 0.176808 0 -0.763975 -0.342827 1 0.763975 0.342827 0 -0.679563 -0.491918 1 0.679563 0.491918 0 -0.57112 -0.618723 1 0.57112 0.618723 0 -0.443382 -0.71888 1 0.443382 0.71888 0 -0.301723 -0.78915 1 0.301723 0.78915 0 -0.151937 -0.82754 1 0.151937 0.82754 0 9.23077e-06 -0.833333 1 -9.23077e-06 0.833333 0 0.148202 -0.807103 1 -0.148202 0.807103 0 0.287022 -0.750648 1 -0.287022 0.750648 0 0.411343 -0.666902 1 -0.411343 0.666902 0 0.516738 -0.559785 1 -0.516738 0.559785 0 0.599623 -0.43403 1 -0.599623 0.43403 0 0.65738 -0.294975 1 -0.65738 0.294975 0 0.688438 -0.14834 1 -0.688438 0.14834 0 0.692308 1.16667e-05 1 -0.692308 -1.16667e-05 0 0.669572 0.144297 1 -0.669572 -0.144297 0 0.621838 0.27905 1 -0.621838 -0.27905 0 0.551642 0.399325 1 -0.551642 -0.399325 0 0.462331 0.500875 1 -0.462331 -0.500875 0 0.357906 0.580303 1 -0.357906 -0.580303 0 0.242846 0.635172 1 -0.242846 -0.635172 0 0.12192 0.664075 1 -0.12192 -0.664075 0 -1.07692e-05 0.666667 1 1.07692e-05 -0.666667 0 -0.118191 0.643638 1 0.118191 -0.643638 0 -0.228149 0.596667 1 0.228149 -0.596667 0 -0.325872 0.528323 1 0.325872 
-0.528323 0 -0.407954 0.441933 1 0.407954 -0.441933 0 -0.471706 0.341433 1 0.471706 -0.341433 0 -0.515245 0.231193 1 0.515245 -0.231193 0 -0.537548 0.115822 1 0.537548 -0.115822 0 -0.538462 -1.33333e-05 1 0.538462 1.33333e-05 0 -0.518682 -0.111783 1 0.518682 0.111783 0 -0.479702 -0.215272 1 0.479702 0.215272 0 -0.423723 -0.306732 1 0.423723 0.306732 0 -0.353545 -0.383025 1 0.353545 0.383025 0 -0.272434 -0.441725 1 0.272434 0.441725 0 -0.183971 -0.481192 1 0.183971 0.481192 0 -0.0919062 -0.500612 1 0.0919062 0.500612 0 1.23077e-05 -0.5 1 -1.23077e-05 0.5 0 0.0881769 -0.480173 1 -0.0881769 0.480173 0 0.169275 -0.442687 1 -0.169275 0.442687 0 0.2404 -0.389745 1 -0.2404 0.389745 0 0.299169 -0.324082 1 -0.299169 0.324082 0 0.343788 -0.248838 1 -0.343788 0.248838 0 0.373109 -0.167412 1 -0.373109 0.167412 0 0.386658 -0.0833083 1 -0.386658 0.0833083 0 0.384615 1.16667e-05 1 -0.384615 -1.16667e-05 0 0.367792 0.0792667 1 -0.367792 -0.0792667 0 0.337568 0.15149 1 -0.337568 -0.15149 0 0.295805 0.214137 1 -0.295805 -0.214137 0 0.24476 0.265173 1 -0.24476 -0.265173 0 0.186962 0.303147 1 -0.186962 -0.303147 0 0.125098 0.327212 1 -0.125098 -0.327212 0 0.0618938 0.337147 1 -0.0618938 -0.337147 0 -1.07692e-05 0.333333 1 1.07692e-05 -0.333333 0 -0.0581615 0.31671 1 0.0581615 -0.31671 0 -0.110398 0.288708 1 0.110398 -0.288708 0 -0.154926 0.251167 1 0.154926 -0.251167 0 -0.190382 0.206232 1 0.190382 -0.206232 0 -0.215868 0.156247 1 0.215868 -0.156247 0 -0.230974 0.103635 1 0.230974 -0.103635 0 -0.235768 0.050795 1 0.235768 -0.050795 0 -0.230769 -1e-05 1 0.230769 1e-05 0 -0.216903 -0.0467483 1 0.216903 0.0467483 0 -0.195432 -0.0877067 1 0.195432 0.0877067 0 -0.167889 -0.121538 1 0.167889 0.121538 0 -0.135977 -0.14732 1 0.135977 0.14732 0 -0.101492 -0.164567 1 0.101492 0.164567 0 -0.0662277 -0.17323 1 0.0662277 0.17323 0 -0.0318831 -0.173682 1 0.0318831 0.173682 0 6.15385e-06 -0.166667 1 -6.15385e-06 0.166667 0 0.0281431 -0.153247 1 -0.0281431 0.153247 0 0.05152 -0.13473 1 -0.05152 0.13473 0 0.0694508 -0.112592 1 -0.0694508 0.112592 0 0.0815923 -0.088385 1 -0.0815923 0.088385 0 0.0879462 -0.063655 1 -0.0879462 0.063655 0 0.0888369 -0.0398583 1 -0.0888369 0.0398583 0 0.0848769 -0.018285 1 -0.0848769 0.018285 0 0.0769231 3.33333e-06 1 -0.0769231 -3.33333e-06 0 Visualized, the dataset looks like this: AI: Your network is actually working, it just takes a lot of epochs to learn the spiral. In fact you can see from your learning curves that learning is still occurring, just not much per epoch. Try 60,000 epochs . . . when I try your model (in Python, but still same data and model) using 60,000 epochs I get loss under 0.0001 and accuracy of 100% reliably. There are a few factors involved in why you need this amount of iteration: The data set size is small, which means you get less updates to weights per epoch. You need to compensate by increasing the number of iterations. You have a "starved" network topology that can just about learn the spiral, but needs to be quite precisely optimised before it starts performing well. You could increase the number of neurons per hidden layer slightly. Or maybe adding another hidden layer: If I add one more hidden layer, size 6, tanh activation, the network learns 100% accuracy in under 20,000 epochs. If instead, I increase the original four hidden layers to size 16, the network learns 100% accuracy in under 10,000 epochs. tanh is not optimal for a deep network, because the gradient diminishes in the deeper parts of the model. 
RMSProp will compensate for that, but still changing to relu will improve convergence speed. If I use the four-hidden-layer model, with layer size 16 and relu activation, the network converges to 100% accuracy on the training data set in around 2000 epochs. import pandas as pd import numpy as np np.random.seed(4375689) from keras.models import Sequential from keras.layers import Dense from keras.optimizers import RMSprop train_data = pd.read_csv('spirals.csv').values train_X = train_data[:,0:2] train_y = train_data[:,2] model = Sequential() model.add(Dense(16, activation='relu', input_shape=(2,))) model.add(Dense(16, activation='relu')) model.add(Dense(16, activation='relu')) model.add(Dense(16, activation='relu')) model.add(Dense(1, activation='sigmoid')) model.compile(loss='binary_crossentropy', optimizer=RMSprop(), metrics=['accuracy']) history = model.fit(train_X, train_y, batch_size=32, epochs=2000, verbose=0) score = model.evaluate(train_X, train_y, verbose=0) print(score)
H: Approach to classify spatial trajectories of vessels I'm trying to create a classifier to distinguish different boats by their trajectories. I have training data of the longitude and latitude of a boat and time in seconds. Vessels like a ferry will have a straight, predictable trajectory between two points, whereas fishing vessels can have zig-zag like trajectories, for example. My initial approach is to create features, for example the mean speed, standard deviation of the speed, and standard deviation of the course, such that each trajectory table is distilled into 1 row of features. Then I can train something like a random forest classifier on these rows. Is this a good approach? Are there any other suggestions that could account for the characteristic trajectory shapes? Thanks AI: Your data contains spatial and temporal data, which means that you can account for location and speed. Not sure how global the coordinates are, but perhaps grouping them per region might be interesting. It would be like plotting all the coordinates you're given and defining zones of interest (zone A, B, C, etc.). Perhaps some boats only sail in specific regions. Indeed, yachts might be found more often in the Caribbean than in the Arctic sea. Of course, the number of zones you want to define needs to be fine-tuned. Assuming that your data accounts for an entire trip from port A to B, you could, of course, use the length of the journey as a feature, as I'm assuming that not all boats travel the same distances. You could also use the overall direction of the trip as the angle between port A and port B. You've already accounted for speed by computing the mean, standard deviation, and other metrics. What you could also do is fit a polynomial to the speed, and use its coefficients as features. Perhaps certain boats have a steadier speed than other boats. Following the same thought, you could perhaps fit a polynomial describing the course of each boat and use its coefficients as features. View the course here as the change of angle. This should capture how much "zig zagging" a boat does. Of course, you'll need to fine-tune to pick the right polynomial. Good luck!
H: How can I use tf.argmax on a multi-dimensional tensor? I want to know how I can use tf.argmax on a multi-dimensional tensor in TensorFlow with a convolutional neural network. Below is a simple example to explain what I want exactly:: 1-D tensor example:: a = [0,1,2,3] logit_a = tf.one_hot(indices = a, depth = 4) out_a = tf.argmax(logit_a) the out_a will be like this [0,1,2,3] 2-D tensor example:: b = [[0,1,2,3],[4,5,6,7]] logit_b = tf.one_hot(indices = b, depth = 8) out_b = tf.argmax(logit_b) out_b will give me this:: array([[0, 0, 0, 0, 1, 0, 0, 0], [0, 0, 0, 0, 0, 1, 0, 0], [0, 0, 0, 0, 0, 0, 1, 0], [0, 0, 0, 0, 0, 0, 0, 1]]) but what I want is [0,1,2,3,4,5,6,7] or [[0,1,2,3],[4,5,6,7]] how can I do that?? AI: Your problem is that you are not specifying the axis along which the one-hot dimension should be inserted, so tf.one_hot defaults to adding it last, making logit_b of shape (2, 4, 8), when what you really want is shape (8, 2, 4) so that you can take the argmax over axis 0. See below: session = tf.InteractiveSession() a = tf.constant([[0,1,2,3],[4,5,6,7]]) logit_b = tf.one_hot(a, depth = 8, axis = 0) out_b = tf.argmax(logit_b, axis = 0) out_b.eval() #array([[0, 1, 2, 3], # [4, 5, 6, 7]])
H: Which methodologies can be used in machine learning research? I am investigating some machine learning algorithms (Perceptron and KNN) and I'm stuck on the methodology section of my report. I am evaluating the performance of the 2 algorithms. Which methodology should I use? AI: Typically, the methodology is to train both models on the same training data and then measure their predictive capability on data they have not seen: either a held-out validation/test set or cross-validation folds. Report a suitable metric for each algorithm (e.g. accuracy, or precision/recall if the classes are imbalanced); the algorithm with the best score on the held-out data wins.
H: Are parametric methods and supervised learning exactly the same? Parametric methods make explicit assumptions about the functional form of $f$, and supervised learning is pretty much the same, right? It's a way to build a statistical model for predicting, or estimating, an output based on one or more inputs. If they are the same, why would they have to create different names? Is there a reason behind this? AI: Parametric methods, in simple terms, assume the data follow a particular distribution. The most common example would be the Normal Distribution, where roughly 68 percent of the data lies within one standard deviation of the mean. The essence of such a distribution is the arrangement of values with respect to their mean. Similarly, other methods such as the Poisson Distribution etc. have their own unique modeling assumptions. Parametric estimation might have laid the foundation for some of the most vital parts of Machine Learning, but it is a mistake to think supervised learning is the same thing. Supervised learning may include approaches that fit the aforementioned parametric models, but this is not always the case: KNN, for instance, is supervised but non-parametric, since it makes no assumption about the functional form of $f$, while linear regression is supervised and parametric. More often, the data scatter is quite spread out, and it might not be just one parametric model being fit but a hybrid of more than one. Supervised learning also takes the error into account, which most parametric models don't consider unless it is incorporated manually. You could say that supervised learning is a broader and more flexible family of methods, of which parametric models are only one part.
H: Classifying encrypted images I know this question is rather broad but hopefully on topic. Are there useful references on classifying encrypted images? For example classifying cat/no cat on encrypted images. Is there efficient and accurate software for this task? Thank you! EDIT for clarification: What I mean is the following: We have an encryption method E. A classifier is trained on encrypted images with their labels. Then at run time an image is sent in encrypted form from the client to the server and classified. This is similar to: ML Confidential: Machine Learning on Encrypted Data https://eprint.iacr.org/2012/323.pdf AI: Very briefly, homomorphic encryption as used by the linked paper in the question works as follows (excuse ad-hoc notation). Define an encryption method $E(x,k)$ that takes some data $x$ and secret key $k$ that has the following properties: There is a decryption method $D( E(x,k), k ) = x$ (optionally the two $k$ here can be different) You can define operations that work on the encrypted data in a self-consistent manner, e.g. $\text{Add}_E(E(x,k),E(y,k)) = E(x+y,k)$ - note that the result is still encrypted, and you need to know $k$ in order to use method $D()$ and find out what $x+y$ is. You may also need to support operations like $\text{Add}_E(E(x,k),y) = E(x+y,k)$ e.g. where one of the values is a normal numeric variable - this might be useful for initialising a model - or using an existing trained model - but the catch is that when training you cannot keep these values unencrypted; once they are updated due to training results, they will also be encrypted. Note that no operation can output a decrypted value without using the key, otherwise the whole scheme is insecure, as it implies some backdoor to get decrypted data. For ML use, you need an encryption scheme E that supports multiple basic operations, e.g. $\text{Add}_E$, $\text{Subtract}_E$, $\text{Multiply}_E$, $\text{GreaterThan}_E$ etc. You need some minimal set that is enough to build a model. You can then build variant models that instead of performing arithmetic to sum/multiply etc numeric values, work with the abstract operations on encrypted data. The models would otherwise look just like their unencrypted counterparts - e.g. you could quickly build a linear regression using only a few types of operation. Given that you are replacing simple fast floating point add, multiply operations that are built into the processor with a more complex custom operation, this significantly affects performance compared to the unencrypted model. How much so depends on the encryption scheme, and how it is implemented. The paper shows, in section 5, timings on unencrypted vs encrypted data, and the encrypted version is several orders of magnitude slower. The paper was published in 2012, so it is possible that some improvements have been made here. However, on a deeper read of the subject I think that this is still at the stage of research proof-of-concept. I might be wrong, and there are nice workable implementations available that could be used in something as complex as a CNN, but I have not found anything. You also linked numer.ai from comments. Initially that looked interesting because surely they would have solved efficiency problems. But in fact their main competitions are using data obfuscation techniques, not encryption - a homomorphic encryption paper is linked from their main site, but it does not seem to be what they are using.
End users are writing very familiar-looking scripts that perform logistic regression etc using regular Python (no special operators imported) I think you should take a second look at your wider problem and analyse your threat model. What precisely are you trying to protect against? If it is about keeping your own operations separate from client data, then you might be looking into company process and auditing solutions, rather than purely technical (you may still want to add technical solutions such as disk-level encryption to protect customer's data at rest in you data centre, in case someone gets into the centre physically and just grabs a disk containing all the cat images) For instance, look at Cloud Security Alliance which as well as having a certification scheme, has analysis showing how their recommendations map to other schemes such as UK's ISO27001. Note that this is typically significant capital investment including 1+ year project to implement, and usually undertaken by mid-sized or larger companies, when they want to work with government or large corporate's data. However, it is probably a more reliable and maybe still cheaper route than trying to research and build a technical solution involving models that process encrypted data at this time.
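To make the $\text{Add}_E$ idea above concrete, here is a minimal sketch using the third-party python-paillier package (phe), which implements an additively homomorphic scheme. This is my own illustration rather than the scheme used in the linked paper, and the package choice is an assumption on my part.

```python
# Minimal sketch of additive homomorphic encryption (assumes `pip install phe`)
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

a = public_key.encrypt(3.5)   # E(3.5, k)
b = public_key.encrypt(1.5)   # E(1.5, k)

total = a + b                 # Add_E: works directly on ciphertexts, result stays encrypted
scaled = a * 2                # multiplying a ciphertext by a plaintext scalar is also supported

print(private_key.decrypt(total))   # 5.0 - only the key holder can see this
print(private_key.decrypt(scaled))  # 7.0
```

Paillier only provides addition and multiplication by plaintext values, so anything as complex as a deep network needs fully (or levelled) homomorphic schemes, which is exactly where the large performance gap described above comes from.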
H: What is pandorable? I have seen the term pandorable but I cannot find any accurate definition for that. What does that mean? AI: It's a neologism for "fitting for pandas"; compare with "pythonic". As stated in the lecture transcript: A sort of sub-language within Python, Pandas has its own set of idioms. We've alluded to some of these already, such as using vectorization whenever possible, and not using iterative loops if you don't need to. Several developers and users within the Panda's community have used the term pandorable for these idioms.
H: Limits of Hellinger distance values I am calculating Hellinger distance for different vectors. I initially assumed that the value returned by it in in the range of 0 to 1. However for the following two vectors I received Hellinger score as 1.0488088481701514, which is > 1. vector_1 = [0.0,0.5,0.7] vector_2 = [1.0,0.0,0.0] Now, I am curious to know the range of Hellinger distance values. Please explain me why that value exceeded 1. AI: It is bounded by unity, but your first vector does not encode a probability mass function, since 0.5 + 0.7 > 1.0. If the 0.7 had been 0.5 or the 0.5 a 0.3, the distance would have been 1.0 since the distributions are maximally separated, having no overlap.
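As a quick check, here is a small NumPy sketch of the Hellinger distance (my own helper, not from a library) that reproduces your 1.0488 value and shows the distance staying within [0, 1] once both inputs are valid probability mass functions:

```python
import numpy as np

def hellinger(p, q):
    # H(P, Q) = (1 / sqrt(2)) * || sqrt(P) - sqrt(Q) ||_2, bounded by 1 for valid PMFs
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2) / 2.0)

print(hellinger([0.0, 0.5, 0.7], [1.0, 0.0, 0.0]))  # ~1.0488: first vector sums to 1.2, not 1.0
print(hellinger([0.0, 0.5, 0.5], [1.0, 0.0, 0.0]))  # 1.0: disjoint support, both sum to 1
```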
H: R Recommender System for very sparse matrices required I am trying to build a recommender system based on a large and very sparse matrix. Dimensions of that matrix would approximately be 12000 x 37000, possibly with even more rows, up to 100000. However, this matrix is extremely sparse: with the 12000x37000 version, about 0.053% of the matrix is non-NA. I've tried SVD, but alas, to no avail. To ensure I have not caused any error during my proceedings: I created a data.table with unique triplets of "User" - "Item" - "Rating". I should mention "Rating" can stretch anywhere from 0 to about 150. Now, I applied dcast.data.table to the triplet table, corrected the problem with the first value becoming a column, and converted to a matrix. Now, I had a matrix with users as rows, items as columns and the rating as cell content. I split into a test and validation set, replaced "NA" with 0, subtracted row means for every row, applied propack.svd from the "svd" package to that matrix, multiplied the three matrices delivered by propack and added the row means (user means) back. After that, I compared the values from the validation set to the corresponding values in my prediction matrix... and no surprise, the root mean square error was horribly high, around 6-7 (the mean of non-NA values is around 4.5). I've tried multiple variants of normalisation too, but I just could not get the RMSE below 5.8, ever. Is there any way to build a viable item recommender system for this dataset? Possibly via arules or clustering? AI: I would look into the Soft Imputation method, which has an R implementation in the softImpute package. It uses iterative soft-thresholding to compute missing values. Calculations are done with a matrix class called "Incomplete" to deal with large sparse matrices, and it allows for quick scaling/centering of rows and columns. I've had good success using this to complete a 10,000 by 10,000 very sparse matrix, so I'd imagine it should do fairly well with your dataset.
H: Am I doing a log transformation of data correctly? I'm doing some exploratory data analysis on some data and I get these histograms: That looks like a candidate for a log transformation on the data, so I run the following Python code to transform the data: df["abv"].apply(np.log).hist() df["ibu"].apply(np.log).hist() plt.show() And I get this new plot of the transformed histograms: Am I correct that a log transform was ok to do in this case, and if so, what's the best way to interpret the results? AI: Yes, a log transform seems like a good choice here and makes the data easier to interpret. The overlap between these two distributions is really small, so just by looking at the plot you can say with high confidence that they are clearly different from each other.
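One caveat worth hedging on, since I cannot see your raw data: np.log is undefined for zero or negative values, so if a column can contain zeros a common workaround is log1p, i.e. log(1 + x). A minimal sketch reusing the df and plt from your snippet (the new column name is just illustrative):

```python
import numpy as np

# log1p avoids -inf when a value is exactly 0; for strictly positive data plain np.log is fine
df["ibu_log"] = np.log1p(df["ibu"])
df["ibu_log"].hist()
plt.show()
```

For interpretation, remember that equal distances on the log scale correspond to equal ratios on the original scale, so the transformed histogram is best read in multiplicative terms.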
H: Simple Q-Table Learning: Understanding Example Code I'm trying to follow a tutorial for Q-Table learning from this source, and am having difficulty understanding a small piece of the code. Here's the entire block: import gym import numpy as np env = gym.make('FrozenLake-v0') #Initialize table with all zeros Q = np.zeros([env.observation_space.n,env.action_space.n]) # Set learning parameters lr = .8 y = .95 num_episodes = 2000 #create lists to contain total rewards and steps per episode #jList = [] rList = [] for i in range(num_episodes): #Reset environment and get first new observation s = env.reset() rAll = 0 d = False j = 0 #The Q-Table learning algorithm while j < 99: j+=1 #Choose an action by greedily (with noise) picking from Q table a = np.argmax(Q[s,:] + np.random.randn(1,env.action_space.n)*(1./(i+1))) #Get new state and reward from environment s1,r,d,_ = env.step(a) #Update Q-Table with new knowledge Q[s,a] = Q[s,a] + lr*(r + y*np.max(Q[s1,:]) - Q[s,a]) rAll += r s = s1 if d == True: break #jList.append(j) rList.append(rAll) print "Score over time: " + str(sum(rList)/num_episodes) print "Final Q-Table Values" print Q The code runs well and I'm able to print my results, but here is where I'm having difficulties: a = np.argmax(Q[s,:] + np.random.randn(1,env.action_space.n)*(1./(i+1))) My question is, why are we multiplying by 1/(i+1)? Is this supposed to be an implementation of epsilon annealing? Any help is appreciated. AI: My question is, why are we multiplying by 1/(i+1)? Is this supposed to be an implementation of epsilon annealing? The code looks like a relatively ad-hoc* adjustment to ensure early exploration, and an alternative to $\epsilon$-greedy action choice. The 1/(i+1) factor is similar to decaying $\epsilon$, but not identical. $\epsilon$-greedy with the same decay factor might look like this: a = np.argmax(Q[s,:]) if epsilon/(1+math.sqrt(i)) > random.random(): a = random.randrange(0, env.action_space.n) The math.sqrt(i) is just a suggestion, but I feel that epsilon/(1+i) is probably too aggressive and would cut off exploration too quickly. It is not something I have seen before when studying Q-Learning (e.g. in David Silver's lectures or Sutton & Barto's book). However, Q-Learning is not predicated on using any specific action choice, it just needs enough exploration in the behaviour policy. For the given problem adding some noise to a greedy selection obviously works well enough. Technically for guaranteed convergence tabular Q-Learning needs infinite exploration over infinite time steps. The code as supplied does indeed do that because the noise is unbound from the Normal distribution. So there is always some small finite chance of selecting an action with a relatively low action-value estimate and refining that estimate later. However, the fast decay (1/episode number) and initial scaling factor for the noise are both hyperparameters that need tuning to the problem. You might prefer something more standard from the literature such as $\epsilon$-greedy, Gibbs sampling or upper-confidence-bound action selection (the example is quite similar to UCB, in that it adds to the Q-values before taking the max). * Perhaps the approach used in the example has a name (some variation of "Noisy Action Selection") but I don't know it, and could not find it on a quick search.
H: Keras loading images in incorrect format So I was working with the the vgg16 model for dogs vs cats classification and I noticed that keras is not loading images in correct color format. The code is as follows: import cv2 import matplotlib.pyplot as plt import numpy as np from PIL import Image from keras.preprocessing import image path='data/dogscats/sample/train/dogs/dog.1402.jpg' imgkeras=image.load_img(path) imgkeras=image.img_to_array(imgkeras) plt.imshow(imgkeras) plt.show() The output of the following code is Where as the original image is Can someone explain why is this happening? , also when the image is loaded through opencv and fed into vgg16 the predicted label is more accurate for this particular image than when it is loaded through keras as above,is the improper color format affecting that? AI: This is caused due to the img_to_array method which converts the image to a float32 array. x = np.asarray(img, dtype=K.floatx()) Matplotlib interprets NxMx3 uint8 array as a standard image (0..255 components) in which case there is no preprocessing. Otherwise the pixels are multiplied by 255(without checking the range) and then cast into uint8, which I guess leads to this behaviour. Check this:https://stackoverflow.com/questions/39925420/bizzare-matplotlib-behaviour-in-displaying-images-cast-as-floats To answer the second part of your question, I guess the imagenet competitors used OpenCV to load images in BGR format to train vgg16 and hence the pretrained weights work well with images opened in BGR format.
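A minimal workaround, assuming the same imports and variables as in your snippet, is to rescale or cast the float array before plotting so matplotlib receives a range it knows how to display:

```python
# img_to_array returns float32 values in [0, 255]; matplotlib expects floats in [0, 1]
plt.imshow(imgkeras / 255.0)
plt.show()

# ...or cast back to uint8, which matplotlib also displays correctly
plt.imshow(imgkeras.astype('uint8'))
plt.show()
```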
H: What to do after selecting a model with cross-validation? I have been building a neural network for classification. To select my best model. I have been using 10-Fold cross validation. and selected the network that gives the highest mean accuracy. Now that I have selected the best model, I want to use all the data I have to train this model because the amount of data I have is limited (I will merge training, dev and test data). My issue is that, when training with all the data, I don't know when to stop training. Training loss is not an indicator for sure. Usually, I have a development set that I use to monitor training. When the training loss does not improve anymore, I stop training. Any suggestions on how to supervise a model with only training data? In other words, how to tell when the network needs to stop? AI: When 'overtraining' is not a problem (as in it will not diverge if you use more time), just use all your data and the empirically found optimal hyper parameters. In case of neural networks this is not the case (although in my experience, a lot of architectures converge to a specific test error, and take a long time to end up diverging again). I see a few options that you could try: Most obvious one is keeping a (small) validation set around to use as indicator for early stopping (don't think of this as throwing away data, you still use it to train your network better) Use same weight initalization as one of your folds and run for the same amount of epochs, same initialization should make convergence rate more similar than new random initialization Keep all the cross validation models and use them in an ensemble instead of retraining the full model
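For the first option, the hold-out only needs to act as a stopping signal, so it can be small. A sketch assuming a compiled Keras model (the 10% split and patience value are arbitrary choices):

```python
from keras.callbacks import EarlyStopping

# stop once validation loss has not improved for 5 consecutive epochs
stopper = EarlyStopping(monitor='val_loss', patience=5)

model.fit(X, y,
          epochs=1000,            # an upper bound; training normally stops much earlier
          validation_split=0.1,   # small hold-out used purely as the stopping signal
          callbacks=[stopper])
```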
H: Explaination or Description of clusters after clustering After clustering, is there a way to explain the clusters? Or get the boundaries of the clusters? For example: If we have a data set of people's spending habits with columns for their spend in different categories like groceries, clothing, transportation, rent etc. And we applied a clustering algorithm (like k-means or agglomerative clustering) on it. Can we get descriptions of clusters, like: Cluster 1 contains people who spend More than \$500 on groceries Less than \$200 on transportation Cluster 2 contains people who spend Less than \$100 on rent Less than \$300 on transportation More than \$50 on transportation Basically I need an explanation which is meaningful to a layman user. AI: It depends on the clustering technique you use. Since you tagged this post with k-means I will assume this is what you are using. Cluster centers should already be somewhat informative for laymans, but since you should be/are scaling this can lose some of it's interpretation. What you could do is assign class labels to each sample based on in what cluster they ended up in. Then you could fit a multi-class decision tree to your data and use the decision rules for interpretation, like 60% of cluster 1 has $x_1 < 0.9$.
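A minimal sketch of that idea with scikit-learn, assuming a numeric spending matrix X with the columns from the question (export_text needs a reasonably recent sklearn version):

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ['groceries', 'clothing', 'transportation', 'rent']

# 1. cluster as usual
labels = KMeans(n_clusters=3, random_state=0).fit_predict(X)

# 2. fit a shallow tree that predicts the cluster label from the raw features
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)

# 3. print human-readable rules such as "groceries > 500 and transportation <= 200"
print(export_text(tree, feature_names=feature_names))
```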
H: Create similarity matrix I have a training set and a testing set of vectors. All the vectors are labeled. For each labeled vector in the testing set, there are 3 vectors in the training set with the same label. I'm using the cosine distance in order to calculate the similarity between the elements in the vectors. In the picture, we can see the results of applying the cosine distance similarity in a subset of 6 vectors from the testing set and 18 from the training set. Now, I would like to create a similarity matrix of the labels. So, in this case, I'd need a matrix of 6x6 dimension, but I am not sure how to transform this matrix of scores to a similarity matrix. AI: You have two scenarios here. The vectors with the same labels are close to each other. Then, what you can do is perhaps use the distance between the two closest/furthest elements in the groups with label x and the group with label y. You could also create an average vector for each label, and then simply get the distance between these. Your vectors are not close to each other This is not an issue. Here, what you'll want to see, is if for instance, for each vector with label x, there is a vector with label y close to it. To do that, you could use the Earth Mover's Distance which gives you a single score when you are trying to see how far two "groups of things" are.
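For the first scenario, a rough sketch (the array names are my own placeholders) is to average the three training vectors per label and then take pairwise cosine similarities, which gives the 6x6 matrix directly:

```python
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

# train_vectors: shape (18, d); train_labels: length-18 array holding the 6 label values
labels = np.unique(train_labels)
centroids = np.vstack([train_vectors[train_labels == lab].mean(axis=0) for lab in labels])

similarity_matrix = cosine_similarity(centroids)   # shape (6, 6), one row/column per label
```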
H: Why use the .idx data format? The MNIST handwritten digit dataset uses a file format .idx. What are the advantages of this file format over alternatives such as CSV, TSX and ODS? AI: Generally, you will find datasets being distributed in CSV format because of its simplicity and human-readable form, which you can ingest in any programming language with just the packages the language ships with. CSV suits tabular data, however, and MNIST is multidimensional image data rather than a table, which is one of the reasons the dataset is not provided in CSV format. Here is the quote from LeCun's website about storing the dataset in idx format: The data is stored in a very simple file format designed for storing vectors and multidimensional matrices. In terms of performance, binary file formats fare better than text formats like CSV or spreadsheet formats like ODS. The following are some of the binary file formats that are widely used: Avro format, Parquet format, Optimized Row Columnar (ORC), Protocol Buffers (protobuf). These file formats support data compression and store data type metadata to serialize and de-serialize data efficiently.
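For completeness, the idx layout is simple enough to parse with the standard library; this is a short sketch for the image files (the file name is the usual MNIST one and is assumed here):

```python
import struct
import numpy as np

def read_idx_images(path):
    # idx3-ubyte layout: big-endian magic number (2051), count, rows, cols, then raw pixel bytes
    with open(path, 'rb') as f:
        magic, n, rows, cols = struct.unpack('>IIII', f.read(16))
        assert magic == 2051
        return np.frombuffer(f.read(), dtype=np.uint8).reshape(n, rows, cols)

images = read_idx_images('train-images-idx3-ubyte')
print(images.shape)   # (60000, 28, 28) for the MNIST training images
```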
H: Why does an SVM model store the support vectors, and not just the separating hyperplane? In every explanation of SVMs, we're shown how training finds a hyperplane that best separates the data. Presumably then for inference, you just check which side of the plane a point is on. However, all the "disadvantages of SVMs" posts [1, 2] lament that SVM models are large and slow because they end up storing most of the data as support vectors. Why would SVMs store any of the data, rather than just the (coefficients of the) separating hyperplane? (And what is a "support vector" in the soft-margin case, when points of both classes are scattered on both sides of the hyperplane, anyway?) AI: The hyperplane is a linear combination of the support vectors. In the soft margin case, there is only a limited amount of slack; every input does not get to be support vector. In the nonlinear case, the separating hypersurface may be embedded in an infinite-dimensional space, making it impossible to store. To borrow from the Wikipedia article, the normal vector $w$ is given by $$w = \sum_i c_i y_i \phi(x_i)$$ where $\phi$ is the feature embedding function, and $c_i$ is a Lagrangian dual variable that is zero for points on the correct side of the margin. Instead, test points are classified through a kernel function $k(x_i,x_j) = \left< \phi(x_i), \phi(x_j) \right>$ like so: $$x \to \mathrm{sgn}(\left<w , \phi(x)\right> + b) \equiv \mathrm{sgn} \left( b+\sum_i c_i y_i k(x_i, x)\right)$$ Notice how we avoided explicitly calculating $w$.
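You can see both points directly in scikit-learn: a fitted kernel SVM stores the support vectors and their dual coefficients ($c_i y_i$) rather than an explicit $w$, and only the linear kernel exposes the hyperplane coefficients. A short sketch on toy data:

```python
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

rbf = SVC(kernel='rbf', C=1.0).fit(X, y)
print(rbf.support_vectors_.shape)   # (n_support_vectors, 5): stored training points
print(rbf.dual_coef_.shape)         # the c_i * y_i coefficients for those points

lin = SVC(kernel='linear').fit(X, y)
print(lin.coef_)                    # only the linear kernel can report an explicit w
```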
H: How to rank feature importance? If I trained a network using a Neural Network classifier, how can I know which feature was most important for predicting the target variable? I mean, how do I create a "feature ranking" among the features (from high importance value to low)? I have seen some literature about decision trees/AdaBoost but I am specifically interested in Neural Networks, especially for classification. To make it more clear, an example is shown in the figure. AI: The first solution sevo proposes is not feasible because of a third problem that was not mentioned. The first layer only learns a first representation of the input, which is used in later layers. Even if the absolute weights of $x_1$ are very big, if the later layers have small weights connected to these neurons the importance goes down. This is exactly why neural networks are considered difficult to interpret. The rest of the answer is useful, I just wanted to add this.
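A model-agnostic alternative that avoids reading the weights at all is permutation importance: shuffle one feature at a time on a hold-out set and measure how much the score drops. A rough sketch, assuming you already have a fitted classifier model with a score method and NumPy arrays X_val, y_val:

```python
import numpy as np

def permutation_importance(model, X_val, y_val, n_repeats=5, seed=0):
    rng = np.random.RandomState(seed)
    baseline = model.score(X_val, y_val)
    importances = np.zeros(X_val.shape[1])
    for j in range(X_val.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X_val.copy()
            rng.shuffle(X_perm[:, j])           # destroy the information in feature j only
            drops.append(baseline - model.score(X_perm, y_val))
        importances[j] = np.mean(drops)         # bigger drop = more important feature
    return importances
```

Ranking the features by these values gives the "high importance to low" ordering asked for, without having to untangle the network's weights.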
H: Early stopping and bounds Say I am training neural networks using a train set and set aside a validation set V. I obtain models h's after each epoch along with the validation losses(0-1 loss) $\hat{L}(h_1,V)$, $\hat{L}(h_2,V)$ ... if I use the early stopping rule suggested here(top answer). Is the resulting $\hat{L}(h_*,V)$ an unbiased estimate of the true loss? How can I bound the true loss using $\hat{L}(h_*,V)$ ? I'm guessing no for the first one since the stopping rule depends on the partition of my data set. Afaik the bounds that can be applied depends on the size of my hypothesis set and I'm not entirely sure if it's finite, countable or uncountable in this case. AI: Is the resulting $\hat{L}(h_*,V)$ an unbiased estimate of the true loss? No. You have taken multiple measurements, each with some uncertainty, and chosen the maximum or minimum value. How can I bound the true loss using $\hat{L}(h_*,V)$ ? In the general case you cannot. It will depend on how much over-fitting is occurring within the model on the training set, size of cv set, amount of times it has been used, and how similar the model's performance was on each use. There is also sampling bias in the cv set, and that interacts with the selection process. What is generally done if you need unbiased estimate at the end of production is a train/cv/test split. The cv set is used for model selection, and once you have a single model selected, you estimate its loss - or other key metric - on the test set. It is important to use the test set minimally and not in order to select models, if you want it to be an unbiased measure. Otherwise you repeat the problem. Another approach which maintains confidence in cv-based metrics is to use k-fold cross validation. Taking taking the mean of a metric in k-fold cross validation is still biased once you have used it a few times, but the bias is reduced somewhat. You can take that idea further with nested cross-validation, which allows you to get an unbiased estimate of model performance in a general fashion (i.e. using the same hyper-parameters) from more of your data.
H: How to transition between offline and online learning? I am training an RL agent on a time series (with TensorFlow in python) in the following way: to predict the quantity of interest at time period $t$, I feed a window of $W$ observations at time periods $[t-W,t)$. Throughout the training the window advances step by step until I have a minibatch of $M$ observations and rewards to train on. Repeat until you run out of historical data, that's one Epoch. I train on a few thousand epochs with small learning rate (the loss is very unstable). Eventually, I want to start pulling live data from the environment to make predictions. At this point, if I wanted to continue the training online, how should I deal with the epochs? There is no "running out of data" anymore. AI: Although RL algorithms can be run online, in practice this is not stable when learning off policy (as in Q-learning) and with a function approximator. To avoid this, new experience can be added to history and the agent learn from the history (called experience replay). You could think of this as a semi-online approach, since new data is immediately available to learn from, but depending on batch sizes and history size, it might not be used to alter parameters for a few time steps. Typically in RL systems like DQN, you would train for some randomly sampled batch between each action, out of some window of historical data (maybe all historical data, maybe last N steps). The amount of training you perform between actions is a hyper-parameter of your model, as is any sampling bias towards newer data. For example in the Atari game-playing paper by Deep Mind team, the agent sampled a mini-batch with 32 observations (state, action, reward, next state) to train the neural network on, in between each action, whilst playing the game online. The concept of an epoch does not occur in online learning. If you are using each epoch to report performance metrics, and want to continue using comparable numbers, then you can pick some number of training iterations instead. There are no fixed rules for this, but you might want to consider reporting the same statistics on similar number of iterations as you trained on historical data - e.g. if you had 10,000 training samples in your history and are now training online with a mini-batch size of 50 per time step, then report progress every 10,000/50 = 200 time steps.
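A minimal experience-replay loop looks something like the sketch below; env follows a gym-style API, while agent, the buffer size and the batch size are placeholders I made up for illustration:

```python
import random
from collections import deque

replay_buffer = deque(maxlen=100000)   # keep only the most recent experience
BATCH_SIZE = 32

state = env.reset()
while True:
    action = agent.act(state)                        # e.g. epsilon-greedy on the current Q
    next_state, reward, done, _ = env.step(action)
    replay_buffer.append((state, action, reward, next_state, done))

    if len(replay_buffer) >= BATCH_SIZE:
        batch = random.sample(replay_buffer, BATCH_SIZE)
        agent.train_on_batch(batch)                  # one (or a few) gradient steps per action

    state = env.reset() if done else next_state
```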
H: How to determine x and y in 2 dimensional K-means clustering? Recently, we were taught K-means clustering. I understood the basic idea of the algorithm and successfully implemented it for data with a single dimensional. Now we are told to implement it for two dimensional data. As far as I understood x and y can be two attributes of a dataset but our professor said otherwise. She said that the we have to determine x and y of an attribute in the data set to cluster the data. She used a simple 2D matrix as an example. This has got me confused. How can one determine x and y of an atttribute? Row number and column number seems silly to me to use it for distance calculation. So, my question is how does one determine x and y for 2-D k means clustering? As per this question the two attributes (weight and height) are used as x and y. Is this correct? AI: I see no problem with the example of clustering on two numerical attributes like height and weight like in the example. The only thing I can think of is somewhere lost in translation, your professor was trying to explain the concept of reducing many dimensions (attributes) to two and then clustering on those derived dimensions. That is a common technique for trying to visually spot clusters in a complex data set.
H: Method for predicting winner of call for tenders Introduction: Lately I've been looking into different machine learning methods to work around different business problems. By now I have a good, basic understanding of most regression and classification methods, and I'm able to use these methods to predict numeric values given other numeric values and/or simple categories (e.g. an employee's salary given age, years of experience and level of education) or a binary classification (e.g. will this employee leave the company based on the same variables). What I'm looking for: However, I haven't found the right method for the problem I initially wanted to solve, which involves predicting a non-numeric, non-binary value from a mix of numeric and categorical data. I'm not looking for an in-depth explanation of how to solve the exact problem, but merely advise on which techniques/methods to look into. Ideally something that could be done with R. The business problem: I have historical data on public tenders (i.e. public sector instutions buying goods/services from private contractors through calls for tenders). The data includes variables like: Orderer - i.e. who announced the tender (1 of ~150 municipalities/state insitutions) Type of procurement (1 or more of thousands of industrial classification codes) Estimated value of contract - A numeric value estimating the value of the contract (at a point before the winner is chosen). Winner - i.e. which contractor won the tender (1 of ~2000 private companies) What I want to do is predict the winner of a tender given the three other variables. It's obviously not a regression problem, and the classification methods I know seem inadequate in handling the problem too. The data are clean and streamlined (no alternate spelling of the different orderer/contractor names, etc.). Any ideas about what to look into? AI: Having the classifier try to predict one of 2000 possible values is going to be tough. A common approach is to either bucket the possible targets/labels or decompose the problem. For example: Instead of trying to predict the exact winning company, group the winners into similar groups and predict the group. For example, predict if the winner will be a large public company, small/medium business, or an independent consultant. The nature of procurement probably varies across the buyer and what is being bought. A model that is good at predicting which company will win the business of a Fortune 500 company probably has a different structure than who is good at winning the business of a small city. In a similar fashion, the competition for who is going to build a bridge is going to be different and not include the companies who will bid on implementing a new website. Partition your data into similar competitions and then try to predict the results. This will hopefully have the side effect of reducing the number of potential targets.
H: Why convolutions always use odd-numbers as filter size If we have a look to 90-99% of the papers published using a CNN (ConvNet). The vast majority of them use filter size of odd numbers:{1, 3, 5, 7} for the most used. This situation can lead to some problem: With these filter sizes, usually the convolution operation is not perfect with a padding of 2 (common padding) and some edges of the input_field get lost in the process... Question1: Why using only odd_numbers for convolutions filter sizes ? Question2: Is it actually a problem to omit a small part of the input_field during the convolution ? Why so/not ? AI: The convolution operation, simply put, is combination of element-wise product of two matrices. So long as these two matrices agree in dimensions, there shouldn't be a problem, and so I can understand the motivation behind your query. A.1. However, the intent of convolution is to encode source data matrix (entire image) in terms of a filter or kernel. More specifically, we are trying to encode the pixels in the neighborhood of anchor/source pixels. Have a look at the figure below: Typically, we consider every pixel of the source image as anchor/source pixel, but we are not constrained to do this. In fact, it is not uncommon to include a stride, where in we anchor/source pixels are separated by a specific number of pixels. Okay, so what is the source pixel? It is the anchor point at which the kernel is centered and we are encoding all the neighboring pixels, including the anchor/source pixel. Since, the kernel is symmetrically shaped (not symmetric in kernel values), there are equal number (n) of pixel on all sides (4- connectivity) of the anchor pixel. Therefore, whatever this number of pixels maybe, the length of each side of our symmetrically shaped kernel is 2*n+1 (each side of the anchor + the anchor pixel), and therefore filter/kernels are always odd sized. What if we decided to break with 'tradition' and used asymmetric kernels? You'd suffer aliasing errors, and so we don't do it. We consider the pixel to be the smallest entity, i.e. there is no sub-pixel concept here. A.2 The boundary problem is dealt with using different approaches: some ignore it, some zero pad it, some mirror reflect it. If you are not going to compute an inverse operation, i.e. deconvolution, and are not interested in perfect reconstruction of original image, then you don't care about either loss of information or injection of noise due to the boundary problem. Typically, the pooling operation (average pooling or max pooling) will remove your boundary artifacts anyway. So, feel free to ignore part of your 'input field', your pooling operation will do so for you. -- Zen of convolution: In the old-school signal processing domain, when an input signal was convolved or passed through a filter, there was no way of judging a-prior which components of the convolved/filtered response were relevant/informative and which were not. Consequently, the aim was to preserve signal components (all of it) in these transformations. These signal components are information. Some components are more informative than others. The only reason for this is that we are interested in extracting higher-level information; Information pertinent towards some semantic classes. Accordingly, those signal components that do not provide the information we are specifically interested in can be pruned out. Therefore, unlike old-school dogmas about convolution/filtering, we are free to pool/prune the convolution response as we feel like. 
The way we feel like doing so is to rigorously remove all data components that are not contributing towards improving our statistical model.
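The arithmetic behind A.1 and A.2 is easy to sanity-check: for an odd kernel of size $k = 2n+1$, a symmetric padding of $p = (k-1)/2$ keeps the output the same size as the input, while an even kernel cannot be padded symmetrically without shrinking or shifting the output. A tiny sketch (pure arithmetic, no framework assumed):

```python
def conv_output_size(input_size, kernel_size, padding, stride=1):
    # standard formula: floor((in + 2p - k) / s) + 1
    return (input_size + 2 * padding - kernel_size) // stride + 1

for k in (3, 5, 7):                                            # odd kernels
    print(k, conv_output_size(64, k, padding=(k - 1) // 2))    # -> 64 every time ("same" padding)

print(conv_output_size(64, 4, padding=1))                      # even kernel: 63, output shrinks
```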
H: Discovering cross category sales using transactions history (Clustering?) I have a set of sales transaction data containing more than 1 item purchased. Every item sold has a category. I would like figure out which categories are most commonly ordered together. The data is more or less like this: Transaction ID|Item ID | Sales Quantity|Item Category 1 Apple 1 Fruit 1 Banana 1 Fruit 1 Carrot 2 Vegetable 2 Carrot 1 Vegetable 2 Ice Cream 2 Dessert 3 Squash 2 Vegetable 3 Chocolate 2 Dessert 4 Apple 1 Fruit 4 Carrot 1 Vegetable 4 Doughnut 1 Dessert Just eyeballing above you can see that there are a high amount of vegetable-dessert pairings on the same transaction. But now imagine that we have 250,000+ transactions in the data set and dozens of categories. I'm looking to discover cross category sales only. Not interested in Apples and Bananas (Fruit-Fruit) pairs. I think I can teach myself how to code the analysis, but I'm just not sure what this is called or what to Google. Any thoughts? AI: Note that your data can be re-ordered to look like this: Transaction ID | Items 1 {Apple, Banana, Carrot} 2 {Carrot, Ice Cream} This kind of data set is trivial for association rule mining. A very simple and well-known algorithm of this kind is the Apriori. I'm certain there are packages for executing this algorithm in R. For the restriction of "discover cross category sales only", you can just post-prune the generated rules, ie. let the algorithm generated inter-category sales and then remove those later, which should be trivial.
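One possible route in Python is the mlxtend implementation of Apriori (assuming that package is available). The trick for your requirement is to build each transaction from the category column rather than the item column, so Fruit-Fruit pairs collapse to a single entry and the surviving rules are cross-category by construction:

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import apriori, association_rules

# one list of categories per transaction, e.g. built with df.groupby('Transaction ID')
transactions = [['Fruit', 'Vegetable'], ['Vegetable', 'Dessert'],
                ['Vegetable', 'Dessert'], ['Fruit', 'Vegetable', 'Dessert']]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)

frequent = apriori(onehot, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.5)
print(rules[['antecedents', 'consequents', 'support', 'confidence']])
```

The support and confidence thresholds above are arbitrary starting points; tune them to your 250,000 transactions.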
H: What's the math for real world back-propagation? Considering a simple ANN: $$x \rightarrow f=(U_{m\times n}x^T)^T \rightarrow g = g(f) \rightarrow h = (V_{p \times m}g^T)^T \rightarrow L = L(h,y) $$ where $x\in\mathbb{R}^n$, $U$ and $V$ are matrices, $g$ is the point-wise sigmoid function, $L$ returns a real number indicating the loss by comparing the output $h$ with target $y$, and finally $\rightarrow$ represents data flow. To minimize $L$ over $U$ and $V$ using gradient descent, we need to know $\frac{\partial L}{\partial U_{ij}}$ and $\frac{\partial L}{\partial V_{ij}}$. I know two ways to do this: do the differentiation point-wise, though I have a hard time figuring out how to vectorize it; or flatten $U$ and $V$ into a row vector and use multivariate calculus (takes a vector, yields a vector) to do the differentiation. For the purpose of a tutorial or illustration, the above two methods might suffice, but if you really want to implement back-prop by hand in the real world, what math would you use to do the derivative? I mean, is there a branch or method in math that teaches you how to take the derivative of a vector-valued function of matrices? AI: There is Matrix Calculus (and I would recommend the very useful Matrix Cookbook as a bookmark to keep), but for the most part, when it comes to derivatives, it just boils down to pointwise differentiation and keeping your dimensionalities in check. You might also want to look up Autodifferentiation. This is sort of a generalisation of the Chain Rule, such that it's possible to decompose any composite function, i.e. $a(x) = f(g(x))$, and calculate the gradient of the loss with respect to $g$ as a function of the gradient of the loss with respect to $f$. This means that for every operation in your neural network, you can give it the gradient of the operation that "consumes" it, and it'll calculate its own gradient and propagate the error backwards (hence back-propagation).
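To make the "keep your dimensionalities in check" point concrete, here is a hand-derived NumPy sketch for exactly the network in the question with a squared-error loss, treating $x$, $g$ and $h$ as row vectors (derived by me, so worth verifying with a finite-difference check):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, m, p = 4, 3, 2
rng = np.random.RandomState(0)
x, y = rng.randn(1, n), rng.randn(1, p)    # row vectors
U, V = rng.randn(m, n), rng.randn(p, m)

# forward pass
f = x @ U.T                  # (1, m)
g = sigmoid(f)               # (1, m)
h = g @ V.T                  # (1, p)
L = 0.5 * np.sum((h - y) ** 2)

# backward pass: each gradient has the same shape as the parameter it updates
dL_dh = h - y                # (1, p)
dL_dV = dL_dh.T @ g          # (p, m)  -> dL/dV_ij
dL_dg = dL_dh @ V            # (1, m)
dL_df = dL_dg * g * (1 - g)  # (1, m)  point-wise sigmoid derivative
dL_dU = dL_df.T @ x          # (m, n)  -> dL/dU_ij
```

Matching each gradient's shape to its parameter, as in the comments, is the practical version of the matrix-calculus bookkeeping described above.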
H: What is the order of elements in an image in python? I have a set of images which are loaded from an h5 file. I checked their dimensions and I get (209, 64, 64, 3). read = h5py.File('datasets/train_catvnoncat.h5', 'r') read['train_set_x'].shape (209, 64, 64, 3) it means that there are 209 images but the point that I cannot understand is that, what is (64, 64, 3)? I have used the following code for plotting: import matplotlib.pyplot as plt plt.imshow(read['train_set_x'][1]) plt.show() and I get a colored image which is 64 by 64. before this, I thought for (., ., .) shapes, the second number specifies the number of lines and the third one specifies the number of rows. also the first one specifies the number of the mentioned (row and column) arrays. My question is that in numpy if you have a three dimensional array, for accessing rows and columns you have to change the second and third entries in the indexing operator; Why this is different in images and rows and columns are arranged differently in images. Shouldn't it be (3, 64, 64)? AI: The [64, 64, 3] shape you have found is a common convention to represent a colour image in (x, y, colour_channel) dimensions. The key word here is convention - there is no inherently preferred way to represent a colour image in terms of fundamental maths or computing needs, and even within Python you will find multiple conventions, varying in the ordering within the dimension - e.g. OpenCV uses (x, y, channel) convention for the shape, but has channels in order BGR - so channel 0 is blue - whilst most other libraries will use RGB ordering (ignoring for now the alternative colour spaces). My question is that in numpy if you have a three dimensional array, for accessing rows and columns you have to change the second and third entries in the indexing operator When you have a 3-dimensional array, what you decide to call "rows" and "columns" is also a convention. It depends partly on what that array represents, and there is no single way to visualise the contents.
H: Pandas how to fill missing values in one column if the values in another column are equal I have a dataframe where I need to fill in the missing values in one column (paid_date) by using the values from rows with the same value in a different column (id). There is guaranteed to be no more than 1 non-null value in the paid_date column per id value and the non-null value will always come before the null values. For example: index id paid_date 6 25220 2017-01-05 00:00:00 9 30847 None 11 30847 None 14 29369 2017-06-21 00:00:00 17 31232 2017-08-31 00:00:00 20 26196 2017-02-20 00:00:00 21 26196 None 24 28303 2017-05-09 00:00:00 25 28303 None How can I replace the None values in the paid_date column if there is a row with a paid_date with a matching id? index id paid_date 6 25220 2017-01-05 00:00:00 9 30847 None 11 30847 None 14 29369 2017-06-21 00:00:00 17 31232 2017-08-31 00:00:00 20 26196 2017-02-20 00:00:00 21 26196 2017-02-20 00:00:00 24 28303 2017-05-09 00:00:00 25 28303 2017-05-09 00:00:00 I tried using fillna with a dictionary that mapped ids to paid_dates and I tried using pd.Series.map but neither worked. paid_dates = df[pd.notnull(df['paid_date'])] pds = pd.Series(data=paid_dates['paid_date'].values, index=paid_dates['id']) pds_dict = pds.to_dict() # doesn't work df['paid_date'].fillna(value=pds_dict) # also doesn't work df['paid_date'].map(pds_dict) AI: There is guaranteed to be no more than 1 non-null value in the paid_date column per id value and the non-null value will always come before the null values. In [117]: df['paid_date'] = pd.to_datetime(df['paid_date'], errors='coerce') In [118]: df Out[118]: index id paid_date 0 6 25220 2017-01-05 1 9 30847 NaT 2 11 30847 NaT 3 14 29369 2017-06-21 4 17 31232 2017-08-31 5 20 26196 2017-02-20 6 21 26196 NaT 7 24 28303 2017-05-09 8 25 28303 NaT In [119]: df.groupby('id').ffill() Out[119]: index id paid_date 0 6 25220 2017-01-05 1 9 30847 NaT 2 11 30847 NaT 3 14 29369 2017-06-21 4 17 31232 2017-08-31 5 20 26196 2017-02-20 6 21 26196 2017-02-20 7 24 28303 2017-05-09 8 25 28303 2017-05-09 If it's not guaranteed, then we can do this: In [111]: df['paid_date'] = pd.to_datetime(df['paid_date'], errors='coerce') In [112]: df Out[112]: index id paid_date 0 6 25220 2017-01-05 1 9 30847 NaT 2 11 30847 NaT 3 14 29369 2017-06-21 4 17 31232 2017-08-31 5 20 26196 2017-02-20 6 21 26196 NaT 7 24 28303 2017-05-09 8 25 28303 NaT In [113]: df.loc[df.paid_date.isnull(), 'paid_date'] = \ df.loc[df.paid_date.isnull(), 'id'].map(df.loc[df.paid_date.notnull()] \ .set_index('id')['paid_date']) In [114]: df Out[114]: index id paid_date 0 6 25220 2017-01-05 1 9 30847 NaT 2 11 30847 NaT 3 14 29369 2017-06-21 4 17 31232 2017-08-31 5 20 26196 2017-02-20 6 21 26196 2017-02-20 7 24 28303 2017-05-09 8 25 28303 2017-05-09
H: why use a perceptron when it seems multiple IF statements do the same thing If we are to stick with an X and Y axis, the X axis being Time and the Y axis being test scores, where more time equates to higher test scores, you can use a binary classification algorithm to predict success. Wouldn't 2 if statements do the same thing? If (time > someValue) User will probably pass Another scenario: I have papayas, and they have two labels, squishiness and color. If the squishiness is greater than 7 //on a scale of 1-10 AND the color is green Then it is a good papaya Else It isn't ripe. I don't understand the value of a perceptron in these scenarios. AI: Machine learning (perceptrons or not) is all about automatically finding generic but correct rules, be it in the form of if-else rules, encoded formulas, closest occurrences, or others. The ML algorithm is just a way to (automatically) find this knowledge, whatever the representation may be. In other words, you use it to find the someValue of your example, based on your data. You don't need ML if you can represent such knowledge yourself.
H: Gradient descent with vector-valued loss My understanding of gradient descent as an optimizer for a neural network is as follows: Let $w$ be a vector of weights encoding a configuration of the network, and $l : w \mapsto \textrm{network loss}$ a function which calculates the loss over some batch of data given this configuration. Then, the weight update is $-\alpha \nabla l(w)$, where $\alpha$ is the learning rate, since the vector $\nabla l(w)$ represents the direction of greatest increase in the neighborhood around $w$. I see clearly that this works for $l(w) \in \mathbb{R}$, but am wondering how it generalizes to vector-valued loss functions, i.e. $l(w) \in \mathbb{R}^n$ for $n > 1$. AI: I see clearly that this works for $l(w) \in \mathbb{R}$, but am wondering how it generalizes to vector-valued loss functions, i.e. $l(w) \in \mathbb{R}^n$ for $n > 1$. Generally in neural network optimisers it does not*, because it is not possible to define what optimising a multi-value function means whilst keeping the values separate. If you have a multi-valued loss function, you will need to reduce it to a single value in order to optimise. When a neural network has multiple outputs, then typically the loss function that is optimised is a (possibly weighted) sum of the individual loss functions calculated from each prediction/ground truth pair in the output vector. If your loss function is naturally a vector, then you must choose some reduction of it to scalar value e.g. you can minimise the magnitude or maximise some dot-product of a vector, but you cannot "minimise a vector". * There is a useful definition of multi-objective optimisation, which effectively finds multiple sets of parameters that cannot be improved upon (for a very specific definition of optimality called Pareto optimality). I do not think it is commonly used in neural network frameworks such as TensorFlow. Instead I suspect that passing a vector loss function into TensorFlow optimiser will cause it to optimise a simple sum of vector components.
H: Improve Pandas dataframe filtering speed I have a dataset with 19 columns and about 250k rows. I have worked with bigger datasets, but this time, Pandas decided to play with my nerves. I tried to split the original dataset into 3 sub-dataframes based on some simple rules. However, it takes a long time to execute the code. About 15-20 seconds just for the filtering. Any alternative way that will improve the performance of the code? import pandas as pd #read dataset df = pd.read_csv('myData.csv') #create a dataframe with col1 10 and col2 <= 15 df1 = df[(df.col1 == 10) & (df.col2 <= 15)] df = df[~df.isin(df1)].dropna() #create a dataframe with col3 7 and col4 >= 4 df2 = df[(df.col3 == 7) & (df.col4 >= 4)] df = df[~df.isin(df2)].dropna() In the end, I have the df1, df2, df dataframes with the filtered data. AI: The concept to understand is that the conditional is actually a vector. So, you can simply define the conditions, and then combine them logically, like: condition1 = (df.col1 == 10) & (df.col2 <= 15) condition2 = (df.col3 == 7) & (df.col4 >= 4) # at this point, condition1 and condition2 are vectors of bools df1 = df[condition1] df2 = df[condition2 & ~condition1] df = df[~ (condition1 | condition2)] This will be considerable faster as it only evaluates the conditional once. Then it uses them to perform indexed lookup to create the new smaller dataframes.
H: Why large weights are prohibited in neural networks? Why weights with large values cause neural networks to be overfitted, and consequently we use approaches like regularization to neutralize weights with large values? AI: Many strategies used in machine learning are explicitly designed to reduce the test error, possibly at the expense of increased training error. Generally, regularization is any modification we make to a learning algorithm that is intended to reduce its generalization error but not its training error. The L2 parameter norm penalty commonly known as Weight decay is one of the simplest and most common kinds of regularization technique which forces the weights to become smaller, by adding a parameter norm $Ω(θ) = 1/2 || w||^{2}_2$ penalty to the objective function. For example, in linear regression, this gives us solutions that have a smaller slope, or put weight on fewer of the features. In other words, even though the model is capable of representing functions with much more complicated shape, weight decay has encouraged it to use a simpler function described by smaller coefficients. Intuitively, in the feature space, only directions along which the parameters contribute significantly to reducing the objective function are preserved relatively intact. In directions that do not contribute to reducing the objective function, movement in this direction will not significantly increase the gradient.So, Components of the weight vector corresponding to such unimportant directions are decayed away through the use of the regularization throughout training. Another simple explanation is when your weights are large, they are more sensitive to small noises in the input data. So, when a small noise is propagated through your network with large weights, it produces much different value in the output layer of the NN rather than a network with small weights. Note that weight decay is not the only regularization technique. In the past few years, some other approaches have been introduced such as Dropout, Bagging, Early Stop, and Parameter Sharing which work very well in NNs. There are other interesting findings in this very rich chapter.
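In code, weight decay is just an extra term in the loss whose gradient pulls every weight a little towards zero on each update; a bare-bones sketch (the lambda value is an arbitrary choice):

```python
import numpy as np

lam = 1e-3   # regularisation strength (a hyperparameter)

def loss_with_l2(data_loss, w):
    return data_loss + 0.5 * lam * np.sum(w ** 2)   # L(w) + (lambda/2) * ||w||^2

def grad_with_l2(data_grad, w):
    return data_grad + lam * w                      # extra shrinkage of each weight towards 0

# inside a gradient-descent step:
# w -= learning_rate * grad_with_l2(dL_dw, w)
```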
H: Is it possible to implement a classifier according to quarters? What about missing data? I have a dataset with several individuals and features. I'm studying behavior over the year (for instance, averages or iterations of money gained, jobs, etc.). My ultimate goal is to implement a classifier since I have a specific feature for every person (which is equal to 0, 1, or 2). When I first tried to implement a SVM, I ended up with bad results because I did not have enough data / features: I have too many number 1 individuals and not enough 0 and 2's, so my classifier almost always put people into category 1. Therefore, I tried to increase my number of lines by separating my data into quarters (i.e JAN, FEV, MAR, then APR, MAY, JUN, then JUL, AUG, SEPT, and finally OCT, NOV, DEC) I was wondering two things: Would that be a good idea? Do I have to be cautious of a particular hypothesis that could impact my results? In case it is a good idea, I have some data available for some quarters of the year but sometimes it is missing (for instance let's imagine I don't have "Age" available for my last quarter) ; do I have to drop the feature ? Or would it be wiser to abandon the last quarter ? Or is it possible to make the classifier work despite that lack of information without actually deleting anything ? AI: Would that be a good idea? That is hard to tell from your description. It is not an immediately bad idea. If it results in a better classifier (according to cross-validation), then it has probably worked. The main things that would concern me about splitting behaviour data by quarter and treating as independent are: Your data samples will very likely be correlated when they share a person. You can work around this by careful splitting between training and cross-validation / test sets. Do not make a fully random split, but split by person - any individuals records should appear only in one of the training, cross-validation or test sets (assuming your goal is to take similar data in production from users who are not in your current database, and predict their class). There could be seasonal variation in the records that reduce the effectiveness of the split. So a "type 1" person's records in APR-JUN might look like a "type 0" person from JAN-MAR. How will you receive data in production - when you want to classify new users? If you only want to work on single-quarter data, then your new classifier is fine. If you have more data, you have to deal with your classifier maybe predicting different target variable for the same person depending on the quarter. You could combine these in some way - but if you do so, you should also do this in test to see what the impact of doing this is, which may be counter-productive (you end up with the same number of test examples as if you had not done the split). It might also be OK, perhaps it will add some regularisation. Do I have to be cautious of a particular hypothesis that could impact my results? You have to be very cautious about testing your classifier, because you could get data leakage from the cross-validation and test sets to the training set, which would make you think the classifier is generalising well when in fact it is not. The fix for this described above - split by person when deciding train/cv/test split. I have some data available for some quarters of the year but sometimes it is missing (for instance let's imagine I don't have "Age" available for my last quarter) ; do I have to drop the feature ? 
Handling missing data is a complicated topic in its own right, there are lots of options. You can start with: If data is missing at random (i.e. there is no reason to suspect it is related to the target variable, or only impacts certain types of record), you can substitute the mean value of that feature from the training set, or impute it based on a statistical model from the other features. If data is missing for reasons that might impact the target variable, then you should give that information to the classifier, because it might be an important feature in its own right. You can take the mean or more complex imputed value as before for the original feature, but also you should add a new boolean feature "feature X was missing". Whether or not you should use the partial data or drop it is not possible to say in general. If you are not sure, then try both and pick the version with the best cross-validation result.
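A short sketch of the "impute plus missing-flag" idea with scikit-learn (assuming a reasonably recent version, since add_indicator is a newer parameter); the toy Age/income values here are made up:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[25.0, 1200.0],
              [np.nan, 900.0],     # Age missing for this person/quarter
              [40.0, 1500.0]])

imputer = SimpleImputer(strategy='mean', add_indicator=True)
X_imputed = imputer.fit_transform(X)
# output keeps the (mean-)imputed columns and appends a 0/1 column flagging where Age was missing
```

Fit the imputer on the training people only and reuse it on the cross-validation and test splits, so the imputation itself does not leak information across the person-based split described above.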
H: In R's randomForest, is predict() non-deterministic? Training a random forest model is inherently non-deterministic (absent control over the random number generator), but is predict() also non-deterministic? That is, if I construct randomForest (with an odd number for ntree per the caveat in the doc) and save an .rda, will loading that .rda give me identical results given identical inputs? I tried answering this myself by looking through https://github.com/cran/randomForest , and it seems that nobody along the predict() path is calling for a random number, but I'm very new to R and rather rusty with C and may be missing something. Pointers into the code or the docs will be appreciated. AI: The model will not change unless you re-train it. The same input sample should always have the same output value for a given model. changing the seed, saving and reloading, etc should have no impact on the results. The training itself is indeed non-deterministic, predict is not. library(randomForest) # sample 80% of data to train split = sample(1:nrow(iris), floor(nrow(iris) * 0.8)) df_train = iris[split,] df_test = iris[-split,] # rf model mod = randomForest(Species ~ ., df_train) # predictions set.seed(123) res1 = predict(mod, df_test) set.seed(999) res2 = predict(mod, df_test) identical(res1, res2)
H: Matrix class not being recognized I have a data frame, and I am only interested in the numerical variables (variable class is double): Example A B C D E 0.13 2.22 3.44 3.30 6.54 3.55 4.23 0.43 5.33 3.55 ... I want to build a linear regression xgboost model, so I converted the data frame to a dense matrix. I took out all categorical variables from my data, and am left with the five variables of interest. But I can't seem to be able to use the dense matrix, despite it being a type of input that xgboost accepts. See here. I am instead getting this error: Error "Error in xgb.DMatrix(train_mat[, -n], label = train_mat[, n]) : [23:57:13] amalgamation/../dmlc-core/src/io/local_filesys.cc:66: LocalFileSystem.GetPathInfo 2007 Error:No such file or directory". Code train_mat <- as.matrix(subset(train, select = -c(Genre, Publisher, Rank, Name, Platform))) lm.boosted <- xgb.train(data = train_mat, booster= "gblinear", max_depth=3, nthread=1, nrounds = 2, eval_metric="error", eval_metric= "rmse", objective= "reg:linear") I tried to convert to a xgb.DMatrix, but I am not clear on what it's asking for to do the conversion. train_mat <- xgb.DMatrix(data = train_mat, label= train$Global_Sales) How do I input data into xgboost? AI: I got some help from the xgboost issues page on Github; thread here. Made the following change to resolve the error: train_mat <- data.matrix(subset(train, select = -c(Genre, Publisher, Rank, Name, Platform)))
H: Why exactly using a test set for model evaluation is a bad idea? I don't understand why using the test set for model evaluation is a bad idea. I completely understand why you should not use your test set to train your model (because in that case, you would be memorizing and you just cannot tell whether your model will generalize well or not if you don't have a separate test set). But why is it that simply using your test set to test (not train) your model is bad? You won't be changing any parameters of the model (because you are not training). For instance, at the end of this video, Luis says we are breaking what he calls the "Golden rule" (i.e. never use your testing data for training). However, all I can see he is doing is using the test set to verify which model performs better to then be able to make a selection on which model he will use in the end. AI: Choosing a variation of your model is a form of training. Just because you are not using gradient descent or whatever training process is core to a model class, does not mean your parameters are not influenced by this selection process. If you generated many thousands of models with random parameters and picked the best performing one on a data set, then this is also form of training. In fact, this is a valid way of optimising, called Random Search - it is somewhat inefficient for large models, but it still works. You may generate hundreds of models using the training data and using gradient descent or boosting (depending on what the training algorithm uses in your model), then select the one that performs best on cross-validation. In that case, then as well as the selection process that you intend to use this for, you are also effectively using the cv data set to fine-tune the training from the first step, using something quite similar to random search. The main benefit of having two stages to testing (cv and test sets), is that you will get an unbiased estimate of model performance from the test set. This is considered important enough that it has become standard practice.
H: Deep learning - aesthetics data modelling I want to train neural network on aesthetics. I am getting confused on how to go about for training data. Assume, I have large data set of landscapes, portraits, wildlife etc which are aesthetic according to humans. But, I want to train them for the quality, the kind of colours involved, contrast levels, background blur etc. How do I train images for this criteria? Is there a way for doing this by unsupervised learning? AI: Modeling aesthetics in media is an example of ordinal classification. One of the most actively maintained datasets for this can be found is Jen Aesthetics A relatively recent paper using deep learning towards aesthetics modeling is this Prior to deep learning era, research groups were trying to translate methods/guidelines used in the photography community to create/capture good quality pictures. There are several guidelines that you can explore with a bit of search online. One popular example is the 'rule of thirds'. Here the primary subject should not be centered in the image but offset and ideally centered at the intersection of 1/3 and 2/3 horizontal and vertical lines. This is easy to translate into an algorithm: use salient object recognition or visual attention detection and measure the distance of the center of the salient/attention patch from the 4 rule-of-thirds points. Use this off-set as a feature. The closer the salient patch is to any one of the rule-of-third points, the higher the aesthetic ordinal score for that image. This is another good paper that explores what makes images popular. Some researchers have also used the tags or descriptions of photos as features. The objective here is to learn an association between lexical features and image aesthetics. They have sourced their data from online repositories like Digital Photography Challenge. This subjective task is needless to say very complex. If you plan to address it, I'd recommend beginning with a clear definition of the context within which you aim to address aesthetics. Ideally, you'd like to map any given image (media) to some value in [0, ..., 1] $\in \mathcal{R}$. However, this is very difficult unless you have access to a lot of training data. I suggest trying instead to simplify the problem. If you can reliably map images to just two classes, good aesthetics and bad aesthetics. You can successively generalize from binary classification to full fledged multi-class ordinal classification, for which you'll very likely have to keep increasing the depth of your CNN. Good luck! Since, there is more to aesthetics than meets the eye! :-)
H: Do xgboost and random forests in general handle multiple splits of the same numeric feature in a single branch? For example, let's say that the age (say $x$) of a person for $12< x< 25$ can be used to predict computer usage to a high degree of certainty. In a decision tree, this could be represented by a split of $x<25$ followed by a split of $x>12$. Does xgboost, or decision tree learning algorithms in general, handle multiple splits within the same branch? AI: Yes, it can handle multiple splits within the same branch. A decision tree model can use the same feature as many times as optimally needed. You can see an example in the sklearn doc. The petal-length feature is used multiple times in the same branch.
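A minimal sketch (synthetic data, illustrative thresholds) showing a single sklearn tree cutting on the same feature twice to capture an interval like $12 < x < 25$:

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
age = rng.uniform(0, 60, size=(500, 1))
# Target is 1 exactly when 12 < age < 25, so the tree needs two cuts on the same feature.
y = ((age[:, 0] > 12) & (age[:, 0] < 25)).astype(int)

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(age, y)
# The printed rules show 'age' used at both levels of the same branch.
print(export_text(tree, feature_names=["age"]))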
H: question about `sklearn.ensemble.BaggingClassifier` I am experimenting with BaggingClassifier, but I fail to get the expected functionality. Basically, the BaggingClassifier should draw (bootstrapping) a new data set with replacement. For example, the following code should generate a new bootstrapped sample of the same size as the original data set: import sklearn.datasets as ds import numpy as np from sklearn.ensemble import BaggingClassifier from sklearn.linear_model import LogisticRegression X, y = ds.load_iris().data, ds.load_iris().target bag = BaggingClassifier(base_estimator=LogisticRegression(), n_estimators=100, max_samples=1.0, bootstrap=True, n_jobs=1) bag.fit(X, y) print(X[bag.estimators_samples_[0]].shape) >> 95 (or any other number close to 95). Naively, I would expect to get a bootstrapped sample of the same size as the original one (150), but with some random repetition of rows. However, I get a smaller sample size with unique rows. That's strange. What's wrong here? AI: Found the answer hiding in lines 93-100 of the bagging.py file. Here is what I understand - the bootstrapping process works in three steps: Calculate the number of samples to train each estimator on (the max_samples variable in the bagging.py code). In your case it is $1.0 * X.shape[0] = 150$. Select, with repetition, the needed max_samples (as calculated in the previous step). The selection is done using the randint function, and it generates an array of indices into X. A given index can appear in this array more than once. In order to account for indices (samples) that were selected more than once, a weights vector is passed into the base estimator's fit function. So, for example, a sample that was selected twice will have weight $2$ and it will have the desired impact on the fitting algorithm. To my understanding, one can use estimators_samples_ only to find out which samples were included, and not how many times each of them was selected.
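A minimal sketch of the mechanism described above, written in plain NumPy rather than copied from sklearn internals: draw indices with replacement, then turn the repetition counts into sample weights that a base estimator can consume via sample_weight:

import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
rng = np.random.RandomState(0)

# Draw n indices with replacement (the bootstrap); duplicates are expected.
indices = rng.randint(0, X.shape[0], X.shape[0])

# Count how many times each sample was drawn and use that count as its weight.
sample_weight = np.bincount(indices, minlength=X.shape[0]).astype(float)
print("unique samples drawn:", (sample_weight > 0).sum())  # typically around 95 out of 150

# Fitting on the full X with these weights is equivalent to fitting on the repeated bootstrap sample.
LogisticRegression(max_iter=1000).fit(X, y, sample_weight=sample_weight)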
H: Should I use GPU or CPU for inference? I'm running a deep learning neural network that has been trained on a GPU. I now want to deploy this to multiple hosts for inference. The question is: what are the conditions for deciding whether I should use GPUs or CPUs for inference? Adding more details from comments below. I'm new to this so guidance is appreciated. Memory: GPU is K80 Framework: CUDA and cuDNN Data size per workload: 20G Computing nodes to consume: one per job, although would like to consider a scale option Cost: I can afford a GPU option if the reasons make sense Deployment: Running on own hosted bare metal servers, not in the cloud. Right now I'm running on CPU simply because the application runs ok. But outside of that reason, I'm unsure why one would even consider GPU. AI: It is true that for training a lot of the parallelization can be exploited by the GPUs, resulting in much faster training. For inference there is less to parallelize, but CNNs will still get an advantage from it, resulting in faster inference. Now you just have to ask yourself: is faster inference important? Do I want these extra dependencies (a good GPU, the right drivers and libraries installed, etc.)? If speed is not an issue, go for CPU. However, note that in my experience GPUs can make inference an order of magnitude faster.
H: Do models without parameters exist? I am reading "A Course in Machine Learning" and, in chapter 2, the author says: "For most models, there will be associated parameters. These are the things that we use the data to decide on. Parameters in a decision tree include: the specific questions we asked, the order in which we asked them, and the classification decisions at the leaves." My question is about the first sentence. Is there any model in machine learning that does not have parameters? I can't think of any. For sure there are models without hyperparameters (for instance, the linear model does not contain any hyperparameter, but it still contains 2 parameters: the slope and the y-intercept). If such parameterless models exist, what is their purpose then? Isn't the whole point of training to tune a model's parameters? AI: Is there any model in machine learning that does not have parameters? Yes. k-nearest neighbors is parameterless (there is only a single hyper-parameter $k$). If such parameterless models exist, what is their purpose then? Isn't the whole point of training to tune a model's parameters? Exactly: such models require no training at all. k-NN in particular relies on having access to the data set at prediction time. The closest thing to "training" this model is storing the data points in a set, but these stored points do not count as parameters.
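A minimal sketch (scikit-learn, toy data) illustrating that "fitting" k-NN is just storing the data; the only thing you choose is the hyperparameter $k$:

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# "Fitting" a k-NN classifier only memorises X and y; nothing is optimised.
knn = KNeighborsClassifier(n_neighbors=5)  # k is a hyperparameter, not a learned parameter
knn.fit(X, y)

# Prediction searches the stored data set for the 5 nearest neighbours.
print(knn.predict(X[:3]))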
H: Text generation using Tensor Factorization Text generation is well studied using Markov chains or NNs, but I am not aware of any work on word sequence prediction in terms of subspace learning. Treating phrases or sentences as temporal data, such as time series, it is possible to represent word sequences as a tensor $T$ of size $|WS| \times |W| \times K$, where $WS$ is the set of word sequences present in the corpus, $W$ represents the set of segmented words, and $K$ is the maximum length of the observed sequences. For instance, for the phrase $ws$ = "word sequence prediction", $T(ws, \text{``sequence''}, 2) = 1$. For an incomplete tensor, where the entries to be predicted are missing, the reconstructed tensor after decomposition can then be used to generate text in terms of the observed word space. My questions are as follows: 1) Are there any works using tensor factorization or factorization machines for word sequence generation? 2) How do subspace learning models differ from generative models such as Recurrent Neural Networks or Belief Networks? What are the downsides of using subspace methods as compared to other established methods? 3) How can one establish the threshold for the length of the predicted sequence? For example, can one look at the $WS_r \times K_r$ space and use cross-validation to find the threshold for each word sequence? Any pointers or answers to any of the above questions are highly appreciated. AI: Tensor Factorization would not work for text generation as a stand-alone technique. There is no way for the decomposition to model long-term dependencies in language. Without modeling long-term language dependencies, its results would be similar to low-order Markov chains. Tensor Factorization could be used as another signal in a larger natural language generation system, for example improving word embeddings.
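To make the representation concrete, here is a minimal sketch of how the tensor $T$ described in the question could be built and factorized on a toy corpus. It assumes the third-party tensorly library (whose API may differ across versions); the corpus, vocabulary handling and rank are arbitrary illustrations, not an established method:

import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

corpus = ["word sequence prediction", "word sequence generation", "text generation model"]
vocab = sorted({w for phrase in corpus for w in phrase.split()})
w_index = {w: i for i, w in enumerate(vocab)}
K = max(len(p.split()) for p in corpus)

# T[ws, w, k] = 1 iff word w occurs at position k+1 of word sequence ws.
T = np.zeros((len(corpus), len(vocab), K))
for s, phrase in enumerate(corpus):
    for k, w in enumerate(phrase.split(), start=1):
        T[s, w_index[w], k - 1] = 1.0
# e.g. T(ws = "word sequence prediction", "sequence", 2) == 1
print(T[0, w_index["sequence"], 1])

# Low-rank CP decomposition; the reconstruction T_hat gives scores for missing entries.
cp = parafac(tl.tensor(T), rank=2)
T_hat = tl.cp_to_tensor(cp)
print(T_hat.shape)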
H: Correlating activity between entities using Python I have some data that contains IDs along with an associated column containing times. What I want to do is to be able to determine which IDs are most similar based upon the times, e.g. which IDs have correlated times for performing an action. The data looks something like this; ID Time(secs) AAAA 1 AAAA 6 AAAA 5 AAAA 2 AAAA 4 BBBB 2 BBBB 4 BBBB 6 BBBB 3 CCCC 3 CCCC 4 CCCC 1 CCCC 6 DDDD 7 DDDD 4 DDDD 5 DDDD 3 Naively I initially thought this would be a simple case of plotting the values and then calculating the correlation coefficient, but I soon realised this isn't possible; ID CorrCoef AAAA>BBBB ???? AAAA>CCCC AAAA>DDDD BBBB>CCCC BBBB>DDDD CCCC>DDDD I am now thinking I need to be able to compare 2 populations using something like a t-statistic, or perhaps by using something like an autoregression. It is fair to say I am now feeling a little bit out of my depth with something that seemed quite basic initially. Does anyone have any pointers as to the best way of doing this? Thanks in advance! Edit Based upon a suggestion below, it seems a KS test could be useful here. What I need to do therefore is subset the data so that it can be fed into the line of code below: scipy.stats.ks_2samp(data1, data2) AI: You may compute the $id$ co-occurrence frequencies in a given time window. Suppose (without loss of generality) your criterion for co-occurrence is that both $id$s must occur at the same second $t$; then, using maximum likelihood estimation, $P(id_{i}|id_{j})$ is: $P(id_{i}|id_{j}) = \frac{count(id_{i}, id_{j})}{count(id_{j})}$ and the maximum likelihood estimate for the joint probability is: $P(id_{i}, id_{j}) = \frac{count(id_{i}, id_{j})}{\sum_{k \in IDs} count(id_{k})}$ where $id_j, id_i \in IDs$ and $IDs$ is the set containing all $id$s (AAAA, BBBB, CCCC, ...). You can then calculate the pointwise mutual information between each $id$ pair, that is, how often two $id$s co-occur compared with what we would expect if they were independent: $I(id_i, id_j) = \log_{2}{\frac{P(id_i, id_j)}{P(id_i)P(id_j)}}$ This gives you an estimate of how strong the association between $id_i$ and $id_j$ is. The same strategy may be used to find similarities: you may think of each $id_i$ as a $|IDs|$-dimensional vector with the co-occurrence frequencies as its values. You can then apply cosine similarity or Pearson correlation to find the most similar vectors ($id$s). EDIT Complementing my answer, the following Python code demonstrates the ideas above for the sample dataset given in the question. First we create our dataframe from the data in the question import pandas as pd import numpy as np import collections d = {'ID': ['AAAA', 'AAAA', 'AAAA', 'AAAA', 'AAAA', 'BBBB', 'BBBB', 'BBBB', 'BBBB', 'CCCC', 'CCCC', 'CCCC', 'CCCC', 'DDDD', 'DDDD', 'DDDD', 'DDDD'], 'Time': [1, 6, 5, 2, 4, 2, 4, 6, 3, 3, 4, 1, 6, 7, 4, 5, 3]} df = pd.DataFrame(d) Then we compute the co-occurrences dfm = df.merge(df, on='Time') dfm = dfm[dfm.ID_x != dfm.ID_y] # ID_x and ID_y are created by the merge df_M = pd.get_dummies(dfm.ID_x).groupby(dfm.ID_y).apply(sum) print(df_M) The dataframe df_M represents the co-occurrence matrix AAAA BBBB CCCC DDDD ID_y AAAA 0 3 3 2 BBBB 3 0 3 2 CCCC 3 3 0 2 DDDD 2 2 2 0 Pointwise Mutual Information Let N be the total number of co-occurrences; then we can compute every joint probability simply by dividing the co-occurrences in the df_M matrix by N.
N = df_M.sum().sum() df_joint_P = df_M/N # Computes every joint probability P(id_i, id_j) The probability of each $id$ is the sum of all its joint probabilities df_ID_P = df_joint_P.sum(axis=0) # Marginalizes to produce P(id_i) Now we have all we need to compute a PMI dataframe idx = [(r, c) for r in list(df_M) for c in list(df_M)] pmi_dict = collections.defaultdict(dict) for r,c in idx: pmi_dict[r][c] = np.log2(2*df_joint_P[r][c]/(df_ID_P[r] * df_ID_P[c])) if df_joint_P[r][c] > 0 else 0 pmi_df = pd.DataFrame(pmi_dict) print(pmi_df) The pointwise mutual information between every pair of different $id$s is non-zero, as shown below AAAA BBBB CCCC DDDD AAAA 0.000000 1.491853 1.491853 1.321928 BBBB 1.491853 0.000000 1.491853 1.321928 CCCC 1.491853 1.491853 0.000000 1.321928 DDDD 1.321928 1.321928 1.321928 0.000000 We can see that every $id$ seems to be almost equally associated with every other. Cosine Similarity Now we can compute the cosine similarity, remembering that it is the $l_2$-normalized dot product df_NM = df_M.div(df_M.pow(2).sum(axis=1).pow(0.5), axis=0) df_cos = df_NM.dot(df_NM.T) print(df_cos) The cosines are AAAA BBBB CCCC DDDD AAAA 1.000000 0.590909 0.590909 0.738549 BBBB 0.590909 1.000000 0.590909 0.738549 CCCC 0.590909 0.590909 1.000000 0.738549 DDDD 0.738549 0.738549 0.738549 1.000000 Obviously every $id$ is (trivially) most similar to itself, but from the data above we see that every $id$ is similar to all the others (confirming the result we found using PMI). We may also notice that $id$ DDDD is more similar to all the other $id$s (at least for this tiny example).
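Since the question's edit asks how to feed the data into scipy.stats.ks_2samp, here is a minimal hedged sketch, reusing the df built above, that subsets the times per ID and runs the two-sample KS test for every pair. Note this compares the distributions of the times rather than their co-occurrence, so it is a complementary view to the PMI/cosine approach above:

from itertools import combinations
from scipy import stats

# Group the times by ID, then run a two-sample KS test for every pair of IDs.
times_by_id = {i: g['Time'].values for i, g in df.groupby('ID')}
for id_a, id_b in combinations(sorted(times_by_id), 2):
    result = stats.ks_2samp(times_by_id[id_a], times_by_id[id_b])
    print(id_a, id_b, round(result.statistic, 3), round(result.pvalue, 3))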