Dataset column summary:

| Column | Summary type | Stats |
| --- | --- | --- |
| Id | stringlengths | 2–6 |
| PostTypeId | stringclasses | 1 value |
| AcceptedAnswerId | stringlengths | 2–6 |
| ParentId | stringclasses | 0 values |
| Score | stringlengths | 1–3 |
| ViewCount | stringlengths | 1–6 |
| Body | stringlengths | 34–27.1k |
| Title | stringlengths | 15–150 |
| ContentLicense | stringclasses | 2 values |
| FavoriteCount | stringclasses | 1 value |
| CreationDate | stringlengths | 23–23 |
| LastActivityDate | stringlengths | 23–23 |
| LastEditDate | stringlengths | 23–23 |
| LastEditorUserId | stringlengths | 2–6 |
| OwnerUserId | stringlengths | 2–6 |
| Tags | listlengths | 1–5 |
| Answer | stringlengths | 32–27.2k |
| SimilarQuestion | stringlengths | 15–150 |
| SimilarQuestionAnswer | stringlengths | 44–22.3k |
110651
1
112428
null
2
142
If MLE (Maximum Likelihood Estimation) cannot give a proper closed-form solution for the parameters in Logistic Regression, why is this method discussed so much? Why not just stick to Gradient Descent for estimating parameters?
Why should MLE be considered in Logistic Regression when it cannot give a definite solution?
CC BY-SA 4.0
null
2022-05-04T17:57:24.917
2022-07-05T18:43:10.603
null
null
125747
[ "regression", "logistic-regression", "parameter-estimation" ]
Maximum likelihood is a method for estimating parameters. Gradient descent is a numerical technique to help us solve equations that we might not be able to solve by traditional means (e.g., we can't get a closed-form solution when we take the derivative and set it equal to zero). The two can coexist. In fact, when we use gradient descent to minimize the crossentropy loss in a logistic regression, we are solving for a maximum likelihood estimator of the regression parameters, as minimizing crossentropy loss and maximizing likelihood are equivalent in logistic regression. In order to descend a gradient, you have to have a function. If we take the negative log-likelihood and descend the gradient until we find the minimum, we have done the equivalent of finding the maximum of the log-likelihood and, thus, the likelihood.
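To make the equivalence concrete, here is a minimal sketch (hypothetical data and an untuned learning rate, plain NumPy) that fits a logistic regression by running gradient descent on the negative log-likelihood, which is exactly the cross-entropy loss:
```
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))                              # hypothetical features
y = (X @ np.array([1.5, -2.0]) + 0.3 > 0).astype(float)    # hypothetical labels

w, b = np.zeros(2), 0.0
lr = 0.1
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))                 # sigmoid predictions
    # Gradient of the negative log-likelihood (= cross-entropy loss)
    grad_w = X.T @ (p - y) / len(y)
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)   # approximate maximum likelihood estimates of the coefficients
```
Descending this loss and maximizing the likelihood are the same optimization, just stated in opposite directions.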
In ML why selecting the best variables?
You are right: if regularization and hyperparameter tuning are used correctly to avoid overfitting, then in theory it should not be a problem (i.e. multicollinearity will not reduce model performance). However, it can matter in a number of practical circumstances. Here are two examples:
- You want to limit the amount of data you need to store for a model that you run frequently; keeping variables that don't contribute to model performance is expensive both in storage and in computation. Although computing resources are not 'scarce', they still cost money, and using extra resources when there is a way to limit them is also a time sink.
- For interpretation's sake, the model is easier to understand if you limit the number of variables, especially if you need to present it to stakeholders (if you work as a data scientist) and explain its performance.
110670
1
110988
null
0
397
I'm working on Arabic speech recognition using the Wav2Vec XLSR model. While fine-tuning the model, it gives the error shown in the pictures below. I can't understand what the problem with librosa is; it is already installed. [](https://i.stack.imgur.com/4D0yi.png) [](https://i.stack.imgur.com/8c0pN.png)
NameError: name 'librosa' is not defined
CC BY-SA 4.0
null
2022-05-05T11:33:02.463
2022-05-16T13:17:43.677
null
null
135374
[ "deep-learning", "data-science-model", "anaconda", "speech-to-text", "library" ]
The problem was solved by creating a new virtual environment and installing all the packages with `pip install` instead of conda.
NameError 'np' is not defined after importing np_utils
The `NameError` occurs because the code references `np.utils` (which requires NumPy to be imported as `np`) instead of the imported `np_utils`. Update the lines to:
```
Y_train = np_utils.to_categorical(y_train, n_classes)
Y_test = np_utils.to_categorical(y_test, n_classes)
```
110683
1
110768
null
1
84
Could someone tell me how to interpret the following graph? [](https://i.stack.imgur.com/k8WIr.png) It shows the effects of the variables in a linear regression, but its interpretation is not clear to me. Why is only half a box shown for workingday? Why doesn't weathersit have whiskers? Why is holiday simply a line at 0? Here is a brief summary of the variables:
- workingday: 1 if the day is neither a weekend nor a holiday, otherwise 0.
- windspeed: normalized wind speed; the values are divided by 67 (max).
- weathersit: 1 = Clear, Few clouds, Partly cloudy; 2 = Mist + Cloudy, Mist + Broken clouds, Mist + Few clouds, Mist; 3 = Light Snow, Light Rain + Thunderstorm + Scattered clouds, Light Rain + Scattered clouds; 4 = Heavy Rain + Ice Pellets + Thunderstorm + Mist, Snow + Fog.
- temp: normalized temperature in Celsius; the values are derived via (t - t_min)/(t_max - t_min), t_min = -8, t_max = +39 (only in hourly scale).
- season: season (1: winter, 2: spring, 3: summer, 4: fall).
- hum: normalized humidity; the values are divided by 100 (max).
- holiday: whether the day is a holiday or not.
How to interpret a linear regression effects graph?
CC BY-SA 4.0
null
2022-05-05T15:07:13.007
2022-05-08T17:00:25.930
2022-05-06T09:22:28.303
119136
119136
[ "machine-learning", "regression", "linear-regression", "interpretation" ]
Note: you didn't mention what this is for, i.e. the target variable that this model is supposed to predict. Anyway, this graph shows, for each independent variable (feature), its effect on predicting the dependent variable (target). A high absolute value (positive or negative) means that the feature actually helps predict the target to some extent, whereas a value close to zero means that the feature doesn't help at all (or only very little). For example, "holiday" brings zero information for predicting the target. For every feature a range of values is shown as a boxplot, and it's normal that these boxplots have different shapes. A narrow boxplot like "workingday" indicates very little variance (i.e. little uncertainty). The thick line in the middle is usually the median. The whiskers show how far outlier values reach; sometimes there are no outliers.
Regression: how to interpret different linear relations?
What you are looking for is the Analysis of Covariance ([ANCOVA](https://en.wikipedia.org/wiki/Analysis_of_covariance)) analysis, which is used to compare two or more regression lines by testing the effect of a categorical factor on a dependent variable (y-var) while controlling for the effect of a continuous co-variable (x-var). [Here](http://r-eco-evo.blogspot.in/2011/08/comparing-two-regression-slopes-by.html) is an example for carrying out the ANCOVA analysis using R.
110718
1
110720
null
6
1333
So, say I have the following sentences ["The dog says woof", "a king leads the country", "an apple is red"]. I can embed each word as an `N`-dimensional vector (e.g. with `Word2Vec`) and represent each sentence as either the sum or the mean of all the words in the sentence. When we represent words as vectors we can do something like `vector(king)-vector(man)+vector(woman) = vector(queen)`, which combines the different "meanings" of each vector into a new one, whereas the mean would place us somewhere in "the middle of all the words". Is there any difference between using the sum or the mean when we want to compare the similarity of sentences, or does it simply depend on the data, the task, etc., which one performs better?
Sum vs mean of word-embeddings for sentence similarity
CC BY-SA 4.0
null
2022-05-06T13:23:31.203
2022-05-06T13:56:06.420
null
null
104872
[ "nlp", "word-embeddings", "word2vec" ]
# TL;DR

You are better off averaging the vectors.

# Average vs sum

Averaging the word vectors is a well-known approach to get sentence-level vectors; some people even call it "Sentence2Vec". Doing this can give you a pretty good embedding space, and if you have multiple sentences represented this way, you can calculate their similarity with a cosine distance. If you sum the values instead, the sentence vectors are not guaranteed to have comparable magnitudes in the vector space: sentences with many words will have very large values, whereas sentences with few words will have small values. I cannot think of a use case where this outcome is desirable, since the semantic value of the embedding becomes heavily dependent on the length of the sentence, yet a long sentence may have a meaning very similar to that of a short one.

## Example

```
Sentence 1 = "I love dogs."
Sentence 2 = "My favourite animal in the whole wide world are men's best friend, dogs!"
```

Since you may want these two sentences to fall close together in the vector space, you need to average the word embeddings.

# Doc2Vec

Another approach is to use [Doc2Vec](https://arxiv.org/pdf/1405.4053v2.pdf), which doesn't average word embeddings but rather treats a full sentence (or paragraph) as a single entity and creates a single embedding for it.
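As a minimal sketch of the averaging-plus-cosine approach described above (the word vectors here are random stand-ins for real pre-trained embeddings such as Word2Vec or GloVe):
```
import numpy as np

rng = np.random.default_rng(0)
# Stand-in vocabulary: in practice these would be pre-trained embeddings.
vocab = {w: rng.normal(size=50) for w in "i love dogs my favourite animal is the dog".split()}

def sentence_vector(sentence):
    # Average the embeddings of the known words in the sentence.
    vecs = [vocab[w] for w in sentence.lower().split() if w in vocab]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

s1 = sentence_vector("I love dogs")
s2 = sentence_vector("my favourite animal is the dog")
print(cosine(s1, s2))
```
Because the average is length-normalized by construction, a long sentence and a short one remain directly comparable.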
How word embedding work for word similarity?
I think you have it mostly correct. Word embeddings can be summed up by: A word is known by the company it keeps. You either predict the word given the context or vice versa. In either case similarity of word vectors is similarity in terms of replaceability. i.e. if two words are similar one could replace the other in the same context. Note that this means that "hot" and "cold" are (or might be) similar within this context. If you want to use word embeddings for a similarity measure of tweets there are a couple approaches you can take. One is to compute paragraph vectors (AKA doc2vec) on the corpus, treating each tweet as a separate document. (There are good examples of running doc2vec on Gensim on the web.) An alternate approach is to AVERAGE the individual word vectors from within each tweet, thus representing each document as an average of its word2vec vectors. There are a number of other issues involved in optimizing similarity on tweet text (normalizing text, etc) but that is a different topic.
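A minimal Gensim `Doc2Vec` sketch of the paragraph-vector approach mentioned above (toy corpus and untuned hyperparameters; real tweets would need proper text normalization first):
```
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tweets = ["the dog says woof", "a king leads the country", "an apple is red"]
docs = [TaggedDocument(words=t.split(), tags=[i]) for i, t in enumerate(tweets)]

model = Doc2Vec(docs, vector_size=50, min_count=1, epochs=100)

# Infer a vector for an unseen tweet and find the most similar training tweet.
vec = model.infer_vector("the dog barks".split())
print(model.dv.most_similar([vec], topn=1))
```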
110752
1
110775
null
0
54
I'm having a hard time wrapping my head around the idea that linear models can use polynomial terms to fit a curve with linear regression. [As seen here](https://statisticsbyjim.com/regression/curve-fitting-linear-nonlinear-regression/). Assuming I haven't misunderstood the above statement, can you achieve good performance with a linear model when trying to fit, say, a parabola in 3D space?
Linear models to deal with non-linear problems
CC BY-SA 4.0
null
2022-05-07T22:47:52.643
2022-05-08T22:22:36.880
null
null
135359
[ "machine-learning" ]
Actually, a model such as $Y = b_0 + b_1X + b_2X^2$ is not a 3D parabola but a 2D parabola. There are only two variables ($Y$ and $X$); in other words, the function is still $Y = f(X)$. A 3D parabola would be a paraboloid, so the model would be $Y = b_0 + b_1X + b_2X^2 + b_3Z + b_4Z^2$, a function of the type $Y = f(X,Z)$. Alternatively, there are mixed models such as $Y = b_0 + b_1X + b_2X^2 + b_3Z$, which would be a parabolic cylinder in 3D and also $Y = f(X,Z)$. As @Evator mentioned, all these models are in fact linear models, where the term "linear" refers to the coefficients, not the variables. Thus, linearity in the variables is different from linearity in the coefficients. These models can fit quite well, but sometimes there is a cost: multicollinearity and increased variance.
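A small scikit-learn sketch (simulated data, arbitrary coefficients) showing that a paraboloid $Y = f(X, Z)$ can be fit with ordinary linear regression once the squared terms are added as features:
```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
XZ = rng.uniform(-3, 3, size=(500, 2))     # columns: X and Z
y = (1 + 2*XZ[:, 0] - 0.5*XZ[:, 0]**2      # simulated paraboloid-like surface
     + 3*XZ[:, 1] + XZ[:, 1]**2
     + rng.normal(0, 0.1, 500))            # a little noise

# Degree-2 expansion adds X^2, Z^2 and X*Z as extra columns.
features = PolynomialFeatures(degree=2, include_bias=False).fit_transform(XZ)
model = LinearRegression().fit(features, y)   # still linear in the coefficients
print(model.score(features, y))               # R^2 close to 1
```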
Which statistical analysis to use when you assume non-linear model but not-specified?
You will need to know something about the metadata. Bear in mind that traditional k-means cluster analysis will work only for continuous variables. There are also other clustering models, such as hierarchical clustering, where knowledge of the metadata is very important. I would also strongly suggest that you look into GLMs (generalized linear models), which handle response distributions such as normal, exponential, gamma, Poisson, Bernoulli, binomial, multinomial, and negative binomial. So if you know a little bit about the data types, you can at least learn something about the design without having access to the data.
110798
1
110800
null
0
47
I've been using the DBSCAN implementation from Python's `sklearn.cluster`. The problem is that I'm working with 360° lidar data, which means that my data has a ring-like structure. To illustrate my problem, take a look at this picture. The colours of the points are the groups assigned by DBSCAN (please ignore the crosses, they don't have anything to do with the task). In the picture I have circled two groups which should be considered the same group, as there is no distance between them (after 2π it repeats again, obviously...) [](https://i.stack.imgur.com/5UDZo.png) Does someone have an idea? Of course I could implement my own version of DBSCAN, but my question is whether there is a way to use `sklearn.cluster.dbscan` with ring-like structures.
DB-Scan with ring like data
CC BY-SA 4.0
null
2022-05-09T14:20:50.157
2022-05-09T14:51:57.333
null
null
135525
[ "scikit-learn", "clustering", "dbscan" ]
This solved my problem: [https://stackoverflow.com/questions/48767965/dbscan-with-custom-metric](https://stackoverflow.com/questions/48767965/dbscan-with-custom-metric) This is the formula I used for my distance, with `n = 2*pi`: [https://math.stackexchange.com/a/1149125](https://math.stackexchange.com/a/1149125)
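For reference, a minimal sketch of the same idea: pass a custom circular distance to scikit-learn's DBSCAN so that angles near 0 and near 2π are treated as close (toy angles; `eps` and `min_samples` are illustrative, not tuned):
```
import numpy as np
from sklearn.cluster import DBSCAN

def circular_distance(a, b, n=2*np.pi):
    # Distance on a ring of circumference n: wraps around at n.
    d = np.abs(a[0] - b[0])
    return min(d, n - d)

angles = np.array([0.05, 0.1, 6.25, 3.1, 3.2]).reshape(-1, 1)   # 6.25 is just below 2*pi
labels = DBSCAN(eps=0.3, min_samples=2, metric=circular_distance).fit_predict(angles)
print(labels)   # the points near 0 and near 2*pi end up in the same cluster
```
With real 2D lidar points you would typically fold the radial coordinate into the metric as well, not just the angle.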
How can we evaluate DBSCAN parameters?
[OPTICS](http://www.dbs.informatik.uni-muenchen.de/Publikationen/Papers/OPTICS.pdf) gets rid of $\varepsilon$, so you might want to have a look at it. In particular, the reachability plot is a way to visualize what good choices of $\varepsilon$ in DBSCAN might be. Wikipedia ([article](https://en.wikipedia.org/wiki/OPTICS_algorithm)) illustrates it pretty well. The image on the top left shows the data points, and the image on the bottom left is the reachability plot: [](https://i.stack.imgur.com/TamFM.png) The $y$-axis shows different values of $\varepsilon$, and the valleys are the clusters. Each "bar" is for a single point, where the height of the bar is the minimal distance to the already printed points.
110812
1
110897
null
1
240
I have been exploring clustering algorithms (K-Means, K-Medoids, Ward Agglomerative, Gaussian Mixture Modeling, BIRCH, DBSCAN, OPTICS, Common Nearest-Neighbour Clustering) with multidimensional data. I believe that the clusters in my data occur across different subsets of the features rather than occurring across all features, and I believe that this impacts the performance of the clustering algorithms. To illustrate, below is Python code for a simulated dataset:
```
## Simulate a dataset.
import numpy as np, matplotlib.pyplot as plt
from sklearn.cluster import KMeans
np.random.seed(20220509)

# Simulate three clusters along 1 dimension.
X_1_1 = np.random.normal(size = (1000, 1)) * 0.10 + 1
X_1_2 = np.random.normal(size = (2000, 1)) * 0.10 + 2
X_1_3 = np.random.normal(size = (3000, 1)) * 0.10 + 3

# Simulate three clusters along 2 dimensions.
X_2_1 = np.random.normal(size = (1000, 2)) * 0.10 + [4, 5]
X_2_2 = np.random.normal(size = (2000, 2)) * 0.10 + [6, 7]
X_2_3 = np.random.normal(size = (3000, 2)) * 0.10 + [8, 9]

# Combine into a single dataset.
X_1 = np.concatenate((X_1_1, X_1_2, X_1_3), axis = 0)
X_2 = np.concatenate((X_2_1, X_2_2, X_2_3), axis = 0)
X = np.concatenate((X_1, X_2), axis = 1)
print(X.shape)
```
Visualize the clusters along dimension 1:
```
plt.scatter(X[:, 0], X[:, 0])
```
[](https://i.stack.imgur.com/2L93e.png)
Visualize the clusters along dimensions 2 and 3:
```
plt.scatter(X[:, 1], X[:, 2])
```
[](https://i.stack.imgur.com/b1mag.png)
K-Means with all 3 Dimensions
```
K = KMeans(n_clusters = 6, algorithm = 'full', random_state = 20220509).fit_predict(X) + 1
```
Visualize the K-Means clusters along dimension 1:
```
plt.scatter(X[:, 0], X[:, 0], c = K)
```
[](https://i.stack.imgur.com/iGwXR.png)
Visualize the K-Means clusters along dimensions 2 and 3:
```
plt.scatter(X[:, 1], X[:, 2], c = K)
```
[](https://i.stack.imgur.com/bqjYf.png)
The K-Means clusters developed with all 3 dimensions are incorrect.

K-Means with Dimension 1 Alone
```
K_1 = KMeans(n_clusters = 3, algorithm = 'full', random_state = 20220509).fit_predict(X[:, 0].reshape(-1, 1)) + 1
```
Visualize the K-Means clusters along dimension 1:
```
plt.scatter(X[:, 0], X[:, 0], c = K_1)
```
[](https://i.stack.imgur.com/RnljA.png)
The K-Means clusters developed with dimension 1 alone are correct.

K-Means with Dimensions 2 and 3 Alone
```
K_2 = KMeans(n_clusters = 3, algorithm = 'full', random_state = 20220509).fit_predict(X[:, [1, 2]]) + 1
```
Visualize the K-Means clusters along dimensions 2 and 3:
```
plt.scatter(X[:, 1], X[:, 2], c = K_2)
```
[](https://i.stack.imgur.com/6skXZ.png)
The K-Means clusters developed with dimensions 2 and 3 alone are correct.

Clustering Between Dimensions

Although I did not intend for dimension 1 to form clusters with dimensions 2 or 3, it appears that clusters between dimensions emerge. Perhaps this might be part of why the K-Means algorithm struggles when developed with all 3 dimensions.

Visualize the clusters between dimensions 1 and 2:
```
plt.scatter(X[:, 0], X[:, 1])
```
[](https://i.stack.imgur.com/q0al3.png)
Visualize the clusters between dimensions 1 and 3:
```
plt.scatter(X[:, 0], X[:, 2])
```
[](https://i.stack.imgur.com/YEIbu.png)
Questions
- Am I making a conceptual error somewhere? If so, please describe or point me to a resource. If not:
- If I did not intend for dimension 1 to form clusters with dimensions 2 or 3, why do clusters between those dimensions emerge? Will this occur with higher-dimensional clusters? Is this why the K-Means algorithm struggles when developed with all 3 dimensions?
- How can I select the different subsets of the features where different clusters occur (3 clusters along dimension 1 alone, and 3 clusters along dimensions 2 and 3 alone, in the example above)? My hope is that developing clusters separately with the right subsets of features will be more robust than developing clusters with all features.

Thank you very much!

UPDATE: Thank you for the very helpful answers for feature selection and cluster metrics. I have asked a more specific question: [Why Do a Set of 3 Clusters Across 1 Dimension and a Set of 3 Clusters Across 2 Dimensions Form 9 Apparent Clusters in 3 Dimensions?](https://datascience.stackexchange.com/questions/111047/why-do-a-set-of-3-clusters-across-1-dimension-and-a-set-of-3-clusters-across-2-d)
How To Develop Cluster Models Where the Clusters Occur Along Subsets of Dimensions in Multidimensional Data?
CC BY-SA 4.0
null
2022-05-09T21:13:17.063
2022-05-17T21:05:46.860
2022-05-17T21:05:46.860
58488
58488
[ "python", "clustering", "feature-selection", "k-means" ]
The field of feature selection for clustering studies this topic. A specific algorithm for feature selection for clustering is Spectral Feature Selection (SPEC), which estimates feature relevance from feature consistency with the spectrum of the similarity matrix. Features consistent with the graph structure have similar values for instances that are near each other in the graph. These features should be more relevant, since they behave similarly within each group of similar samples, a.k.a. clusters. "[Feature Selection for Clustering: A Review](https://www.taylorfrancis.com/chapters/edit/10.1201/9781315373515-2/feature-selection-clustering-review-salem-alelyani-jiliang-tang-huan-liu)" by Alelyani et al. goes into greater detail. There is also a [Feature Selection for Clustering Python package](https://github.com/danilkolikov/fsfc).
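For illustration, here is a minimal Laplacian-score-style sketch in the same spirit as SPEC (this is not the fsfc package's API, and whether the scores usefully separate informative from noisy features depends heavily on the data and on how the similarity graph is built):
```
import numpy as np
from sklearn.neighbors import kneighbors_graph

def laplacian_scores(X, n_neighbors=10):
    """Score each feature by how consistent it is across a kNN similarity graph.
    Lower score = the feature varies smoothly over neighbouring samples."""
    S = kneighbors_graph(X, n_neighbors=n_neighbors, mode='connectivity').toarray()
    S = np.maximum(S, S.T)                      # symmetrize the adjacency matrix
    d = S.sum(axis=1)                           # node degrees
    L = np.diag(d) - S                          # graph Laplacian
    scores = []
    for j in range(X.shape[1]):
        f = X[:, j] - (X[:, j] @ d) / d.sum()   # centre with degree weighting
        scores.append((f @ L @ f) / (f @ (d * f)))
    return np.array(scores)

X = np.random.default_rng(0).normal(size=(300, 5))   # replace with your own data
print(laplacian_scores(X))
```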
Clustering data set with multiple dimensions
Welcome to the community. There are many criteria on the basis of which you can cluster the recipes. The usual way to do this is to represent recipes as vectors, so each of your 91 recipes can be represented by a vector of 40 dimensions. This means that the system or machine will now identify your recipes as vectors in a 40-dimensional space. To check the "similarity" between recipes, the two most common metrics are the Euclidean distance ([https://en.wikipedia.org/wiki/Euclidean_distance](https://en.wikipedia.org/wiki/Euclidean_distance)) and cosine similarity ([https://en.wikipedia.org/wiki/Cosine_similarity](https://en.wikipedia.org/wiki/Cosine_similarity)). Coming back to how to cluster the data, you can use KMeans; it is an unsupervised algorithm, and the only thing you need to input is how many clusters you want. Scikit-Learn in Python has a very good implementation of KMeans ([visit this link](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.html#sklearn.cluster.KMeans)). However, there are two conditions: 1) as said before, it needs the number of clusters as an input, and 2) it is a Euclidean-distance-based algorithm, not a cosine-similarity-based one. A better alternative is hierarchical clustering, which creates the clusters recursively in a top-down (divisive) or bottom-up (agglomerative) approach ([read about it here](https://www.analyticsvidhya.com/blog/2019/05/beginners-guide-hierarchical-clustering/)). It is better than KMeans in two ways: 1) you have some flexibility in how to cut the recursion to obtain the clusters, either on the basis of the number of clusters you want (like KMeans) or on the basis of the distance between cluster representatives, and 2) you can also choose among various similarity criteria or affinities, like Euclidean distance, cosine similarity, etc. A sketch of the cosine-based option is shown below. Hope this helps. Thanks.
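As a concrete sketch of the hierarchical option with cosine similarity (random stand-in data for the 91 recipes; the number of clusters is illustrative, and older scikit-learn versions call the `metric` argument `affinity` instead):
```
import numpy as np
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(0)
recipes = rng.random((91, 40))   # stand-in for 91 recipes x 40 ingredient features

# Cosine-based agglomerative clustering; 'average' linkage works with cosine.
clusterer = AgglomerativeClustering(n_clusters=5, metric="cosine", linkage="average")
labels = clusterer.fit_predict(recipes)
print(np.bincount(labels))       # cluster sizes
```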
110817
1
110857
null
0
113
I have a data set which, no matter how I tune t-SNE, won't end up in clearly separated clusters or even patterns and structures. Ultimately, it results in arbitrarily distributed data points all over the plot, with some more data points of one class here and some of another one somewhere else. [](https://i.stack.imgur.com/TRWPJ.png) Is it up to t-SNE, me and/or the data? I'm using
```
Rtsne(df_tsne,
      perplexity = 25,
      max_iter = 1000000,
      eta = 10,
      check_duplicates = FALSE)
```
Does t-SNE have to result in clear clusters / structures?
CC BY-SA 4.0
null
2022-05-10T04:33:09.723
2022-05-11T12:16:58.527
null
null
71246
[ "r", "clustering", "tsne" ]
No, t-SNE does not have to result in clear clusters. It is a low-dimensional visualization of high-dimensional data. So, if your data points are well separated in the low-dimensional plot, it means that they can be distinguished in that lower dimension. The idea behind t-SNE is to model pairwise probabilities between data points; points far from each other get a low probability. I would suggest having a look at this link: [https://towardsdatascience.com/t-distributed-stochastic-neighbor-embedding-t-sne-bb60ff109561](https://towardsdatascience.com/t-distributed-stochastic-neighbor-embedding-t-sne-bb60ff109561)
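To see that a diffuse plot can simply reflect a lack of structure in the input, here is a small scikit-learn sketch (the question uses R's Rtsne; this is the analogous idea in Python) running t-SNE on pure noise:
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))          # high-dimensional data with no cluster structure

emb = TSNE(n_components=2, perplexity=25, random_state=0).fit_transform(X)

plt.scatter(emb[:, 0], emb[:, 1], s=5)   # typically a diffuse blob, not clear clusters
plt.show()
```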
What does it mean by “t-SNE retains the structure of the data”?
You should break this down one step further: retaining local structure and retaining global structure.

---

- Other well-understood methods, such as Principal Component Analysis, are great at retaining global structure, because they look at ways in which the dataset's variance is retained globally, across the entire dataset.
- t-SNE works differently, by looking at locally neighbouring datapoints. It does this by computing a metric between each datapoint and a given number of neighbours, modelling them as lying within a t-distribution (hence the name: t-distributed Stochastic Neighbourhood Embedding). It then tries to find an embedding such that neighbours in the original n-dimensional space are also found close together in the reduced (embedded) dimensional space. It does this by minimising the KL-divergence between the before and after datapoint distributions, $\mathbb{P}$ and $\mathbb{Q}$ respectively. This method has the benefit of retaining local structure, so clusters in the low-dimensional space should be interpretable as datapoints that were also very similar in the high-dimensional space.

t-SNE works remarkably well on many problems; however, there are a few things to watch out for:

- Because we now have some useful local structure retained, we essentially trade that off against the ability to retain global structure. This means you cannot really compare, e.g., 3 clusters in the final embedding where 2 are close together and 1 is far away: it does not mean they were also far away from each other in the original space.
- t-SNE can be very sensitive to its perplexity parameter. In fact, you might get different results with the three-cluster example in point 1, using an only slightly different perplexity value. This value can roughly be equated to "how many points shall we include in the t-distribution to find neighbours of a datapoint"; it essentially gives the area which is encompassed by the t-distribution.

---

I would recommend [watching this lecture](https://www.youtube.com/watch?v=RJVL80Gg3lA) by the author of t-SNE, Laurens van der Maaten, as well as getting some intuition for t-SNE and its parameters using [this great visual explanation](https://distill.pub/2016/misread-tsne/). There are also some good answers [here on CrossValidated](https://stats.stackexchange.com/questions/238538/are-there-cases-where-pca-is-more-suitable-than-t-sne) with a little more technical information.
110834
1
110859
null
1
52
I have a multivariate time series of weather data: temperature, humidity and wind strength ($x_{c,t},y_{c,t},z_{c,t}$ respectively). I have this data for a dozen different cities ($c\in \{c_1,c_2,...,c_{12}\}$). I also know the values of certain fixed attributes for each city. For example, altitude ($A$), latitude ($L$) and distance from the ocean ($D$) are fixed for each city (i.e. they are time-independent). Let $p_c=(A_c,L_c,D_c)$ be this fixed parameter vector for city $c$. I have built an LSTM in Keras [(based on this post)](https://machinelearningmastery.com/how-to-develop-lstm-models-for-time-series-forecasting/) to predict the time series from some initial starting point, but it does not make use of $p_c$ (it just looks at the time series values). My question is: can the fixed parameter vector $p_c$ be taken into account when designing/training my network? The purpose of this is essentially to: (1) train an LSTM on all data from all cities, then (2) forecast the weather time series for a new city with known $A_{new},L_{new},D_{new}$ values (but no other data, i.e. no weather history for this city). (A structure different from LSTM is fine, if that's more suited.)
RNN/LSTM timeseries, with fixed attributes per run
CC BY-SA 4.0
null
2022-05-10T15:08:32.377
2022-05-11T13:14:52.010
2022-05-10T15:58:31.563
135530
135530
[ "neural-network", "keras", "time-series", "lstm", "rnn" ]
You can create a sort of encoder-decoder network with two different inputs.
```
latent_dim = 16

# First branch of the net is an LSTM which finds an embedding for the (x,y,z) inputs
xyz_inputs = tf.keras.Input(shape=(window_len_1, n_1_features), name='xyz_inputs')

# Encoding xyz_inputs
encoder = tf.keras.layers.LSTM(latent_dim, return_state=True, name='Encoder')
encoder_outputs, state_h, state_c = encoder(xyz_inputs)  # Apply the encoder object to xyz_inputs.

city_inputs = tf.keras.Input(shape=(window_len_2, n_2_features), name='city_inputs')

# Combining city inputs with recurrent branch output
decoder_lstm = tf.keras.layers.LSTM(latent_dim, return_sequences=True, name='Decoder')
x = decoder_lstm(city_inputs, initial_state=[state_h, state_c])

x = tf.keras.layers.Dense(16, activation='relu')(x)
x = tf.keras.layers.Dense(16, activation='relu')(x)
output = tf.keras.layers.Dense(1, activation='relu')(x)

model = tf.keras.models.Model(inputs=[xyz_inputs, city_inputs], outputs=output)

optimizer = tf.keras.optimizers.Adam()
loss = tf.keras.losses.Huber()
model.compile(loss=loss, optimizer=optimizer, metrics=["mae"])
model.summary()
```
Here, of course, I inserted arbitrary numbers for the layer sizes, latent dimensions, etc. With such code you can have different features as inputs for the xyz branch and the city branch, and these have to be passed as arrays. Of course, to predict you have to give the model the "xyz_inputs" and the city features of the city you want to predict.
Using RNN (LSTM) for predicting one future value of a time series
- The way you are doing it is just fine. The idea in time series prediction is basically to do regression. What you have probably seen in other places regarding a "vector" refers to the size of the input, i.e. the feature vector. Now, assuming that you have t timesteps and you want to predict time t+1, the best way of doing it, using either time series analysis methods or RNN models like LSTM, is to train your model on data up to time t to predict t+1. Then t+1 becomes the input for the next prediction, and so on. There is a good example here, based on an LSTM using the pybrain framework.
- Regarding your question on batch_size, you first need to understand the difference between batch learning and online learning. Batch size indicates the subset of samples that your algorithm is going to use in each gradient descent optimization step, and it has nothing to do with the way you input the data or what you expect your output to be. For more information I suggest you read this kaggle post.
110882
1
110896
null
0
28
I am a beginner in machine learning. I would like to ask: how can I set the same number of data points in the different ranges of a correlation chart? Or are there any techniques for doing that? [](https://i.stack.imgur.com/ZrfNh.png) Specifically, I want the same number of data points in each range (0-10; 10-20; 20-30; ...) in the image above. Thanks for any help.
How to set the same number of datapoints in the different ranges in correlation chart
CC BY-SA 4.0
null
2022-05-12T09:54:12.373
2022-05-12T18:36:53.287
2022-05-12T14:46:09.027
135662
135662
[ "machine-learning", "dataset", "correlation" ]
You can bin your variables to prevent overplotting and make the output cleaner. Here is an example from StackOverflow: [https://stackoverflow.com/questions/16947210/making-binned-scatter-plots-for-two-variables-in-ggplot2-in-r](https://stackoverflow.com/questions/16947210/making-binned-scatter-plots-for-two-variables-in-ggplot2-in-r) This may not be exactly what you need since you did say you want the same number of datapoints in each range (or bin). You would have to add some code if you wanted that exact format.
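If the goal is specifically an equal number of points per bin (rather than equal-width ranges), quantile-based binning does that directly; a minimal pandas sketch with made-up data:
```
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.exponential(20, 1000), "y": rng.normal(size=1000)})

# Equal-width bins (0-10, 10-20, ...) give unequal counts per bin:
df["width_bin"] = pd.cut(df["x"], bins=range(0, 110, 10))

# Quantile bins give (approximately) the same number of points per bin:
df["count_bin"] = pd.qcut(df["x"], q=10)

print(df["width_bin"].value_counts().sort_index())
print(df["count_bin"].value_counts().sort_index())
```
You can then aggregate or plot `y` per quantile bin instead of per fixed-width range.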
Correlations - Get values in the way we want
Several kernel functions can serve as similarity functions (= scores); see a list, for example, [here](https://www.otexts.org/1560). You can try several of them and see which suits you best. You need something that drops fast at low distances. You can try $$score = \frac{1}{(1+distance)^2}$$ and adjust a coefficient in front of the distance so that the score fits between 0 and 1. ![enter image description here](https://i.stack.imgur.com/oufxf.png) About your picture: what are the axis labels, and what are the x-ticks?
110884
1
110893
null
0
35
Question: ABC Open University has a Teaching and Learning Analytics Unit (TLAU) which aims to provide information for data-driven and evidence-based decision making in both teaching and learning in the university. One of the current projects in TLAU is to analyse student data and give advice on how to improve students’ learning performance. The analytics team for this project has collected over 10,000 records of students who have completed a compulsory course ABC411 from 2014 to 2019. [](https://i.stack.imgur.com/HEy9r.png) [](https://i.stack.imgur.com/JpvyZ.png)
How do I calculate the accuracy rate of predicting “Fail”? Am I supposed to create a confusion matrix?
CC BY-SA 4.0
null
2022-05-12T12:20:04.943
2022-05-14T10:23:00.237
2022-05-12T15:35:18.237
135672
135672
[ "data-mining" ]
Strictly speaking, calculating accuracy doesn't require the details of a confusion matrix: it's simply the proportion of correct predictions. Since there are 4 possible classes in this exercise and we are interested only in the accuracy of the class 'fail', the 3 other classes are considered as a single class 'not fail'. So to obtain the accuracy of 'fail', sum:
- the number of students predicted as 'fail' who truly fail (True Positive cases)
- the number of students predicted as 'not fail' who truly don't fail (True Negative cases)

and then divide by the total number of students.

---

Edit to answer the comment: the DT shows, for every node, the proportion of instances by class for the subset of data that it receives based on the previous conditions (see a short explanation about DTs [here](https://datascience.stackexchange.com/a/108662/64377)). The instances are predicted at the level of leaf nodes, i.e. nodes with no children. A leaf node simply assigns the majority class. For example, if we take the leaf node "studied_credits>=82.500" (just below the root), the majority class is 'withdrawn'. This means that the 5565 instances in this leaf are predicted 'withdrawn', which means 'not fail' for our purpose. This includes 1120 instances which actually should be 'fail', so this leaf node results in 4445 TNs and 0 TPs (and also 1120 FNs, but we are not interested in those for accuracy). By doing this for every leaf node you should obtain the total number of TPs and TNs. The total number of instances is given in the root node: it's 15370.
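As a tiny worked sketch of that arithmetic (the first leaf's counts come from the example above; the remaining leaf counts are hypothetical placeholders, not values from the exercise):
```
# (tp, tn) contributed by each leaf node; the first entry is the leaf discussed
# above (0 TPs, 4445 TNs), the rest are hypothetical placeholders.
leaf_counts = [(0, 4445), (850, 3200), (300, 5000)]

tp = sum(t for t, _ in leaf_counts)
tn = sum(n for _, n in leaf_counts)
total = 15370                      # given in the root node

accuracy_fail = (tp + tn) / total
print(accuracy_fail)
```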
To calculate my confusion matrix with recall and precision, my test set need to be equal(balanced)?
I don't think there is any reason to modify the matrix, so keep it as it is. Even if you scale it, what purpose does it serve? At the end of the day your model does not change even if you modify your confusion matrix. In my opinion you can use other metrics, e.g. F1-score (or F-beta score), AUC score, etc., to judge your model. The confusion matrix only provides a visualization of where your model got "confused", and I would say it is less useful for binary classification (as you only have false positives or false negatives). The metrics above serve as a better judge for evaluating your model. This is a related question which you can probably [check](https://stackoverflow.com/questions/20927368/python-how-to-normalize-a-confusion-matrix).
110908
1
110911
null
0
278
Let me start by saying my machine learning experience is... dangerous at this stage. I'm still a beginner. I have a binary classification data set of about 100 000 records. 10% of the records are positive and the rest obviously negative; thus a highly skewed dataset. It is extremely important to maximize the positive (true positive) prediction accuracy (recall) at the expense of negative (true negative) prediction accuracy. Thus, I would rather have an overall 70% accuracy if positive accuracy is 90%+, compared to a low positive accuracy and a high overall accuracy. You can already see the issue here. Training the model below obviously optimizes the loss for the entire dataset, so priority is given to the negative records, which make up 90% of the dataset. As a result, the overall dataset accuracy is high, but the true positive accuracy (recall) is horrible.
```
model = keras.Sequential()
model.add(layers.Dense(128, activation='relu', input_dim=35))
model.add(layers.Dense(128, activation='relu'))
model.add(layers.Dense(1, activation='sigmoid'))

model.compile(optimizer='adam',
              loss='binary_crossentropy',
              metrics=['accuracy'])
```
One idea would be to try changing the sigmoid threshold to less than 0.5 to give preference to recall, but to begin with I have no idea how to do this or whether it is even a valid method. Any advice will be appreciated.
Keras Binary Classification - Maximizing Recall
CC BY-SA 4.0
null
2022-05-13T09:03:27.753
2022-05-13T12:28:54.217
null
null
135706
[ "classification", "keras", "binary-classification" ]
This is the kind of solution that I was looking for: [https://github.com/huanglau/Keras-Weighted-Binary-Cross-Entropy/blob/master/DynCrossEntropy.py](https://github.com/huanglau/Keras-Weighted-Binary-Cross-Entropy/blob/master/DynCrossEntropy.py)
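Two closely related levers, for reference, are Keras's built-in `class_weight` argument and lowering the decision threshold at prediction time. A hedged sketch reusing the `model` from the question (it assumes `X_train`, `y_train` and `X_test` arrays already exist; the weights and threshold are illustrative starting points, not tuned values):
```
# X_train, y_train, X_test are assumed to be your prepared arrays.
# Penalize mistakes on the rare positive class more heavily (9x here, roughly
# matching the 10%/90% class ratio).
model.fit(X_train, y_train, epochs=20, batch_size=256,
          class_weight={0: 1.0, 1: 9.0})

# And/or lower the sigmoid threshold from 0.5 to favour recall:
probs = model.predict(X_test)
preds = (probs > 0.3).astype(int)   # 0.3 is an illustrative threshold
```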
How to maximize recall?
Train to avoid false negatives

What your network learns depends on the loss function you pass it. By choosing this function you can emphasize various things: overall accuracy, avoiding false negatives, false positives, etc. In your case you probably use a cross-entropy loss in combination with a softmax classifier. While softmax squashes the prediction values to sum to 1 across all classes, the cross-entropy loss penalises the distance between the ground truth and the prediction. In this calculation it does not take into account what the values of the "false negative" predictions are. In other words, the loss function only cares about the correct class and its related prediction, not about the values of all other classes. Since you want to avoid false negatives, this behaviour is probably exactly what you need. But if you also care about the distance between the actual class and the false predictions, another loss function that takes the false values into account might serve you better. Given your high accuracy, this poses the risk that your overall performance will drop.

What to do then?

Making the wrong prediction and being very sure about it is not uncommon. There are millions of things you could look at, so your best guess is probably to investigate the errors. For example, you could use a confusion matrix to recognize patterns of which classes are mixed with which. If there is structure, you might need more samples of a certain class, or there may be labelling errors in your training data. Another way to go would be to manually look at all (or some) of the errors. Something as basic as listing the errors in a table and trying to find specific characteristics can guide you towards what you need to do. For example, it would be understandable if your network usually gets the "difficult" examples wrong. But maybe there is some other clear systematic pattern your network has not picked up yet due to lack of data?
110922
1
110924
null
0
177
I noticed that I am getting different feature importance results with each random forest run even though they are using the same parameters. Now, I know that a random forest model takes observations randomly which is causing the importance levels to vary. This is especially shown for the less important variables. My question is how does one interpret the variance in random forest results when running it multiple times? I know that one can reduce the instability level of results by increasing the number of trees; however, this doesn't really tell me if my feature importance results are "true" though they may be true for that specific run (but not necessarily for a separate run). Even if I were to take an extremely large number of trees and average the feature importance results for each variable, that still doesn't necessarily confirm that it will produce the same importance results if I repeat that exact same process again. Additionally, I have tried it with an extremely large number of trees and still got a slight variation (it did significantly reduce the variance of my results) in my feature importance results between runs. Is there any method that I can use to interpret this variance of importance between runs? I cannot set a seed because I need stable (similar) results across different seeds. Any help at all would be greatly appreciated!
Interpreting the variance of feature importance outputs with each random forest run using the same parameters
CC BY-SA 4.0
null
2022-05-13T15:51:00.320
2022-05-13T18:46:28.667
2022-05-13T16:01:08.987
135581
135581
[ "machine-learning", "random-forest", "feature-importances", "predictor-importance" ]
Random Forests are full of 'randomness', from selecting and resampling the actual data (bootstrapping) to selecting the best features that go into the individual decision trees. With all of this sampling going on, the starting seed will affect all of these intermediate results as well as the final set of trees. Since you asked about feature importance, it will affect the ranking as well. So it is always best to keep the seed the same. If your results are changing and you are doing multiple runs, averaging the feature importance over all of the runs should give you a good idea of what the 'true' value should be.
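A small sketch of that averaging procedure (synthetic data; the numbers of repeats and trees are illustrative, not recommendations):
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, n_informative=4, random_state=0)

importances = []
for seed in range(20):                                   # repeat with different seeds
    rf = RandomForestClassifier(n_estimators=500, random_state=seed).fit(X, y)
    importances.append(rf.feature_importances_)

importances = np.array(importances)
print(importances.mean(axis=0))   # average importance per feature
print(importances.std(axis=0))    # run-to-run variability per feature
```
The standard deviation across runs also gives you a direct, quantitative handle on how unstable each feature's importance really is.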
Variable Importance Random Forest on R
What is the function doing for each variable?

1. Record the Out-Of-Bag (OOB) accuracy for each tree.
2. "Shuffle" or permute the values of that variable. This means you take all the values of that variable in the data and assign those values randomly back out to the observations, which is a way of introducing noise and getting rid of the signal that the variable provided.
3. Now find the OOB accuracy again, but this time the values for that variable are incorrect since we permuted them. By introducing noise where your model expects signal, you should see a decrease in performance.
4. Compare the original accuracy in (1) to the accuracy in (3) for each variable. If the model performance decreases a lot for a variable in step (3) compared to (1), then it is deemed to have greater importance.

Why does removing the most important variable not have a negative effect on accuracy? (my guess)

Probably because that important variable is correlated with some other variable(s) you have. Your model can capture the information contained in the missing important variable by using a few other variables to make up for it. When you drop the important variable, which other variables see a notable gain?
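For reference, scikit-learn implements the shuffling procedure described above as `permutation_importance` (the question is about R's randomForest; this is the Python analogue, evaluated on held-out data rather than OOB samples, which is a common variant):
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

result = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)
print(result.importances_mean)   # drop in accuracy when each feature is shuffled
print(result.importances_std)
```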
110938
1
110939
null
0
40
I'm starting to learn how convolutional neural networks work, and I have a question regarding the filters. Are these chosen manually or are they generated by the network in training? If it's the latter, are the coefficients in the filters chosen at random, and then as the network is trained they are "corrected"? Any help or insight you might be able to provide me in this matter is greatly appreciated!
Coefficients values in filter in Convolutional Neural Networks
CC BY-SA 4.0
null
2022-05-14T18:56:05.743
2022-05-14T19:18:56.267
null
null
135743
[ "neural-network", "convolutional-neural-network", "training" ]
The values in the filters are parameters that are learned by the network during training. When creating the network the values are initialized randomly according to some initialization scheme (e.g. Kaiming He initialization) and then during training are updated to achieve a lower loss (i.e. the learning process).
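A tiny PyTorch sketch of both points: the filter coefficients start from a random initialization (Kaiming He here) and are then updated by the optimizer during training (the input/target tensors below are just placeholders):
```
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3)
nn.init.kaiming_normal_(conv.weight)           # random initialization of the filter coefficients
before = conv.weight.detach().clone()

# One dummy training step on random data.
x = torch.randn(4, 1, 28, 28)
target = torch.randn(4, 8, 26, 26)
loss = F.mse_loss(conv(x), target)
loss.backward()
torch.optim.SGD(conv.parameters(), lr=0.1).step()

print((conv.weight - before).abs().max())      # non-zero: the filters were updated ("corrected")
```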
Filters in convolutional autoencoders
Autoencoders are meant to reduce the dimensionality of your data. Increasing the number of filters would do the opposite.
110968
1
110981
null
1
92
There are several popular word embeddings available (e.g., [Fasttext](https://arxiv.org/abs/1607.04606) and [GloVe](https://nlp.stanford.edu/pubs/glove.pdf)). In short, those embeddings are a tool to encode words along with a sensible notion of semantics attached to those words (i.e. words with similar semantics are nearly parallel). Question: Is there a similar notion of character embedding? By 'character embedding' I understand an algorithm that allows us to encode characters in order to capture some syntactic similarity (i.e. similarity of character shapes or contexts).
Is there a sensible notion of 'character embeddings'?
CC BY-SA 4.0
null
2022-05-15T19:17:03.990
2022-05-16T09:34:24.623
2022-05-15T19:29:32.313
113124
113124
[ "nlp", "word-embeddings", "embeddings" ]
Yes, absolutely. First, it's important to understand that word embeddings represent the semantics of a word well because they are trained on the context of the word, i.e. the words close to the target word. This is just another application of the old principle of [distributional semantics](https://en.wikipedia.org/wiki/Distributional_semantics). Character embeddings are usually trained the same way, which means that the embedding vectors also represent the "usual neighbours" of a character. This can have various applications in string similarity, word tokenization, stylometry (representing an author's writing style), and probably more. For example, in languages with accented characters the embedding for `é` would be closely similar to the one for `e`; `m` and `n` would be closer than `x` and `f`.
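A minimal sketch of training such character embeddings with the same distributional idea, using Gensim's `Word2Vec` over character sequences (toy corpus and untuned hyperparameters, so the resulting neighbours are only indicative):
```
from gensim.models import Word2Vec

words = ["café", "cafe", "coffee", "résumé", "resume", "machine", "learning"]
# Treat each word as a "sentence" whose tokens are its characters.
char_sequences = [list(w) for w in words]

model = Word2Vec(char_sequences, vector_size=16, window=2, min_count=1, epochs=200)

print(model.wv.most_similar("e", topn=3))   # characters that occur in similar contexts
```
On a realistically sized corpus, characters that share contexts (such as accented and unaccented variants of the same letter) tend to end up with similar vectors.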
Character Level Embeddings
It completely depends on what you're classifying. Using character embeddings for semantic classification of sentences introduces unnecessary complexity, making the data harder to fit, although using n-grams would help the model deal with word derivatives. Classifying words based on their derivation would be a task that requires character embeddings. If you're asking whether it would be useful to train a model to embed characters like you would with word2vec, then no; in fact, it would probably yield bad results. We use embeddings to implicitly encode that two data points are close together and therefore should be treated as more similar by the model. The letter 'd' shouldn't be semantically closer to 'e' than to 'q'.
110979
1
111147
null
0
1071
I am working with lots of data (we have a table that produces 30 million rows daily). What is the best way to explore it (to do EDA)? Should I take a random fractional slice of the data (100,000 rows), select the first 100,000 rows from the entire dataset, or use the whole dataset? What should I do? Thanks!
Exploratory data analysis (EDA) on large dataset
CC BY-SA 4.0
null
2022-05-16T08:25:09.940
2022-05-19T21:56:05.637
null
null
128577
[ "machine-learning", "deep-learning", "scikit-learn", "pandas", "pyspark" ]
You mentioned data is added daily. A lot of this has to do with how your data is structured and whether recent data is more important than older data. It might be easier to take a random sample from recent data, but if you are looking over all of the dates, you could sample different periods. The statistical answer also has to do with how many variables you are looking at. Practically, you might want to start with a 'reasonable' number of rows that are easy to get, do basic EDA for missing values, and apply rules of thumb like ensuring you have a minimum count for performing things like a regression. Then increase the number to the level you need in order to have a recognizable distribution for all of the variables you are interested in. What you often miss when taking random samples are outliers, so it is always useful to ask the business what they expect the upper and lower ranges to be.
How to choose variables to perform Exploratory Data Analysis
There are a couple of options:
- Iterate over all input features and calculate each one's correlation to the target variable. Gather all these numbers and sort them by absolute value, then take the top 10 or 20 as a chunk of features to start investigating with more attention (absolute value because you care about strong negative correlations just as much as strong positive correlations).
- Train a simple decision tree on the inputs mapping to the output. Once the decision tree is trained, look at the feature importances it uncovered and begin your investigation there. You can repeat this process with a linear regression too.
- Plot all 1-to-1 plots of input variables against the target and manually look through them (this takes more time, as you need to look through as many plots as you have input variables, but it will give you a good understanding of your data once you go through it all).
110999
1
111166
null
2
188
I have a set of email addresses, e.g. guptamols@gmail.com, neharaghav@yahoo.com, rkart@gmail.com, squareyards321@ymail.com... Is it possible to apply ML/mathematics to generate a category (like NER) from the ID (the part before the @)? The problem with a straightforward application of NER is that the emails are not proper English.
- guptamols@gmail.com > Person
- neharaghav@yahoo.com > Person
- rkart@gmail.com > Company
- yardSpace@ymail.com > Company
- AgraTextile@google.com > Place/Company
Entity Embeddings of email address
CC BY-SA 4.0
null
2022-05-16T17:19:01.163
2022-05-20T14:48:04.347
2022-05-17T19:34:31.133
68274
68274
[ "machine-learning", "nlp", "named-entity-recognition" ]
It is possible, but you would need a lot of training data to reach a good result, because there is a wide variety of family and company names. Fortunately, there could be an efficient solution to make a good classification. My advice is to focus on human name recognition on one side and company name recognition on the other, and then apply ML. For human names, there are plenty of datasets available for recognizing family names and first names that you can match against the fields (ex: Gupta is recognized in "guptamols" => Name). For company names, you can use dictionaries in English or any other language to detect a lot of names (ex: textile is recognized in AgraTextile). Once you do this safe classification, you would have a lot of valuable labelled data, with which an NLP model (like BERT; I would recommend a byte-per-byte embedding, as there could be special characters in company names) could learn patterns in order to classify the rest of the unknown data easily. Note: such models give a probability for each case, which can be useful to limit the risk of wrong classification.
Using word embeddings with additional features
My first comment would be that you have to remember that tree-based models are not scale-sensitive, and therefore scaling should not affect the model's performance; so, as you rightly mention, it should be a problem with the feature itself. If you want to scale all your features anyway, you could use MinMaxScaler with the min and max values being the min and max of the GloVe vectors, so that all the features are on the same scale.
111024
1
111029
null
0
16
Here is my code:
```
file_name = ['0a57bd3e-e558-4534-8315-4b0bd53df9d8.jpeg', '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg']
img_id = {}
images = []

for e, i in enumerate(range(len(file_name))):
    img_id['file_name'] = file_name[e]
    images.append(img_id)

print(images)
```
The output is:
```
[{'file_name': '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'}, {'file_name': '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'}]
```
I want it to be:
```
[{'file_name': '0a57bd3e-e558-4534-8315-4b0bd53df9d8.jpeg'}, {'file_name': '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg'}]
```
I don't know why it saves only the last file name in the dictionary.
Can anyone tell me how can I get the following output?
CC BY-SA 4.0
null
2022-05-17T12:40:59.527
2022-05-17T13:18:58.077
null
null
112583
[ "python" ]
You are overwriting the data stored in `img_id` because you are using the same dictionary with the same key (`file_name`). You can either reset the `img_id` variable to an empty dictionary within your for loop or use a simpler list comprehension:
```
file_name = ['0a57bd3e-e558-4534-8315-4b0bd53df9d8.jpeg', '20d721fc-c443-49b2-aece-fd760f13ff7e.jpeg']
images = []

for e, i in enumerate(range(len(file_name))):
    img_id = {}
    img_id['file_name'] = file_name[e]
    images.append(img_id)

# or
[{"file_name": x} for x in file_name]
```
Incorrect output dimension?
The error message is pretty clear... you feed a vector of length 24 to your model, but your model is outputting a vector of length 1. Change:
```
dense3 = Dense(1, activation = 'softmax')(dense2)
```
to:
```
dense3 = Dense(24, activation = 'softmax')(dense2)
```
111046
1
111058
null
1
1106
I am trying to understand RNNs. I have a good sense of how they work in theory, but then in PyTorch you have two extra dimensions in your input data: batch size (number of batches) and sequence length. The model I am working on is a simple one-to-one model: it takes in a letter, then estimates the following letter. The model is provided [here](https://github.com/alperenevrin/text-writing-lstm/blob/main/Character_Level_RNN.ipynb). First, please correct me if I am wrong about the following: batch size is used to divide the data into batches and feed them into the model running in parallel. At least this was the case in regular NNs and CNNs; this way we take advantage of the processing power. It is not "ideal" in the sense that in theory, for an RNN, you just go from one end to the other in an unbroken chain. But I could not find much information on sequence length. From what I understand, it breaks the data into sequences of the length we provide, instead of keeping it as one unbroken chain, and then unrolls the model for the length of that sequence. If it is 50, it calculates the model over a sequence of 50. Let's think about the first sequence. We initialize a random hidden state, the model first does a forward pass on these 50 inputs, then does backpropagation. But my question is: then what happens? Why don't we just continue? What happens when it starts the new sequence? Does it initialize a random hidden state for the next sequence, or does it use the hidden state calculated from the very last entry of the previous sequence? Why do we do this, rather than just having one big sequence? Doesn't this break the continuity of the model? I read somewhere that it is also memory-related: if you put the whole text in as one sequence, the gradient calculation would take up all the memory. Does this mean it resets the gradients after each sequence? Thank you very much for the answers.
What is the purpose of Sequence Length parameter in RNN (specifically on PyTorch)?
CC BY-SA 4.0
null
2022-05-17T20:51:32.943
2022-05-18T06:36:13.410
null
null
135876
[ "lstm", "rnn", "pytorch" ]
- The RNN receives as input a batch of sequences of characters. The output of the RNN is a tensor with sequences of character predictions, of just the same size as the input tensor.
- The number of sequences in each batch is the batch size.
- Every sequence in a single batch must have the same length. In this case, all sequences of all batches have the same length, defined by seq_length.
- Each position of the sequence is normally referred to as a "time step".
- When back-propagating an RNN, you collect gradients through all the time steps. This is called "back-propagation through time (BPTT)".
- You could have a single super long sequence, but the memory required for that would be large, so normally you must choose a maximum sequence length.
- To somewhat mitigate the need to cut the sequences, people normally apply something called "truncated BPTT". That is what the code you linked uses. It consists of arranging the sequences in the batches so that each sequence in the next batch is the continuation of the text from the corresponding sequence in the previous batch, together with reusing the last hidden state of the previous batch as the initial hidden state of the next one.
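A skeletal PyTorch sketch of truncated BPTT (hypothetical model sizes and dummy data, not the linked notebook's code): the hidden state of one batch initializes the next one, but it is detached so that gradients only flow through the current seq_length steps:
```
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical character-level model: embedding -> LSTM -> logits over the vocabulary.
vocab_size, hidden_size, batch_size, seq_length = 65, 128, 32, 50
embed = nn.Embedding(vocab_size, hidden_size)
lstm = nn.LSTM(hidden_size, hidden_size, batch_first=True)
head = nn.Linear(hidden_size, vocab_size)
params = list(embed.parameters()) + list(lstm.parameters()) + list(head.parameters())
opt = torch.optim.Adam(params)

hidden = None
for step in range(10):   # stand-in for iterating over consecutive batches of the text
    x = torch.randint(vocab_size, (batch_size, seq_length))   # dummy input characters
    y = torch.randint(vocab_size, (batch_size, seq_length))   # dummy next-character targets
    out, hidden = lstm(embed(x), hidden)
    loss = F.cross_entropy(head(out).reshape(-1, vocab_size), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Keep the state to initialize the next batch, but cut the gradient history
    # so back-propagation only spans the current seq_length steps (truncated BPTT).
    hidden = tuple(h.detach() for h in hidden)
```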
Can bidirectional RNN use variable sequence length?
The short answer is no, a bidirectional architecture will still take in a variable sequence length. To understand why, you should understand how padding works. For example, let's say you are implementing a bidirectional LSTM-RNN in tensorflow on variable length time series data for multiple subjects. The input is a 3D array with shape: `[n_subjects, [n_features, [n_timesteps...] ...] ...]` so to ensure that the array has consistent dimensions, you pad the other subject's features up to the length of the subject with features measured for the longest period of time. Let's say subject 1 has one feature with `values = [22,20,19,21,33,22,44,21,19,26,27]` measured at `times = [0,1,2,3,4,5,6,7,8,9,10]`. subject 2 has one feature with `values = [21,12,22,30,13,42,20]` measured at `times = [0,1,2,3,4,5,6]`. You would pad features for Subject 2 by extending the array so that the `padded_values = [21,12,22,30,13,42,20,0,0,0,0]` at `times = [0,1,2,3,4,5,6,7,8,9,10]`, then do the same thing for every subsequent subject. This means the number of timesteps for each subject can be variable, and the merge you refer to occurs with the dimension for that particular subject. Below is an example of a bidirectional LSTM-RNN architecture for a model that predicts sleep stages for different subjects using biometric features measured over variable lengths of time. [](https://i.stack.imgur.com/N7u3w.png)
111047
1
111305
null
-1
150
I am sorry if this is a well-known phenomenon but I can't quite wrap my head around this. I have a related question: [How To Develop Cluster Models Where the Clusters Occur Along Subsets of Dimensions in Multidimensional Data?](https://datascience.stackexchange.com/questions/110812/how-to-develop-cluster-models-where-the-clusters-occur-along-subsets-of-dimensio). There are good answers for feature selection and cluster metrics but I think this phenomenon deserves special attention. I have simulated 3 clusters along 1 dimension, then simulated 3 clusters along 2 dimensions, and then combined them into a dataset with all 3 dimensions. My hope was that cluster algorithms would identify the 3 clusters along dimension 1 and the 3 clusters along dimensions 2 and 3, for a total of 6 clusters. The cluster algorithms do not correctly identify the 6 clusters. When I visualize the simulated data in 3 dimensions, there are 9 apparent clusters instead of the 6 that I simulated. Can someone explain why two sets of independent, lower-dimensional clusters form apparent clusters in a higher-dimensional space? I am concerned about the impact of this phenomenon when developing cluster models with real data, if independent clusters along subsets of dimensions form apparent but presumably misleading clusters in higher dimensions.

UPDATE: lpounng has described how actual clusters can result in apparent clusters. I am adding a bounty in the hopes that someone can describe this problem more canonically and perhaps describe a solution. Consider another example. I have simulated 2 clusters: persons with high blood sugar and high blood pressure, and persons with normal blood sugar and normal blood pressure. I have simulated 3 other unrelated clusters: persons with no injuries, a medium number of injuries, and a high number of injuries. There are 5 actual clusters and 6 apparent clusters. KMeans finds the 6 apparent clusters correctly. The problem is that the KMeans clusters misleadingly imply that blood sugar, blood pressure, and injury cluster together. Is there a solution to this problem? Brian Spiering recommended the [https://github.com/danilkolikov/fsfc](https://github.com/danilkolikov/fsfc) library but I can't get the algorithms to distinguish the actual clusters from the apparent clusters.
```
np.random.seed(20220519)

b_hh = np.random.normal(size = (2000, 2)) + [10, 150] # High blood sugar and high blood pressure cluster.
b_ll = np.random.normal(size = (4000, 2)) + [ 2, 100] # Normal blood sugar and normal blood pressure cluster.
b = np.concatenate((b_hh, b_ll), axis = 0)
np.random.shuffle(b)

i_h = np.random.normal(size = ( 100, 1)) + 30 # High injury cluster.
i_m = np.random.normal(size = ( 900, 1)) + 15 # Medium injury cluster.
i_l = np.random.normal(size = (5000, 1)) + 0  # No injury cluster.
i = np.concatenate((i_h, i_m, i_l), axis = 0)
np.random.shuffle(i)

X = np.concatenate((b, i), axis = 1)
```
ORIGINAL CODE:

Imports:
```
import numpy as np, matplotlib.pyplot as plt, plotly.graph_objects as go, plotly.io as pio
pio.renderers.default = 'browser'
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans
```
Function to plot in 3 dimensions with plotly:
```
def c_3D(algorithm, data, o = 0.25, x_name = 'X Axis', y_name = 'Y Axis', z_name = 'Z Axis'):
    m = algorithm
    traces = []
    for i in np.unique(m):
        trace = go.Scatter3d(
            x = data[m == i, 0], y = data[m == i, 1], z = data[m == i, 2],
            name = 'Cluster ' + str(i), mode = 'markers',
            marker = dict(size = 5, opacity = o, color = i))
        traces.append(trace)
    layout = go.Layout(autosize = False, width = 1000, height = 1000,
                       margin = dict(l = 0, r = 0, b = 0, t = 0),
                       scene = dict(xaxis_title = x_name, yaxis_title = y_name, zaxis_title = z_name))
    fig = go.Figure(data = traces, layout = layout)
    fig.show()
```
Simulate data:
```
np.random.seed(20220516)

# Simulate 3 clusters along 1 dimension.
X_1, Y_1 = make_blobs(n_samples = 5000, n_features = 1, centers = 3, cluster_std = 0.3)

# Simulate 3 clusters along 2 dimensions.
X_2, Y_2 = make_blobs(n_samples = 5000, n_features = 2, centers = 3, cluster_std = 0.3)

# Combine dimensions.
X = np.concatenate((X_1, X_2), axis = 1)
print(X.shape)
```
Visualize the 3 clusters along dimension 1:
```
plt.scatter(X[:, 0], X[:, 0])
```
[](https://i.stack.imgur.com/w5Ivf.png)
Visualize the 3 clusters along dimensions 2 and 3:
```
plt.scatter(X[:, 1], X[:, 2])
```
[](https://i.stack.imgur.com/iEMMA.png)
Visualize the clusters in 3 dimensions:
```
def SetColor(c):
    if c == 0:
        return 'black'

c_3D(np.array(list(map(SetColor, np.zeros(X.shape[0])))), X)
```
[](https://i.stack.imgur.com/OoE0v.png)
Why Do a Set of 3 Clusters Across 1 Dimension and a Set of 3 Clusters Across 2 Dimensions Form 9 Apparent Clusters in 3 Dimensions?
CC BY-SA 4.0
null
2022-05-17T21:03:33.647
2022-05-25T18:45:17.150
2022-05-19T21:15:13.677
58488
58488
[ "python", "clustering", "dimensionality-reduction" ]
It appears that clusters can form geometrically in higher-dimensional space with any dimensions that have clusters in lower-dimensional spaces. These apparent clusters may not reflect the actual clustering processes. I have been able to get the results I expect with the idea that dimensions with actual clusters should correlate with each other. I apply clustering algorithms to those subsets of the dimensions that correlate with each other. Simulate blood sugar, blood pressure, and injury clusters: ``` np.random.seed(20220519) b_hh = np.random.normal(size = (2000, 2)) + [10, 150] # High blood sugar and high blood pressure cluster. b_ll = np.random.normal(size = (4000, 2)) + [ 2, 100] # Normal blood sugar and normal blood pressure cluster. b = np.concatenate((b_hh, b_ll), axis = 0) np.random.shuffle(b) i_h = np.random.normal(size = ( 100, 1)) + 30 # High injury cluster. i_m = np.random.normal(size = ( 900, 1)) + 15 # Medium injury cluster. i_l = np.random.normal(size = (5000, 1)) + 0 # No injury cluster. i = np.concatenate((i_h, i_m, i_l), axis = 0) np.random.shuffle(i) X = np.concatenate((b, i), axis = 1) ``` Compute correlation coefficients between dimensions: ``` from scipy.stats import pearsonr, spearmanr print(pearsonr(X[:, 0], X[:, 1])) print(pearsonr(X[:, 0], X[:, 2])) print(pearsonr(X[:, 1], X[:, 2])) print(spearmanr(X[:, 0], X[:, 1])) print(spearmanr(X[:, 0], X[:, 2])) print(spearmanr(X[:, 1], X[:, 2])) ``` I imagine this solution may not work for data where linear correlations do not make sense, such as data that favour density-based clustering algorithms.
Why do the Silhouette Score and optimal number of clusters change when using 2D and 3D data?
Yes, it can happen. In fact it is quite normal, since the clusters found in 2D differ from those found in 3D as more or less information is added to the data (by having more dimensions). This is a by-product of the [curse of dimensionality](https://en.wikipedia.org/wiki/Curse_of_dimensionality). Adding as much relevant information as possible would make the clusters closer to the underlying groups. So 3D would be better than 2D. This is a general observation. There are of course cases where projecting data onto a low-dimensional manifold is indeed better, since it can eliminate noise and/or capture specific attributes better than clustering on all (possibly irrelevant) dimensions (another by-product of the curse of dimensionality). If the relevant information in your data has low dimensionality, but this information is correlated along many dimensions in the original data, then a feature extraction method is needed in order to capture the low-dimensional relevant information from the original data (e.g. PCA, ICA, ...). For some references along this direction see for example: - How to cluster in High Dimensions - An investigation of K-means clustering to high and multi-dimensional biological data - How do I know my k-means clustering algorithm is suffering from the curse of dimensionality?
111048
1
111051
null
0
61
I have a set of data with some numerical features and some string data. The string data is essentially a set of classes that are not inherently related. For example: ``` Sample_1,0.4,1.2,kitchen;living_room;bathroom Sample_2,0.8,1.0,bedroom;living_room Sample_3,0.5,0.9,None ``` I want to implement a classification method with these string-subclasses as a feature; however, I don't want to have them be numerically related or have the comparisons be directly based on the string itself. Additionally, if samples have no data in this column they should not be inherently related. Is there a way to implement these features as "classes" in a way that doesn't rely on a distance metric? I originally wanted to try converting the classes directly to numerical data, but I am worried that class 1 would then arbitrarily be considered more closely related to class 2 than to class 43.
Using Sci-Kit Learn Clustering and/or Random-Forest Classification on String Data with Multiple Sub-Classifications
CC BY-SA 4.0
null
2022-05-17T21:16:36.403
2022-05-18T00:28:59.683
2022-05-17T21:18:16.077
135879
135879
[ "machine-learning", "python", "scikit-learn", "machine-learning-model", "multiclass-classification" ]
You can use something called "dummy encoding" (also known as one-hot encoding): each class gets its own binary indicator column, so no ordering or distance between the classes is implied, and a sample with none of the classes simply gets all zeros.
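As a hedged sketch of what this could look like for the semicolon-separated labels in the question (the column names below follow the example data; MultiLabelBinarizer is one of several ways to do it):

```python
# Sketch: turn the semicolon-separated room labels into independent 0/1 indicator columns,
# so no artificial ordering or distance between classes is introduced.
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

df = pd.DataFrame({
    "feat_1": [0.4, 0.8, 0.5],
    "feat_2": [1.2, 1.0, 0.9],
    "rooms": ["kitchen;living_room;bathroom", "bedroom;living_room", None],
})

# Split the string into a list of labels; missing values become an empty list (related to nothing).
labels = df["rooms"].apply(lambda s: s.split(";") if isinstance(s, str) else [])

mlb = MultiLabelBinarizer()
dummies = pd.DataFrame(mlb.fit_transform(labels), columns=mlb.classes_, index=df.index)

X = pd.concat([df.drop(columns="rooms"), dummies], axis=1)
print(X)
```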
strings as features in decision tree/random forest
In most of the well-established machine learning systems, categorical variables are handled naturally. For example, in R you would use factors, in WEKA you would use nominal variables. This is not the case in scikit-learn. The decision trees implemented in scikit-learn use only numerical features and these features are always interpreted as continuous numeric variables. Thus, simply replacing the strings with a hash code should be avoided, because, being considered a continuous numerical feature, any coding you use will induce an order which simply does not exist in your data. For example, coding ['red','green','blue'] with [1,2,3] would produce weird things like 'red' being lower than 'blue', and if you average a 'red' and a 'blue' you will get a 'green'. Another more subtle example might happen when you code ['low', 'medium', 'high'] with [1,2,3]. In the latter case the ordering might happen to make sense; however, some subtle inconsistencies might occur when 'medium' is not in the middle of 'low' and 'high'. Finally, the answer to your question lies in coding the categorical feature into multiple binary features. For example, you might code ['red','green','blue'] with 3 columns, one for each category, having 1 when the category matches and 0 otherwise. This is called one-hot-encoding, binary encoding, one-of-k-encoding or whatever. You can check the documentation here for [encoding categorical features](http://scikit-learn.org/stable/modules/preprocessing.html) and [feature extraction - hashing and dicts](http://scikit-learn.org/stable/modules/feature_extraction.html#dict-feature-extraction). Obviously one-hot-encoding will expand your space requirements and sometimes it hurts the performance as well.
111122
1
111228
null
0
193
I am exploring using CNNs for multi-class classification. My model details are: [](https://i.stack.imgur.com/WK57J.png) and the training/testing accuracy/loss: [](https://i.stack.imgur.com/rT500.png) As you can see from the image, the accuracy jumped from 0.08 to 0.39 to 0.77 to 0.96 in few epochs. I have tried changing the details of the model (number of filters, kernel size) but I still note the same behavior and I am not experienced in deep learning. Is this behavior acceptable? Am I doing something wrong? To give some context. My dataset contains power traces for a side channel attack on EdDSA implementation. Each trace has 1000 power readings.
How to verify if the behavior of CNN model is correct?
CC BY-SA 4.0
null
2022-05-19T09:13:18.327
2022-05-23T11:10:58.530
2022-05-20T16:33:08.917
29169
135950
[ "tensorflow", "cnn", "accuracy" ]
Do you get the same results in the following epochs? If yes, your learning rate might be too high: are you using an Adam optimizer? It could also be caused by other hyperparameters, for example: - A dropout rate that is too high, which zeroes out too many of your network's activations. If it is set to 0.5 or more, you could try a lower value like 0.1 or 0.2. - A bad weight initialization (use random or Xavier/Glorot initialization for good results, for instance). To be more specific, I would need to read part of the code.
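For illustration only (the layer sizes and number of classes below are placeholders, not the poster's actual architecture), lowering the Adam learning rate and the dropout rate in Keras could look like this:

```python
# Illustrative only: a smaller Adam learning rate, a milder dropout rate and Xavier/Glorot init.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu", input_shape=(1000, 1)),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dropout(0.2),                                    # try 0.1-0.2 instead of 0.5+
    tf.keras.layers.Dense(16, activation="relu",
                          kernel_initializer="glorot_uniform"),      # Xavier/Glorot initialization
    tf.keras.layers.Dense(10, activation="softmax"),                 # 10 classes as a placeholder
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # smaller step size
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```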
Checking trained CNN on the images
`torch.argmax` has an extra argument `dim` which you can specify such that the maximum value is taken over a specific dimension. If you specify the dimension which represents the number of images it will return an array of indices where each value is for one image. For example: ``` import torch # 3 images with 5 classes t = torch.randn(3, 5) # tensor([[-1.2917, 1.3740, 0.6967, -0.0575, 0.3702], # [ 0.5428, 1.0863, 0.3951, 1.8535, 1.0926], # [ 0.5865, 0.8522, -0.6858, 0.5297, -0.1320]]) # get the argmax over the first dimension, which specifies the number of images torch.argmax(t, dim=1) # tensor([1, 3, 1]) ```
111128
1
111139
null
1
506
I'm trying to describe mathematically how stochastic gradient descent could be used to minimize the binary cross entropy loss. The typical description of SGD that I can find online is: $\theta = \theta - \eta *\nabla_{\theta}J(\theta,x^{(i)},y^{(i)})$ where $\theta$ is the parameter to optimize the objective function $J$ over, and x and y come from the training set. Specifically the $(i)$ indicates that it is the i-th observation from the training set. For binary cross entropy loss, I am using the following definition (following [https://arxiv.org/abs/2009.14119](https://arxiv.org/abs/2009.14119)): $$ L_{tot} = \sum_{k=1}^K L(\sigma(z_k),y_k)\\ L = -yL_+ - (1-y)L_- \\ L_+ = log(p)\\ L_- = log(1-p)\\ $$ where $\sigma$ is the sigmoid function, $z_k$ is a prediction (one digit) and $y_k$ is the true value. To better explain this, I am training my model to predict a 0-1 vector like [0, 1, 1, 0, 1, 0], so it might predict something like [0.03, 0.90, 0.98, 0.02, 0.85, 0.1], which then means that e.g. $z_3 = 0.98$. For combining these definitions, I think that the binary cross entropy loss is minimized by using the parameters $z_k$ (as this is what the model tries to learn), so that in my case $\theta = z$. Then in order to combine the equations, what I would think makes sense is the following: $z = z - \eta*\nabla_zL_{tot}(z^{(i)},y^{(i)})$ However, I am unsure about the following: - One part of the formula contains $z$, and another part contains $z^{(i)}$; this doesn't make much sense to me. Should I use only $z$ everywhere? But then how would it be clear that we have prediction $z$ for the true $y^{(i)}$? - In the original SGD formula there is also an $x^{(i)}$. Since this is not part of the binary cross entropy loss function, can I just omit this $x^{(i)}$? Any help with the above two points and finding the correct equation for SGD for binary cross entropy loss would be greatly appreciated.
Understanding SGD for Binary Cross-Entropy loss
CC BY-SA 4.0
null
2022-05-19T14:03:18.843
2022-05-19T18:07:05.317
null
null
135800
[ "machine-learning", "gradient-descent", "multilabel-classification", "mathematics", "sgd" ]
You are confusing a number of definitions. The loss definition you provided is correct, yet the terms you used are not precise. I'll try to make the following concepts clearer for you: parameters, predictions and logits. I want you to focus on the logit concept, which is I believe the issue here. First, binary classification is a learning task where we want to predict which of two classes 0 (negative class) and 1 (positive class) an example $x$ comes from. Binary cross entropy is a loss function that is frequently used for such tasks. And, to use this loss function, the model is expected to output one real number $\hat{y} \in [0,1]$ for each example $x$. $\hat{y}$ represents the probability that the example is from the positive class 1. I'd rather write the loss as follows: $$\begin{align} L &= \sum_{i=1}^n l(\hat{y_i}, y_i)\\ l(\hat{y_i}, y_i) &= -y_i log(\hat{y_i}) -(1-y_i) log(1-\hat{y_i}) \end{align}$$ Now, the way our predictions $\hat{y}$ are computed depends on the family of models we choose to use. For example, if you use a logistic regression model, the model computes predictions as follows $\hat{y} = \sigma(z)$, where $z \in \mathbb{R}$ is called the logit (not the prediction) and $\sigma$ is the sigmoid function. In logistic regression, the logit is a linear function of your features $z = \theta x$, where $\theta$ is the parameter vector (which is independent from your set of examples) and $x$ is the example vector. So, $$\hat{y_i} = \sigma(z_i) = \sigma(\theta x_i) $$ In this case, the loss becomes: $$\begin{align} L &= \sum_{i=1}^n -y_i log(\hat{y_i}) -(1-y_i) log(1-\hat{y_i}) \\ &= \sum_{i=1}^n -y_i log(\sigma(\theta x_i) ) -(1-y_i) log(1-\sigma(\theta x_i) ) \end{align}$$ Now, compute the gradient of $L$ with respect to $\theta$ and plug it in your SGD update rule. To summarize, predictions are related to logits by the sigmoid function, and logits are related to example features by model parameters. I used logistic regression to simplify the discussion. Using a neural network, the relationship between logits and model parameters becomes more complicated. Last, I want to clarify that SGD can be used with a variety of models, so when you say it contains $x_i$ in its formula, you need to specify which family of models you are talking about.
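To make the link between the update rule and the loss concrete, here is a minimal NumPy sketch of SGD for logistic regression on synthetic data (the data, learning rate and number of epochs are all illustrative; the only thing taken from the derivation above is that the per-example gradient is $(\sigma(\theta x_i) - y_i)\,x_i$):

```python
# Minimal sketch: stochastic gradient descent on the binary cross-entropy loss
# for logistic regression. The per-example gradient is (sigmoid(theta @ x) - y) * x.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                        # synthetic features
true_theta = np.array([1.5, -2.0, 0.5])
y = (X @ true_theta > 0).astype(float)               # synthetic binary labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

theta = np.zeros(3)
eta = 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):                # one example at a time -> "stochastic"
        y_hat = sigmoid(X[i] @ theta)
        grad = (y_hat - y[i]) * X[i]                 # gradient of the loss for example i
        theta -= eta * grad                          # SGD update

print(theta)   # should roughly point in the direction of true_theta
```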
Cross-entropy loss explanation
The [cross entropy](https://en.wikipedia.org/wiki/Cross_entropy) formula takes in two distributions, $p(x)$, the true distribution, and $q(x)$, the estimated distribution, defined over the discrete variable $x$ and is given by ## $$H(p,q) = -\sum_{\forall x} p(x) \log(q(x))$$ For a neural network, the calculation is independent of the following: - What kind of layer was used. - What kind of activation was used - although many activations will not be compatible with the calculation because their outputs are not interpretable as probabilities (i.e., their outputs are negative, greater than 1, or do not sum to 1). Softmax is often used for multiclass classification because it guarantees a well-behaved probability distribution function. For a neural network, you will usually see the equation written in a form where $\mathbf{y}$ is the ground truth vector and $\mathbf{\hat{y}}$ (or some other value taken direct from the last layer output) is the estimate. For a single example, it would look like this: ## $$L = - \mathbf{y} \cdot \log(\mathbf{\hat{y}})$$ where $\cdot$ is the inner product. Your example ground truth $\mathbf{y}$ gives all probability to the first value, and the other values are zero, so we can ignore them, and just use the matching term from your estimates $\mathbf{\hat{y}}$ $L = -(1\times log(0.1) + 0 \times \log(0.5) + ...)$ $L = - log(0.1) \approx 2.303$ An important point from comments > That means, the loss would be same no matter if the predictions are $[0.1, 0.5, 0.1, 0.1, 0.2]$ or $[0.1, 0.6, 0.1, 0.1, 0.1]$? Yes, this is a key feature of multiclass logloss, it rewards/penalises probabilities of correct classes only. The value is independent of how the remaining probability is split between incorrect classes. You will often see this equation averaged over all examples as a cost function. It is not always strictly adhered to in descriptions, but usually a loss function is lower level and describes how a single instance or component determines an error value, whilst a cost function is higher level and describes how a complete system is evaluated for optimisation. A cost function based on multiclass log loss for data set of size $N$ might look like this: ## $$J = - \frac{1}{N}\left(\sum_{i=1}^{N} \mathbf{y_i} \cdot \log(\mathbf{\hat{y}_i})\right)$$ Many implementations will require your ground truth values to be one-hot encoded (with a single true class), because that allows for some extra optimisation. However, in principle the cross entropy loss can be calculated - and optimised - when this is not the case.
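A quick NumPy check of the worked example in this answer (the two prediction vectors are the ones quoted from the comments):

```python
# Cross entropy only rewards/penalises the probability assigned to the true class.
import numpy as np

y      = np.array([1, 0, 0, 0, 0])                  # one-hot ground truth
y_hat1 = np.array([0.1, 0.5, 0.1, 0.1, 0.2])
y_hat2 = np.array([0.1, 0.6, 0.1, 0.1, 0.1])

def cross_entropy(y_true, y_pred):
    return -np.sum(y_true * np.log(y_pred))

print(cross_entropy(y, y_hat1))   # ~2.303
print(cross_entropy(y, y_hat2))   # ~2.303 as well: the split among wrong classes does not matter
```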
111148
1
111149
null
0
284
I want to implement an FPGA or other hardware implementation of a Keras model. As a first step, I want to find the number of mathematical operations required to evaluate a predicted output given a model. The model below is a two-class classifier and a sample of input is a vector of size 232x1. The model is: ``` model.add(keras.layers.Dense(5, input_dim=232, activation='relu')) model.add(keras.layers.Dense(1, activation='sigmoid')) ``` The question is: given the model above, how many mathematical operations (plus, minus, multiplication, division) are required to find the output value? In my understanding, since there are 5 output neurons in the first layer we have 5×232 weights, so we need to calculate 5×232 multiplications in the first stage, and then 5 ReLU activation calculations. As there are no other layers except the last layer, which is just the output, we need only 5 multiplications, 5 sigmoid calculations, and 5 additions. Is the above approach correct?
How to find the number of operations (multiplication, addition, etc.) required given a Keras model?
CC BY-SA 4.0
null
2022-05-19T23:24:47.203
2022-05-20T10:47:49.110
null
null
135976
[ "keras", "hardware" ]
To compute the number of elementary operations, you need to understand what is happening under the hood. Let $x$ be an input vector of size $n$. Given such a vector, a dense layer of $m$ units with an activation function $f$ will execute the following operation: $$a = f(Wx + b)$$ $W$ is the weight matrix associated with the dense layer (its size is $m \times n$) and $b$ is the bias vector (of size $m$). We can derive from this formulation the following: - The number of multiplications is $mn$ (this comes from the definition of the product $Wx$). - The number of additions is $m(n-1) + m = mn$, where $m(n-1)$ comes from the definition of $Wx$ again, and $m$ comes from adding the bias vector $b$. Then, $f$ is applied $m$ times (once on each component of the resulting vector $Wx +b$). Applying this to your example: - Layer 1 does $5 \times 232 = 1160$ multiplications and $1160$ additions, and applies $ReLU$ 5 times (because $m=5$, $n=232$ and $f=ReLU$), - Layer 2 does $1 \times 5 = 5$ multiplications and $5$ additions, and applies $\sigma$ the sigmoid function 1 time only (because $m=1$, $n=5$ and $f=\sigma$) The total number of multiplications is: $1165$ and the total number of additions is: $1165$.
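A small helper that applies this counting rule to the two Dense layers from the question (the layer sizes are hard-coded from the question rather than read out of a Keras model object):

```python
# Count multiplications/additions per dense layer: mults = adds = m * n (m units, n inputs),
# plus m activation function evaluations.
def dense_layer_ops(n_inputs, n_units):
    return {"mults": n_units * n_inputs, "adds": n_units * n_inputs, "activations": n_units}

layers = [(232, 5), (5, 1)]            # (n_inputs, n_units) for Dense(5) then Dense(1)
totals = {"mults": 0, "adds": 0}
for n_in, n_units in layers:
    ops = dense_layer_ops(n_in, n_units)
    totals["mults"] += ops["mults"]
    totals["adds"] += ops["adds"]
    print((n_in, n_units), ops)

print(totals)   # {'mults': 1165, 'adds': 1165}
```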
Amount of multiplications in a neural network model
Essentially you are correct, there are a lot of calculations necessary to process inputs and train neural networks. You have some terminology a bit wrong or vague. E.g. > In a feedforward neural network each neuron of the first layer multiplied with all the neurons of the second layer. The neurons do not multiply together directly. A common way to write the equation for a neural network layer, calling input layer values $x_i$ and first hidden layer values $a_j$, where there are N inputs, might be $$a_j = f( b_j + \sum_{i=1}^{N} W_{ij}x_{i})$$ where $f()$ is the activation function, $b_j$ is the bias term, and $W_{ij}$ is the weight connecting $a_j$ to $x_i$. So if you have $M$ neurons in the hidden layer, you have $N\times M$ multiplications and $M$ separate sums/additions over $N+1$ terms, and $M$ applications of the transfer function $f()$ > And in addition to a forward pass in a typical Neural network they also have a backward pass that because of my calculations doing earlier they are 1,800 derivatives (gradient) for the entire backward pass. It doesn't work quite so directly, and there is a small factor of more calculations involved (you do not calculate each derivative with a single multiplication, often there are a few, some results are re-used, and other operations may be involved). However, yes, you do need to calculate a derivative for each weight and bias term, and there are roughly that number of weights in your network that require the calculations done. Your suggested numbers are actually quite small compared to typical neural networks used for image problems. These typically perform millions of computations for a forward pass. > That's why a CPU computer takes so long to train a model because it has to do about 3,600 (1,800 + 1,800 ) mathematical operations. Actually that is a trivial number of calculations for a modern CPU, and would be done in less than a millisecond. But multiply this out by a few factors: - You must do this for each and every example in the training data - Your example network is small, think bigger - This does not include the activation function calculations - typically slower than a multiply - Your rough estimate ignores some of the necessary operations, so as a guesstimate, multiply the number of CPU-level operations by 3 or 4 from your analysis. . . . and the number of operations does start to get to values where CPUs can take hours or days to perform training tasks in practice.
111176
1
111183
null
1
384
I'm given a large amount of documents upon which I should perform various kinds of analysis. Since the documents are to be used as a foundation of a final product, I thought about building a graph out of this text corpus, with each document corresponding to a node. One way to build a graph would be to use models such as USE to first find text embeddings, and then form a link between two nodes (texts) whose similarity is beyond a given threshold. However, I believe it would be better to utilize an algorithm which is based on plain text similarity measures, i.e., an algorithm which does not "convert" the texts into embeddings. Same as before, I would form a link between two nodes (texts) if their text similarity is beyond a given threshold. Now, the question is: what is the simplest way to measure similarity of two texts, and what would be the more sophisticated ways? I thought about first extracting the keywords out of the two texts, and then calculate Jaccard Index. Any idea on how this could be achieved is highly welcome. Feel free to post links to papers that address the issue. NB: I would also appreciate links to Python libraries that might be helpful in this regard.
Building a graph out of a large text corpus
CC BY-SA 4.0
null
2022-05-21T07:19:36.817
2022-05-28T10:19:05.627
null
null
133072
[ "nlp", "text-mining", "similarity", "graphs", "similar-documents" ]
It looks to me like [topic modeling](https://en.wikipedia.org/wiki/Topic_model) methods would be a good candidate for this problem. This option has several advantages: it's very standard with many libraries available, and it's very efficient (at least the standard [LDA](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) method) compared to calculating pairwise similarity between documents. A topic model is made of: - a set of topics, represented as a probability distribution over the words. This is typically used to represent each topic as a list of top representative words. - for each document, a distribution over topics. This can be used to assign the most likely topic and consider the clusters of documents by topic, but it's also possible to use a more subtle similarity between the distributions. The typical difficulty with LDA is picking the number of topics. A better and less well-known alternative is [HDP](https://en.wikipedia.org/wiki/Hierarchical_Dirichlet_process), which infers the number of topics itself. It's less standard but there are a few implementations (like [this one](https://radimrehurek.com/gensim/models/hdpmodel.html)) apparently. There are also more recent neural topic models using embeddings (for example [ETM](https://github.com/adjidieng/ETM)). --- Update Actually I'm not really convinced by the idea of converting the data into a graph: unless there is a specific goal to this, analyzing the graph version of a large amount of text data is not necessarily simpler. In particular it should be noted that any form of clustering on the graph is unlikely (in general) to produce better results than topic modelling: the latter produces a probabilistic clustering based on the words in the documents, and this usually offers a quite good way to summarize and group the documents. In any case, it would be possible to produce a graph based on the distribution over topics by document (this is the most natural way, there might be others). Calculating a pairwise similarity between these distributions would represent closely related pairs of documents with a high-weight edge, and conversely.
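As a rough sketch of the standard LDA route (gensim is one common implementation; the tiny tokenised corpus below is obviously a placeholder for the real documents):

```python
# Minimal gensim LDA sketch: documents -> bag-of-words -> topics -> per-document topic distributions.
from gensim import corpora
from gensim.models import LdaModel

docs = [["graph", "node", "edge", "cluster"],
        ["topic", "model", "word", "distribution"],
        ["graph", "edge", "similarity", "threshold"]]   # placeholder tokenised documents

dictionary = corpora.Dictionary(docs)
bow_corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(bow_corpus, num_topics=2, id2word=dictionary, passes=10, random_state=0)

print(lda.print_topics())                        # top words per topic
print(lda.get_document_topics(bow_corpus[0]))    # topic distribution of the first document
```

The per-document distributions from `get_document_topics` are what you would compare (or threshold) if you still wanted to build a graph afterwards.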
Build a corpus for machine translation
What you would typically do in your case is to apply a sentence alignment tool. Some popular options for that are: - hunalign: a classical tool that relies on a bilingual dictionary. - bleualign: it aligns based on the BLEU score similarity - vecalign: it is based on sentence embeddings, like LASER's. I suggest you take a look at the preprocessing applied for the ParaCrawl corpus. In the [article](https://www.aclweb.org/anthology/2020.acl-main.417/) you can find an overview of the most popular methods for each processing step. A different option altogether, as you suggest, is to translate at the document level. However, most NMT models are constrained in the length of the input text they accept, so if you go for document-level translation, you must ensure that your NMT system can handle such very long inputs. An example of NMT system that can be used for document-level NMT out of the box is [Marian NMT](https://marian-nmt.github.io/) with its gradient-checkpointing feature.
111197
1
111202
null
1
235
Below is a picture which shows the error of an ensemble classifier. Can someone help me understand the notation? [](https://i.stack.imgur.com/QWK55.png) What does it mean to have (25 and i) in brackets, and what is ε^1: is it the error of the first classifier, or the error rate raised to the power i? Can someone explain this formula?
What is meant by this notation for ensemble classifier error rate
CC BY-SA 4.0
null
2022-05-22T06:40:04.583
2022-05-22T11:12:48.283
null
null
135268
[ "classification", "ensemble-modeling", "notation" ]
$\varepsilon^i$ is the error rate raised to the power i. So for each value i, the formula calculates the probability of i classifiers classifying a sample incorrectly, so for i=13 we have: $$e_{13\ wrong} = {25 \choose 13} \times \varepsilon^{13} \times {(1-\varepsilon)}^{12}$$ Assuming $\varepsilon = 35\%$, and calculating the binomial coefficient gives us: $$e_{13\ wrong} = 5,200,300 \times 0.35^{13} \times 0.65^{12} = 0.035$$ Repeat this for $i = 14, 15, ... , 25$, then sum all the results to get the final answer.
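The same calculation in a few lines, assuming (as the answer does) 25 independent base classifiers with error rate ε = 0.35 and a majority vote that is wrong when 13 or more of them are wrong:

```python
# Probability that the 25-classifier majority vote is wrong, under independent errors of rate eps.
from scipy.stats import binom

eps, n = 0.35, 25
print(binom.pmf(13, n, eps))                            # ~0.035, the i = 13 term computed above
ensemble_error = sum(binom.pmf(i, n, eps) for i in range(13, n + 1))
print(ensemble_error)                                   # sum over i = 13..25
```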
Understanding AC_errorRate loss function
From my understanding, this loss type means: - you define a threshold percentage error (let’s say 2%) - for each true value y, the desired prediction should be between y + 0.02*y and y - 0.02*y - the percentage of predicted values fulfilling the rule above contributes to the “inliers”, i.e., the good predictions This idea reminds me of what [RANSAC](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.RANSACRegressor.html) does
111217
1
111220
null
1
113
I'm still new to machine learning. Currently I'm creating an anomaly detection model for flight data. It is multivariate time series data that includes the timestamp, latitude, longitude, velocity and altitude of the aircraft. I'm splitting the data into train and test with an 80% ratio. I used a Keras LSTM autoencoder to do the anomaly detection. So here's my code: ``` def create_sequence(data, time_step = None): Xs = [] for i in range (len(data) - time_step): Xs.append(data[i:(i + time_step)]) return np.array(Xs) # pre-process to split the data dfXscaled, scalerX = scaledf(df, normaltype=normalization) num_train = int(df.shape[0]*ratio) values_dataset = dfXscaled.values train = values_dataset[:num_train, :] test = values_dataset[num_train:, :] # sequence input data [sample, time step, features] train_input = create_sequence(train, time_step = time_step) test_input = create_sequence(test, time_step = time_step) train_time = index_time.index[:num_train] test_time = index_time.index[num_train:] # model model_arch = [] last_layer = num_layers - 1 for x in range(num_layers): if x == last_layer: model_arch.append(tf.keras.layers.LSTM(num_nodes, activation='relu', return_sequences=True, dropout = dropout)) else: model_arch.append(tf.keras.layers.LSTM(num_nodes, activation='relu', input_shape=(time_step, 4), dropout = dropout)) model_arch.append(tf.keras.layers.RepeatVector(time_step)) model_arch.append(tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(4))) model = tf.keras.models.Sequential(model_arch) opt= tf.keras.optimizers.SGD(learning_rate=learning_rate) model.compile(loss=tf.keras.losses.Huber(), optimizer=opt, metrics=[tf.keras.metrics.MeanAbsolutePercentageError(name='mape'), tf.keras.metrics.RootMeanSquaredError(name='rmse'), "mae", 'accuracy']) history = model.fit(train_input, train_input, epochs=epochs, batch_size = num_batch, validation_data=(test_input, test_input), verbose=2, shuffle=False) ``` When I do a model evaluation, it comes up with 100% accuracy [](https://i.stack.imgur.com/IiGFB.png) Is it good to have 100% accuracy? Or is my model overfitting the data?
Is it good to have 100% accuracy on validation?
CC BY-SA 4.0
null
2022-05-23T03:56:45.223
2022-05-23T06:09:31.590
2022-05-23T04:00:48.693
136069
136069
[ "keras", "lstm", "anomaly-detection" ]
It usually indicates that something is wrong. In your case, things which do not seem right: - One can easily get ~100% accuracy in anomaly detection - just keep predicting the majority class. - Is this model really for anomaly detection? Anomaly detection is a classification problem, but your metrics (MAPE, RootMeanSquaredError, etc.) are regression metrics.
Accuracy over 100%
I solved it in this way: ``` #original #pred = output.argmax(dim=1,keepdim=True) #my solution _, pred = torch.max(output, dim=1) ``` I do not know why, but my solution works. If someone has an intuition, can they explain why this works? Thanks
111239
1
111243
null
0
28
I am writing a function to standardize the data and I found out that we can choose either ddof = 0 or ddof = 1, so I got confused about which one to choose and why. Does this make any difference?
What degrees of freedom should one use when calculating the standard deviation for standardizing data?
CC BY-SA 4.0
null
2022-05-23T16:11:12.843
2022-05-24T10:11:11.637
2022-05-24T03:41:13.870
29169
132421
[ "statistics", "data-cleaning" ]
ddof is the "delta degrees of freedom", i.e. the adjustment subtracted from N in the standard deviation formula below. So I suggest that if you are working with a sample from a population and you want an unbiased estimate, then you use ddof=1. If you want to consider your data as the whole population, you can use ddof=0. $$ S= \sqrt{ \dfrac{1}{N-1}\sum_{i=1}^N \bigg( X_i-\bar X \bigg)^2 } $$
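A quick way to see the difference (note that NumPy's `np.std` defaults to `ddof=0`, while pandas' `Series.std` defaults to `ddof=1`):

```python
# ddof=0 divides by N (population formula), ddof=1 divides by N-1 (sample formula).
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(np.std(x, ddof=0))   # 2.0
print(np.std(x, ddof=1))   # ~2.138
```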
Standardization and Normalization
Whenever you have features that they have different scale and it is significant for some features, you should standardize your feature. Take a look at [here](https://stats.stackexchange.com/q/207108/179078).
111248
1
111290
null
0
40
I'm a psychology student and trying to come up with a research plan involving GLM. I'm thinking about adding an interaction term in the analysis but I'm unsure about the interpretation of it. To make things simple, I'm going to use linear regression as an example. I'm expecting a (simplified) model like this: $$y = ax_{1} + bx_{2} + c(x_{1}*x_{2})+e$$ In my hypothesis, $x_{1}$ and $y$ are negatively correlated, and $x_{2}$ and $y$ are positively correlated. As for the correlation between $x_{1}$ and $x_{2}$, it is unknown. Now the question is, if we make a model and get a coefficient $c$, how can we interpret it, whether it's positive or negative? The reason I'm confused is that $x_{1}$ and $x_{2}$ have different effects in terms of direction (positive or negative) towards $y$. Do I have to make $x_{1}$ or $x_{2}$ into a reciprocal so that both variables have the same directional effects towards $y$? Another possibility that I can think of is that $c$ itself does not explain the whole of the interaction effect and another test needs to be run to specify that. Thank you in advance.
Interpreting interaction term coefficient in GLM/regression
CC BY-SA 4.0
null
2022-05-24T06:17:58.163
2022-05-25T13:58:37.597
null
null
134712
[ "regression", "statistics", "glm" ]
if we make a model and get a coefficient c, how can we interpret it, whether it's positive or negative? One key issue with interaction variables is interpretation. Let's remember that we're usually looking for marginal effects (such as $dy/dx_1$ or $dy/dx_2$). Therefore the (estimated) derivative of each is $a + cx_2$ and $b + cx_1$ respectively, which means that the change is not constant, but dependent on the values of $x_2$ and $x_1$. We can rewrite the derivative conditions as $dy/dx_1=a + cx_2<0$ and $dy/dx_2 = b + cx_1 >0$. There are many ways to interpret this. For example, let's suppose $x_1$ and $x_2$ are strictly increasing and positive and $c$ turns out to be positive. In that case, $a$ has to be really negative for the inequality to hold for every value of $x_2$ (i.e. $a < -cx_2$). So, in this type of model, coefficient interpretation is not as straightforward as in linear (in variables) models. So, $c$ could be either positive or negative. That's why you need to verify whether the combination of ($a,c$) or ($b,c$) gives positive slopes (derivatives) or not. Geometric intuition comes in very handy in this case. Do I have to make x1 or x2 into a reciprocal so that both variables have the same directional effects towards y? No, you don't need to. Though, it could help the interpretation a little bit. Another possibility that I can think of is that c itself does not explain the whole of the interaction effect and another test needs to be run to specify that. In your example, $c$ does capture the interaction effect. If you want to test whether $c=0$ or not, that is a different test (rather than the "sign test" done for the previous question). If $c$ is statistically insignificant ($c=0$) then the interaction effect is null and you could interpret this model as a simple linear one, requiring that $a<0$ and $b>0$.
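One practical way to check the signs is simply to evaluate the estimated marginal effects over the observed range of the other variable; the coefficients below are placeholders, not estimates from any real data:

```python
# Sketch: the marginal effect of x1 is a + c*x2, so its sign can change with the value of x2.
import numpy as np

a, b, c = -2.0, 1.0, 0.5              # placeholder coefficient estimates
x1_grid = np.linspace(0, 5, 6)
x2_grid = np.linspace(0, 5, 6)

dy_dx1 = a + c * x2_grid              # marginal effect of x1, evaluated at each x2
dy_dx2 = b + c * x1_grid              # marginal effect of x2, evaluated at each x1

print(dy_dx1)   # negative only while x2 < -a/c = 4 with these placeholder values
print(dy_dx2)   # positive everywhere on this grid
```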
How to interpret coefficients from logistic regression?
They are all significant, but each for a certain thing. What do I mean? You are predicting the evidence, i.e. the first column of the following picture: [](https://i.stack.imgur.com/ZTi2S.png) In other words you have the "linear regression part", but instead of y you have the evidence. So changing the values of the independent variable X (positive or negative) will influence a different binary class (0 or 1), hence different values are significant for different things (they add some information).
111267
1
111275
null
0
50
# Intro I need an input file of 5-letter English words to train my Bayesian model to infer the stochastic dependency between each position. For instance, is the probability of a letter at position 5 dependent on the probability of a letter at position 1, etc. At the end of the day, I want to train this Bayesian network in order to be able to solve the Wordle game. ## What is Wordle? It’s a game where you guess 5-letter words, and it tells you how many letters you got correct and if they are in the right positions or not. You only have six attempts. In short, Wordle is about narrowing down the distribution of what the true word could be. # Problem What requirements should such a word list meet? - Should I mix US and British English? - Should I include all possible words? Even very exotic ones that nobody knows/uses? - Should these words be processed/normalized in some way? - Does it make sense to use multiple sources? Is there any way to ensure the completeness and correctness? # What I have done so far - I modeled the Bayesian network consisting of 5 random variables, one for each letter position: $L1$, $L2$, $L3$, $L4$, $L5$ - I came to the conclusion that the marginal probability of the searched word is $P(L1, L2, L3, L4, L5).$ - In order to calculate the joint probability distribution I need a word list, so I asked myself the above questions - I've found many sources for word lists, but I'm not sure if I should use one or all - I have verified that both US and British English spellings occur in Wordle. PS: I know that the list of all possible solution words has been leaked. But I don't want to use such a list, because what if the makers of Wordle change the list again?
What are the requirements for a word list to be used for Bayesian inference?
CC BY-SA 4.0
null
2022-05-24T18:55:51.103
2022-05-26T06:53:03.493
2022-05-26T06:53:03.493
136141
136141
[ "machine-learning", "statistics", "bayesian", "bayesian-networks", "inference" ]
Well, it totally depends on what you want to do with the resulting probability model. If you're planning to use the model for spelling correction for example, you should probably use a vocabulary that covers the kind of text you're expecting to process. In general this is actually done not from a list of words but from a large corpus of text, taking all the n-grams up to 5 in the text into account. It's possible that restricting to 5-letter words would not give the same probabilities as in the full language. But again, this choice depends on the target task for the model. --- Answering the updated questions: Ideally, you would use the same database as Wordle itself. I'm not sure, but as you said this could backfire if the list is changed later. As far as I know (I happen to play it too!), the game seems to work with fairly standard English vocabulary, so I'm guessing that any standard vocabulary would fit. > Should I mix US and British English? I don't know. > Should I include all possible words? Even very exotic ones that nobody knows/uses? For the sake of completeness I think you can, but your model could include the global probability of each word in order to make standard words more likely than rare words. An option that comes to mind is to use the [Google NGrams data](https://storage.googleapis.com/books/ngrams/books/datasetsv3.html) (here unigrams) and extract only five-letter words. > Should these words be processed/normalized in some way? Only for capitalization, I think. > Does it make sense to use multiple sources? Is there any way to ensure the completeness and correctness? This could be tricky, because mixing different sources can cause some bias in the n-gram probabilities.
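A minimal sketch of building such a word list (the file path is a hypothetical placeholder; any plain list with one word per line, e.g. a system dictionary, would do):

```python
# Keep only 5-letter, purely alphabetic words, lower-cased and de-duplicated.
def load_five_letter_words(path):
    words = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            w = line.strip().lower()
            if len(w) == 5 and w.isalpha():
                words.add(w)
    return sorted(words)

# Hypothetical usage: on many Unix systems /usr/share/dict/words is one possible source.
# words = load_five_letter_words("/usr/share/dict/words")
# print(len(words), words[:10])
```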
Selecting most relevant word from lists of candidate words
There are many ways you could approach this problem - Word embeddings If you have word embeddings at hand, you can look at the distance between the tags and the bucket and pick the one with the smallest distance. - Frequentist approach You could simply look at the frequency of a bucket/tag pair and choose this. Likely not the best model, but might already go a long way. - Recommender system Given a bucket, your goal is to recommend the best tag. You can use collaborative filtering or neural approaches to train a recommender. I feel this could work well especially if the data is sparse (i.e. lots of different tags, lots of buckets). The caveat I would see with this approach is that you would technically always compare all tags, which only works if tag A is always better than tag B regardless of which tags are proposed to the user. - Ranking problem You could look at it as a ranking problem, I recommend reading [this blog](https://medium.com/@nikhilbd/intuitive-explanation-of-learning-to-rank-and-ranknet-lambdarank-and-lambdamart-fe1e17fac418) to have a better idea of how you can train such model. - Classification problem This becomes a classification problem if you turn your problem into the following: given a bucket, and two tags (A & B), return 0 if tag A is preferred, 1 if tag B is preferred. You can create your training data as every combination of two tags from your data, times 2 (swap A and B). The caveat is that given N tags, you might need to do a round-robin or tournament approach to know which tag is the winner, due to the pairwise nature. - Recurrent/Convolutional network If you want to implicitly deal with the variable-length nature of the problem, you could pass your tags as a sequence. Since your tags have no particular order, this creates a different input for each permutation of the tags. During training, this provides more data points, and during inference, this could be used to create an ensemble (i.e. predict a tag for each permutation and do majority voting). If you believe that it matters in which order the tags are presented to the user, then deal with the sequence in the order it is in your data. Your LSTM/CNN would essentially learn to output a single score for each item, such that the item with the highest score is the desired one.
111282
1
111444
null
3
182
I am trying to assign costs to the confusion matrix. That is, in my problem, a FP does not have the same cost as a FN, so I want to assign to these cases a cost "x" so that the algorithm learns based on those costs. I will explain my case a little more with an example: - When we want to detect credit card fraud, it does not have the same cost to predict that it is not fraud when in fact it is than the other way around. In the first case, the cost would be much higher. What I wanted to know is if there is a library in R in which I can assign costs to these wrong decisions (i.e. give a cost to each possible value of the confusion matrix) or if there is an algorithm that learns based on a cost/benefit matrix. I could also use some way to implement this without the use of a library. Thank you very much.
How to assign costs to the confusion matrix
CC BY-SA 4.0
null
2022-05-25T08:48:42.313
2022-06-02T07:45:09.830
2022-05-25T10:46:41.983
119136
119136
[ "machine-learning", "r", "model-evaluations", "cost-function" ]
In this case, you can consider a payoff matrix and use average profit as your evaluation metric. It is important to mind the difference between the cost function (used in the learning process of the algorithm during training, which must also be differentiable for gradient-based optimization) and the performance metric (which is what I think you should consider in this case, your average profit for instance). In case you have the following confusion matrix: [](https://i.stack.imgur.com/uIux3.png) you can assign a benefit/loss per quadrant of your matrix: [](https://i.stack.imgur.com/10TzL.png) and in this case, your evaluation metric (to select your best model) is: $ \text{Average Profit} = \frac{\text{TN count} \times \text{TN benefit} + \text{TP count} \times \text{TP benefit} - \text{FN count} \times \text{FN loss} - \text{FP count} \times \text{FP loss}}{\text{Number of Predictions}}$ This metric can also be used to select the optimal threshold of your model: [](https://i.stack.imgur.com/1WL5H.png) In case you want to feed your training data to an XGBoost model in R for instance, have a look at the documentation, where you have an argument called feval, to include your custom evaluation metric function ([documentation link](https://cran.r-project.org/web/packages/xgboost/xgboost.pdf)): > feval --> customized evaluation function. Returns list(metric='metric-name',value='metric-value') with given prediction and dtrain
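For illustration, here is the average-profit idea written out in Python (the question is tagged R, so this is only to make the formula concrete; the payoff values and labels are placeholders):

```python
# Average profit from a confusion matrix and a payoff matrix (placeholder benefits/losses).
from sklearn.metrics import confusion_matrix

def average_profit(y_true, y_pred, tn_benefit, tp_benefit, fn_loss, fp_loss):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    total = tn * tn_benefit + tp * tp_benefit - fn * fn_loss - fp * fp_loss
    return total / (tn + fp + fn + tp)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0, 1, 0]
print(average_profit(y_true, y_pred, tn_benefit=1, tp_benefit=10, fn_loss=20, fp_loss=2))
```

The same function can be evaluated at different probability thresholds to pick the threshold that maximises average profit, as described above.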
Info obtained from a confusion matrix
First, let's be clear about the fact that all these measures are only for evaluating binary classification tasks. The way to understand the differences is to look at examples where the number of instances is (very) different in the two classes, either the true classes (gold) or predicted classes. For instance imagine a task to detect city names among the words in a text. It's not very common, so in your test set you may have 1000 words, only 5 of which are city names (positive). Now imagine two systems: - Dummy system A which always says "negative" for any word - Real system B (e.g. which works with a dictionary of city names). Let's say that B misses 2 real cities and mistakenly identifies 8 other words as cities. System A gets an accuracy of 995/1000 = 99.5%, even though it does nothing. System B has 990/1000=99.0%. It looks like A is better, that's why accuracy rarely gives the full picture. Precision represents how correct a system is in its positive predictions: system A always says negative so it has 0% precision. System B has 3/11 = 27%. Recall represents the proportion of true positive instances which are retrieved by a system: system A doesn't retrieve anything so it has 0% recall. System B has 3/5 = 60%. F1-score is a way to have a single value which represents the harmonic mean of the precision and recall. It's used as a "summary" of these two values, which is convenient when one needs to order different systems by their performance. The choice of an evaluation measure depends on the task: for instance, if predicting a FN has life-threatening consequences (e.g. cancer detection), then recall is crucial. If on the contrary it's very important to avoid FP cases, then precision makes more sense (say for instance if an automatic missile system would mistakenly identify a commercial flight as a threat). The most common case though is certainly F1-score (or more generally F$\alpha$-score), which is suited to most binary classification tasks.
111291
1
111293
null
1
5415
I have two dataframes, df1 and df2, each with a different number of rows. df1 has a column 'NAME', a short string; and df2 has a column 'LOCAL_NAME', a much longer string that may contain the exact contents of df1.NAME. I want to compare every entry of df1.NAME with every entry in df2.LOCAL_NAME, and if df1.NAME appears in a particular entry of df2.LOCAL_NAME, I want to add an entry in a new column df2.NAME_MAP = df1.NAME. If it doesn't appear in the long string df2.LOCAL_NAME, the corresponding entry in df2.NAME_MAP will be df2.LOCAL_NAME. For now, efficiency is not an issue. Here are sample datasets. ``` df1 = pd.DataFrame({ "NAME" : ['222', '111', '444', '333'], "OTHER_COLUMNS": [3, 6, 7, 34] }) df2 = pd.DataFrame({ "LOCAL_NAME": ['aac111asd', 'dfse222vdsf', 'adasd689as', 'asdv444grew', 'adsg243df', 'dsfh948dfd'] }) ``` df1: |NAME |OTHER_COLUMNS | |----|-------------| |'222' |3 | |'111' |6 | |'444' |7 | |'333' |34 | df2: |LOCAL_NAME | |----------| |'aac111asd' | |'dfse222vdsf' | |'adasd689as' | |'asdv444grew' | |'adsg243df' | |'dsfh948dfd' | The goal is to create another column in df2 called NAME_MAP which has the value of df1.NAME if that string is contained exactly in the larger df2.LOCAL_NAME string. df2 would now look like this: |LOCAL_NAME |NAME_MAP | |----------|--------| |'aac111asd' |'111' | |'dfse222vdsf' |'222' | |'adasd689as' |'adasd689as' | |'asdv444grew' |'444' | |'adsg243df' |'adsg243df' | |'dsfh948dfd' |'dsfh948dfd' | Then I can join the two dataframes on NAME_MAP: |LOCAL_NAME |NAME_MAP |NAME (from df1) |OTHER_COLUMNS (from df1) | |----------|--------|---------------|------------------------| |'aac111asd' |'111' |'111' |6 | |'dfse222vdsf' |'222' |'222' |3 | |'adasd689as' |'adasd689as' |NaN |NaN | |'asdv444grew' |'444' |'444' |7 | |'adsg243df' |'adsg243df' |NaN |NaN | |'dsfh948dfd' |'dsfh948dfd' |NaN |NaN | How do I go about trying to do this string comparison in two datasets of different sizes?
Compare string entries of columns in different pandas dataframes
CC BY-SA 4.0
null
2022-05-25T14:03:45.347
2022-05-25T15:02:26.623
2022-05-25T14:39:57.140
136178
136178
[ "python", "pandas" ]
Here's a way to solve it Create a df with cartesian product of both dataframes such as here : [https://stackoverflow.com/questions/53907526/merge-dataframes-with-the-all-combinations-of-pks](https://stackoverflow.com/questions/53907526/merge-dataframes-with-the-all-combinations-of-pks) ``` cp = df2.assign(key=0).merge(df1.assign(key=0), how='left') ``` Keep only the lines where NAME is in LOCAL NAME (just print cp after that so you understand what's done) ``` cp['key'] = [1 if x in y else 0 for x,y in zip(cp['NAME'],cp['LOCAL_NAME'])] cp = cp[cp['key'] == 1].drop(['key'], axis=1) ``` Merge, and fill the ones without combination by the local name ``` df2 = df2.merge(cp, how='left', on='LOCAL_NAME') df2['NAME'] = df2['NAME'].fillna('') df2['NAME'] = [y if x == '' else x for x,y in zip(df2['NAME'],df2['LOCAL_NAME'])] ``` Result : ``` LOCAL_NAME NAME OTHER_COLUMNS 0 aac111asd 111 6.0 1 dfse222vdsf 222 3.0 2 adasd689as adasd689as NaN 3 asdv444grew 444 7.0 4 adsg243df adsg243df NaN 5 dsfh948dfd dsfh948dfd NaN ```
Pandas/Python - comparing two columns for matches not in the same row
To clarify, this question is about comparing two columns to check if the 3-letter combinations match. So, I would approach this in the following manner: ``` # Extract the 3-letter combinations from column a df3["a normalised"] = df3["a"].str[:3] # Then check if what is in `a normalised` is in column b b_matches = list(df3[df3[“b”].isin(list(df3[“a normalised”]))][“b”].unique()) df3.loc[:, "match"] = False b_match_idx = df3[df3["a normalised"].isin(b_matches)].index df3.at[np.array(b_match_idx),"match"] = True ``` EDIT: The parentheses have now been resolved. Also the .loc warning can now be mitigated.
111292
1
111361
null
1
69
I have a small medical dataset (200 samples) that contains only 6 cases of the condition I am trying to predict using machine learning. So far, the dataset is not proving useful for predicting the target variable and is resulting in models with 0% recall and precision, probably due to how small the dataset is. However, in order to learn from the dataset, I applied Feature Selection techniques to deduce which features are useful in predicting the target variable and see if this supports or contradicts previous literature on the matter. However, when I reran my models using the reduced dataset, this still resulted in 0% recall and precision. So the prediction performance has not improved. But the features returned by applying Feature Selection have given me more insight into the data. So my question is, is the purpose of Feature Selection: - to improve prediction performance - or can the purpose be identifying relevant features in the prediction and learning more about the dataset So in other words, is Feature Selection just a tool for improved performance, or can it be an end in itself? Also, if using the subset of features returned by Feature Selection methods does not improve the accuracy or recall of the model, how can I demonstrate that these features are indeed relevant in my prediction? If you can link some resources about this issue that would be very useful. Thank you.
What is the Purpose of Feature Selection
CC BY-SA 4.0
null
2022-05-25T14:25:31.650
2022-05-27T15:00:11.403
2022-05-27T13:21:03.867
99648
99648
[ "machine-learning", "python", "feature-selection", "dimensionality-reduction" ]
You partially answered your own question. Feature selection is for gaining insight into your problem, regardless of whether or not it is actually used in a model. This is particularly important when using a small number of features, as you have stated, since you might expect importance to surface when doing modeling. However, if it is contrary to what you expect, that is important as well, since it might indicate problems with sample size, measurement, etc. Feature selection can also be used to improve performance, if you downplay interpretability, if you are willing to monitor the model, and optimize it when it degrades. The difference between the two is that if you choose the 2nd method, and your model degrades, I think you will need to explain what is happening in terms of interpretability, or just reoptimize it and 'hope for the best' (not recommended). Many times companies don't care if your model is performing well, but will begin to question it if it is not. In the first case, you will always have an interpretable model, with (hopefully) acceptable performance. There are also techniques such as Lasso regression which enable you to perform some optimization, by shrinking the coefficients to an 'interpretation level' that is acceptable. So both explainability AND performance are used nowadays for feature selection. The choice often depends upon the specific type of problem. Modeling for social and health issues requires interpretation, while 'big data' types of problems often call for performance-enhancing feature selection
Is feature selection necessary?
[Feature selection](https://en.wikipedia.org/wiki/Feature_selection) might be considered a stage to avoid. You have to spend computation time in order to remove features and you actually lose data, and the methods available for feature selection are not optimal since the problem is [NP-complete](https://en.wikipedia.org/wiki/NP-completeness). Using it doesn't sound like an offer that you cannot refuse. So, what are the benefits of using it? - Many features and a low samples/features ratio will introduce noise into your dataset. In such a case your classification algorithm is likely to overfit, and give you a false feeling of good performance. - Reducing the number of features will reduce the running time in the later stages. That in turn will enable you to use algorithms of higher complexity, search for more hyperparameters or do more evaluations. - A smaller set of features is more comprehensible to humans. That will enable you to focus on the main sources of predictability and do more exact feature engineering. If you have to explain your model to a client, you are better off presenting a model with 5 features than a model with 200 features. Now for your specific case: I recommend that you begin by computing the correlations between the features and the concept. Computing correlations among all features is also informative. Note that there are many types of useful correlations (e.g., [Pearson](https://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient), [Mutual information](https://en.wikipedia.org/wiki/Mutual_information)) and many attributes that might affect them (e.g., sparseness, concept imbalance). Examining them instead of blindly going with a feature selection algorithm might save you plenty of time in the future. I don't think that you will have a lot of running time problems with your dataset. However, your samples/features ratio isn't too high so you might benefit from feature selection. Choose a classifier of low complexity (e.g., linear regression, a small decision tree) and use it as a benchmark. Try it on the full data set and on some datasets with subsets of the features. Such a benchmark will guide you in the use of feature selection. You will need such guidance since there are many options (e.g., the number of features to select, the feature selection algorithm), and since the goal is usually the prediction and not the feature selection itself, feedback is at least one step away.
111296
1
111297
null
1
798
I hope you are doing well. I want to ask a question regarding the loss function in a neural network. I know that the loss function is calculated for each data point in the training set, and then backpropagation is done depending on whether we are using batch gradient descent (backpropagation is done after all the data points are passed), mini-batch gradient descent (backpropagation is done after each batch) or stochastic gradient descent (backpropagation is done after each data point). Now let's take the MSE loss function: [](https://i.stack.imgur.com/Xxc9W.png) How can n be the number of data points? Because if we calculate the loss after each data point then n would only be 1 every time. Also, I saw a video where they put n as the number of nodes in the output layer. Link to video (you can find what I'm talking about at 5:45): [https://www.youtube.com/watch?v=Zr5viAZGndE&t=5s](https://www.youtube.com/watch?v=Zr5viAZGndE&t=5s) Therefore I am pretty confused about how we calculate the loss function, and what does n represent? Also, when we have multiple inputs, will we only be concerned with the output that the weight we are trying to update influences? Thanks in advance
how to calculate loss function?
CC BY-SA 4.0
null
2022-05-25T15:10:17.127
2022-05-25T15:26:47.470
null
null
136180
[ "deep-learning", "neural-network", "loss-function", "gradient-descent", "mse" ]
As the image says, n represents the number of data points in the batch for which you are currently calculating the loss/performing backpropagation. In the case of batch gradient descent this would be the number of observations in the complete dataset, in the case of mini-batch gradient descent this would be equal to the batch size (or lower if you are using an incomplete batch of data), or 1 in the case of stochastic gradient descent. The reason that the video talks about summing the error over the number of nodes in the output layer is because in their example they are using a network with multiple output nodes, whereas MSE is generally used for regression problems where you are only using a single output node (see for example also [this question](https://datascience.stackexchange.com/questions/84293/how-is-calculated-the-error-with-multiple-output-neurons-in-neural-network)). A network that uses multiple inputs does not have an impact on how the loss is calculated, in addition because of the chain rule used in backpropagation the algorithm only looks at the partial derivative of the loss with respect to a single weight/bias.
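As a small illustration of the point above (hypothetical numbers, single output node), the formula simply averages the squared errors over however many points are in the current batch:
```
import numpy as np

# one mini-batch of n = 4 targets and predictions from a single output node
y_true = np.array([3.0, -0.5, 2.0, 7.0])
y_pred = np.array([2.5,  0.0, 2.0, 8.0])

n = len(y_true)                              # n = batch size (1 for pure SGD)
mse = np.sum((y_true - y_pred) ** 2) / n
print(mse)                                   # 0.375
```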
Is it a acceptable way to write a loss function in this form?
It just means to sum over all $x_i$ in $M$. That is completely acceptable notation.
111299
1
111418
null
0
63
I have a problem. I want to predict when the customer will place another order in how many days if an order comes in. I have already created my target variable `next_day_in_days`. This specifies in how many days the customer will place an order again. And I would like to predict this. Since I have too few features, I want to do feature engineering. I would like to specify how many orders the customer has placed in the last 90 days. For example, I have calculated back from today's date how many orders the customer has placed in the last 90 days. Is it better to say per row how many orders the customer has placed? Please see below for the example. So does it make more sense to calculate this from today's date and include it as a feature or should it be recalculated for each row? ``` customerId fromDate next_day_in_days 0 1 2021-02-22 24 1 1 2021-03-18 4 2 1 2021-03-22 109 3 1 2021-02-10 12 4 1 2021-09-07 133 8 3 2022-05-17 61 10 3 2021-02-22 133 11 3 2021-02-22 133 ``` Example ``` # What I have customerId fromDate next_day_in_days purchase_in_last_90_days 0 1 2021-02-22 24 0 1 1 2021-03-18 4 0 2 1 2021-03-22 109 0 3 1 2021-02-10 12 0 4 1 2021-09-07 133 0 8 3 2022-05-17 61 1 10 3 2021-02-22 133 1 11 3 2021-02-22 133 1 # Or does this make more sense? customerId fromDate next_day_in_days purchase_in_last_90_days 0 1 2021-02-22 24 1 1 1 2021-03-18 4 2 2 1 2021-03-22 109 3 3 1 2021-02-10 12 0 4 1 2021-09-07 133 0 8 3 2022-05-17 61 1 10 3 2021-02-22 133 0 11 3 2021-02-22 133 0 ```
Create features for each row or only for a specific value
CC BY-SA 4.0
null
2022-05-25T16:32:55.220
2022-05-30T08:43:10.193
null
null
130860
[ "machine-learning", "regression", "feature-engineering", "features" ]
You should use option number 2, it is the only one that actually make sense statistically speaking. If you use option 1, you are making your model dependent on its training date, which makes no sense. Indeed, the `purchase_in_last_90_days` feature you build this way depends on the training day's date (which you refer to as today's date in your question). As an example, if customer 1 place an order tonight, your `purchase_in_last_90_days` will be different tomorrow for this customer (equal to 1). ``` # Training today customerId fromDate next_day_in_days purchase_in_last_90_days 0 1 2021-02-22 24 0 1 1 2021-03-18 4 0 2 1 2021-03-22 109 0 3 1 2021-02-10 12 0 4 1 2021-09-07 133 0 8 3 2022-05-17 61 1 10 3 2021-02-22 133 1 11 3 2021-02-22 133 1 # Training tomorrow if customer #1 places an order tonight customerId fromDate next_day_in_days purchase_in_last_90_days 0 1 2021-02-22 24 1 1 1 2021-03-18 4 1 2 1 2021-03-22 109 1 3 1 2021-02-10 12 1 4 1 2021-09-07 133 1 8 3 2022-05-17 61 1 10 3 2021-02-22 133 1 11 3 2021-02-22 133 1 ``` So your training data - and hence your model - implicitly becomes a function of the training date, because `purchase_in_the_last_90_days` will be dependent on the training date, all other things equals. And the training date is clearly not a feature that should be used to predict your target value! Furthermore, using option 1 you include information in the training data that is posterior to the prediction date. Keep in mind that your training set simulates what you will have at inference time, so you should consider that for each row, the `fromDate` from the training set represents the prediction date (i.e. "today's date"). And you should only use information available prior to this `fromDate`. The `purchase_in_last_90_days` calculated with option 2 (number of purchase in the 90 days before prediction date) is a feature that you can build at inference time with the data you have available and that we can reasonably assume to be relevant to the model. So use option 2!
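A minimal pandas sketch of how the option-2 feature could be computed per row, reusing the `customerId` and `fromDate` columns from the question's example (the quadratic row-wise loop is only for illustration, not for large datasets):
```
import pandas as pd

df = pd.DataFrame({
    "customerId": [1, 1, 1, 1, 1, 3, 3, 3],
    "fromDate": pd.to_datetime([
        "2021-02-22", "2021-03-18", "2021-03-22", "2021-02-10",
        "2021-09-07", "2022-05-17", "2021-02-22", "2021-02-22",
    ]),
})

def purchases_in_last_90_days(row, orders):
    # count the same customer's earlier orders in the 90 days before this order date
    same_customer = orders[orders["customerId"] == row["customerId"]]
    window_start = row["fromDate"] - pd.Timedelta(days=90)
    in_window = (same_customer["fromDate"] < row["fromDate"]) & (same_customer["fromDate"] >= window_start)
    return int(in_window.sum())

df["purchase_in_last_90_days"] = df.apply(purchases_in_last_90_days, axis=1, orders=df)
print(df)
```
This way the feature is a pure function of each row's own prediction date, so it stays valid regardless of when the model is trained.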
Create new rows based on a value in a column
The easiest way of doing this is probably to first convert the dataframe back to a list of rows, then use base python syntax to repeat each row n times, and then convert that back to a dataframe: ``` import pandas as pd df = pd.DataFrame({ "event": ["A","B","C","D"], "budget": [123, 433, 1000, 1299], "duration_days": [6, 3, 4, 2] }) pd.DataFrame([ row # select the full row for row in df.to_dict(orient="records") # for each row in the dataframe for _ in range(row["duration_days"]) # and repeat the row for row["duration"] times ]) ``` Which gives the following dataframe: |event |budget |duration_days | |-----|------|-------------| |A |123 |6 | |A |123 |6 | |A |123 |6 | |A |123 |6 | |A |123 |6 | |A |123 |6 | |B |433 |3 | |B |433 |3 | |B |433 |3 | |C |1000 |4 | |C |1000 |4 | |C |1000 |4 | |C |1000 |4 | |D |1299 |2 | |D |1299 |2 |
111312
1
111406
null
1
31
I've been asked to review a paper in which the authors compare their new model (let's call it Model A) to other models (B, C, and D), and conclude theirs is superior on some metric (I know, big surprise!). Here's the problem: in my research, my supervisors always instructed me to code up the competing models and compare my model that way. The paper I'm reviewing, by contrast, just quotes results from previous literature. To clarify, here's what I would have had to do if I had been these authors: - Code up model A. - Code up models B, C, and D - Run all models on the data set, and obtain metrics to compare the models. Whereas this is what the authors did: - Code up model A. - Look up the results in published literature for models B, C, and D on the same data set to obtain metrics. - Run the data through model A, and obtain the metric to compare against models B, C, and D. Is their method incorrect, or somehow unethical? They make no claims regarding training time.
Reviewing a paper - common practice
CC BY-SA 4.0
null
2022-05-26T01:54:53.603
2022-05-29T16:53:19.230
null
null
31434
[ "data-science-model" ]
In theory, their method is correct as long as the experiment is exactly equivalent: - Exact same dataset, same proportion of training data and preferably even exact same training/test data (i.e. same split if there is a split). - Identical preprocessing, if there is any preprocessing. - Identical methodology with respect to: hyper-parameter tuning, any feature selection, etc. any experimental setup such as number of epochs/iterations for training, etc. - (anything else that I may have forgotten...) Since it's often difficult to make sure in practice that the experimental design is equivalent, you're right that redoing all the other experiments is the safest way to guarantee this equivalence. It also has the additional advantage to reproduce the original results (normally they should be confirmed, but it could happen that they don't). This is not unethical by itself (btw the ethics and review part of the question is more relevant on [AcademiaSE](https://academia.stackexchange.com/)), it's only a methodology which is not optimal. About your review: if the paper gives all the relevant details and does everything possible to make sure the experimental design is equivalent (showing that the authors understand the potential issue here), I would barely mention this point and not really hold it against them in my evaluation. On the contrary, if they happily compare results neglecting to check that the design is equivalent, I would mention it and count this as a significant limitation of the work (but not necessarily reject only for this reason, depends if the rest of the work is convincing).
Can I treat text review analysis as a regression problem?
Yes, regression makes sense. Many MLer working on similar tasks (e.g. the Yelp challenge) use classification instead of regression because they collapsed the label space to 2 (positive or negative) or 3 (adding neutral). For predicting numerical scores 1~5 regression makes more sense.
111342
1
111357
null
1
32
I already referred to these posts [here](https://stats.stackexchange.com/questions/19216/variables-are-often-adjusted-e-g-standardised-before-making-a-model-when-is) and [here](https://stats.stackexchange.com/questions/21222/are-mean-normalization-and-feature-scaling-needed-for-k-means-clustering). I also posted elsewhere, but since there was no response, I am posting here. Currently, I am working on customer segmentation using purchase data. My data has the below info for each customer [](https://i.stack.imgur.com/3btEe.png) Based on the above linked posts, I see that for clustering we have to scale the variables if they are in different units etc. But if I scale/normalize all of them to a uniform scale, wouldn't I lose the information that actually differentiates the customers from one another? I also understand that the monetary value could be construed as carrying a high weight in the model, because it might go up to the range of 100K or even millions. Let's assume that I normalized and my clustering returned 3 clusters. How do I answer the below questions meaningfully? q1) what is the average revenue from customers who are under `cluster 1`? q2) what is the average recency (in days) for a customer from cluster 2? q3) what is the average age of customer with us (tenure) under cluster 3? Answering all the above questions using normalized data wouldn't make sense because the values would all be on a uniform scale (mean 0, sd 1, etc.). So, I was wondering whether it is meaningful to do the below: a) cluster using normalized/scaled variables b) once clusters are identified, use `customer_id` under each cluster to get the original variable values (from the input dataframe before normalization) and make inferences or interpret the clusters? Do you think this would allow me to answer my questions in a meaningful way? Is this how data scientists interpret clusters? Do they always have to link back to the input dataframe?
Interpreting cluster variables - raw vs scaled
CC BY-SA 4.0
null
2022-05-26T20:56:04.190
2022-05-27T12:35:41.557
null
null
64876
[ "machine-learning", "clustering", "data-mining", "predictive-modeling", "k-means" ]
A simple way to estimate the loss of data due to normalization/scaling is to apply the inverse transformation and see how different the result is from the raw data. If the data loss is very low (e.g. 0.1%), scaling is not an issue. On the other hand, if your clustering works very well for 10k customers, it should work well for 1 million. Generally speaking, it is better to have a very good model on a small random dataset and then increase it progressively until you reach the production scale. You can make clusters from either one feature or several features. Due to the problem complexity, it is generally better to start with one feature and then extend to several features. Making clusters from several features works better with dimensionality reduction algorithms (e.g. UMAP), because you project all your dimensions onto a 2D plane automatically and can make interesting correlation studies for all customers. If you apply a good multi-dimensional clustering, all the features are taken into account and every point is represented by a customer id. If you select a cluster through a clustering technique (e.g. DBSCAN), you just have to extract the list of customers from this cluster, filter the raw data with this list, and start your data analysis to answer q1, q2 or q3. Note that normalization depends on the dimensionality reduction algorithm you are using: UMAP doesn't require data normalisation, whereas t-SNE or PCA do. [https://towardsdatascience.com/tsne-vs-umap-global-structure-4d8045acba17](https://towardsdatascience.com/tsne-vs-umap-global-structure-4d8045acba17) Finally, cluster interpretation should be backed by actual evidence: even if algorithms are often very efficient at clustering data, it is crucial to add indicators to check that the data has been well distributed (for instance by comparing mean or standard deviation values between clusters). In some cases, if the raw data have too wide a distribution, it could be interesting to apply a log transform, but you might lose information.
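A minimal sketch of the "cluster on scaled data, interpret on raw data" workflow with scikit-learn; the synthetic customer table, the choice of K-Means with 3 clusters and the column names are all assumptions for illustration:
```
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
raw = pd.DataFrame({
    "revenue": rng.lognormal(8, 1, 500),
    "recency_days": rng.integers(1, 365, 500),
    "tenure_days": rng.integers(30, 2000, 500),
})

scaler = StandardScaler()
scaled = scaler.fit_transform(raw)

# sanity check: inverting the scaling recovers the raw data, so no information is lost
print("max reconstruction error:", np.abs(scaler.inverse_transform(scaled) - raw.values).max())

# fit the clusters on the scaled features, then interpret them on the raw values
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scaled)
print(raw.assign(cluster=labels).groupby("cluster").mean())
```
The group-by at the end gives cluster-level averages of revenue, recency and tenure on the original scale, which is exactly what q1–q3 ask for.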
Interpret clustering results after variable transformation
Very interesting! What you did to your data is simply a feature mapping/transformation. So how this affects the clustering results? Clustering is not a clearly defined problem but at least we know something about it: It's about internal similarities (patterns) so these similarities should be maintained through the feature transformation. In your example if you found clusters in transformed space, it shows that you have had clusters in original space as well. You just couldn't see them according to the algorithm you used in that space! For instance if you use Kernelized versions of algorithms you easily find that what they do is nothing but what you did as transformation. They first use a kernel to map the data into new space and then use the algorithm in that space (of course with a bit of theoretical differences/constraints). To summarize, no transformation produces a fake pattern in the data. In worst case it vanishes the original pattern and in the best case it reveals the pattern which was not visible originally (which is your case). --- I mentioned Fake Pattern above so let me say a bit more on it. I think there is a fundamental issue concerning your question: You assume that there is a Right clustering that you got after transformation. Actually there is no right clustering! We do not have fake pattern! If there is a pattern in a feature space, then that is true! i.e. you found an interesting representation of your data. If it does not match with the labels then either the data is very noisy or wrong features have been chosen to represent classes (maybe more reasons. just these two came to my mind now). If there is no label (your case) be sure there is a correlation between features of those cluster members.
111363
1
111394
null
0
21
I'm supposed to find an algorithm that, given a bunch of points on the Euclidean plane, I have to return the tightest (smallest) origin centered upright equilateral triangle that fits all the given points inside of it, in a way that if I input some random new point, the algorithm will return $+$ if the point is inside the triangle and $-$ if not. [](https://i.stack.imgur.com/67GQ9.png) Someone has suggested me to go over all the possible points and find the point with the largest Euclidean distance from the origin, then, say the point is $(x_1,x_2)$, I should calculate the following: $(x_1,x_2)⋅(\frac{\sqrt{3}}{2},-\frac{1}{2})=x_{1}\cdot\frac{\sqrt{3}}{2}+x_{2}\cdot-\frac{1}{2}=r_{1}$ $(x_1,x_2)⋅(-\frac{\sqrt{3}}{2},-\frac{1}{2})=x_{1}\cdot-\frac{\sqrt{3}}{2}+x_{2}\cdot-\frac{1}{2}=r_{2}$ $(x_1,x_2)⋅(0,1)=x_{1}\cdot0+x_{2}\cdot1=r_{3}$ Then take the maximum of $r_1,r_2,r_3$, denote it $r_{max}$ and given a new random point $(y_1,y_2)$ output $+$ if $(y_1,y_2)⋅(\frac{\sqrt{3}}{2},-\frac{1}{2})\le r_{max}$ $(y_1,y_2)⋅(\frac{\sqrt{3}}{2},-\frac{1}{2})\le r_{max}$ $(y_1,y_2)⋅(0,1)\le r_{max}$ It should look something like this: ![](https://i.stack.imgur.com/IA5Gx.png) and output this triangle: [](https://i.stack.imgur.com/sTZvu.png) Now when I try to graph points with the same Euclidean distance on a graph, they do indeed seem to be on the sides of the same origin centered upright equilateral triangle, but I can get different $r$ values for different points which have the same Euclidean distance, so I'm quiet baffled as to how it is supposed to work, if this method even works..
Finding the tightest (smallest) triangle that fits all points
CC BY-SA 4.0
null
2022-05-27T17:06:46.633
2022-05-31T04:59:33.920
2022-05-31T04:59:33.920
135707
134626
[ "machine-learning", "algorithms", "pac-learning" ]
There appear to be a few issues with this approach. - In the second step you have $(y_1,y_2)⋅(\frac{\sqrt{3}}{2},-\frac{1}{2})≤r_{max}$ twice. Presumably one of these should be $(y_1,y_2)⋅(-\frac{\sqrt{3}}{2},-\frac{1}{2})≤r_{max}$. - The vectors you have used give you an upside-down triangle, not an upright one. - Points with the same Euclidean distance from the origin form a circle. So you get a different enclosing triangle (so hence a different value for $r$) depending on whereabouts on the circle the point is. There is no need to calculate the Euclidean distance for each point. Try finding $r_{max}$ for each point, then select the most appropriate $r_{max}$ instead.
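A small numerical sketch of that suggestion (the edge normals below are my own choice for an upright, origin-centred triangle, and the sample points are made up):
```
import numpy as np

# outward edge normals of an origin-centred, upright equilateral triangle (apex up):
# the bottom edge plus the two slanted edges
normals = np.array([
    [0.0, -1.0],
    [np.sqrt(3) / 2, 0.5],
    [-np.sqrt(3) / 2, 0.5],
])

def fit_r_max(points):
    # smallest r such that every training point satisfies p . n <= r for all three normals
    return np.max(points @ normals.T)

def predict(point, r_max):
    # "+" if the point lies inside (or on) the fitted triangle, "-" otherwise
    return "+" if np.all(np.asarray(point) @ normals.T <= r_max) else "-"

train_points = np.array([[0.2, 0.3], [-0.5, 0.1], [0.4, -0.2]])
r_max = fit_r_max(train_points)
print(r_max, predict([0.0, 0.0], r_max), predict([5.0, 5.0], r_max))
```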
How to merge with smallest Euclidean distance?
Let's simplify the requirements A wanted row from the right dataframe is a row that - has the least amount of unmatched keys - has the minimum euclidean distance ``` def merge_left_and_right(left, right): # save original input from modification left = left.copy() right = right.copy() # numerate rows in the right dataframe right['row_num'] = range(len(right)) # find the best matchings for left dataframe left['row_num'] = [ find_the_best_matching(right, key1=row['key1'], key2=row['key2'], aux1l=row['aux1l'], aux2l=row['aux2l']) for index, row in left.iterrows() ] # merge dataframe by row number right = right.drop(['key1', 'key2'], axis=1) merged = left.merge(right, on='row_num', how='left') merged = merged.drop('row_num', axis=1) return merged def find_the_best_matching(right, key1, key2, aux1l, aux2l): right = right.copy() # keys match only when they aren't NaN and equal ("np.nan != np.nan" is True) right['unmatched_key_count'] = 0 right['unmatched_key_count'] += (right['key1'] != key1).astype(int) right['unmatched_key_count'] += (right['key2'] != key2).astype(int) right['euclidean_distance'] = np.sqrt((right['aux1r'] - aux1l) ** 2 + (right['aux2r'] - aux2l) ** 2) # Sort by unmatched amount, then by distance. The first row will be best return right.sort_values(['unmatched_key_count', 'euclidean_distance']).iloc[0]['row_num'] ``` [](https://i.stack.imgur.com/6tdlw.png)
111366
1
111417
null
0
25
Good night, I am working on a paper comparing Python libraries for machine learning and deep learning. Trying to evaluate Keras and TensorFlow separately, I'm looking for information about TensorFlow methods or functions that can be used to preprocess datasets, such as those included in scikit-learn (sklearn.preprocessing) or the Keras preprocessing layers, but I can't find anything beyond a one-hot encoding for labels... Does anyone know if what I am looking for exists? Thank you very much!
Preprocessing in TensorFlow
CC BY-SA 4.0
null
2022-05-27T21:59:17.890
2022-05-30T08:36:27.520
2022-05-27T22:10:12.153
123482
123482
[ "python", "tensorflow", "preprocessing" ]
TensorFlow/Keras are not end-to-end libraries that cover every data science process: they are mainly focused on machine learning. The least they can do with input data is to convert it to tensors.
```
import tensorflow as tf
import pandas as pd

# load the raw data with pandas
mydata = pd.read_csv("/path/file.csv")

# ... preprocessing steps here (pandas / scikit-learn) ...

# hand the cleaned, numeric data over to TensorFlow as a tensor
tf_tensors = tf.convert_to_tensor(mydata)
print('tensors= ', tf_tensors)
```
I recommend using other libraries such as pandas, seaborn or scikit-learn to preprocess data. You will find plenty of sources on how to preprocess data efficiently with those libraries, for instance: [https://www.analyticsvidhya.com/blog/2020/09/pandas-speed-up-preprocessing/](https://www.analyticsvidhya.com/blog/2020/09/pandas-speed-up-preprocessing/)
tensorflow in production
I now have an answer to my question. I will briefly share the main steps / technologies I used to deploy the model in production. I am using the Python programming language. After training and generating valid models, I wrote a RESTful API using Python and Flask. Three important points: 1- Pay attention to where you define the model architecture, initialize the parameters and define the session. Avoid doing this each time you call the RESTful API; this will be very expensive. 2- Flask provides a very good mechanism to run servers in a production environment. Read about Flask + WSGI. Avoid running the server code (the RESTful API) directly, as in that case you will not have direct and full control. 3- Watch the memory and CPU usage, and make sure to limit the maximum number of instances that can run in parallel. Remember these models can take a lot of memory. Unfortunately, I cannot share the code publicly, but hopefully my answer gives an idea of how to do it in production.
111383
1
111384
null
1
159
I would like to find how many occurrences of a specific value count a column contains. For example, based on the data frame below, I want to find how many values in the ID column are repeated twice ``` | ID | | -------- | | 000001 | | 000001 | | 000002 | | 000002 | | 000002 | | 000003 | | 000003 | ``` The output should look something like this ``` Number of ID's repeated twice: 2 The ID's that are repeated twice are: | ID | | -------- | | 000001 | | 000003 | ``` Any help would be appreciated.
How to return the number of values that has a specific count
CC BY-SA 4.0
null
2022-05-28T18:38:45.583
2022-05-28T19:08:17.410
null
null
136294
[ "python", "pandas", "data-analysis" ]
You can use `df['var'].value_counts()` to get this info. Example: ``` import pandas as pd x = pd.Series(['000001', '000001', '000002', '000002', '000002', '000003', '000003']) vc = x.value_counts() vc.index[vc == 2] # Index(['000003', '000001'], dtype='object') ``` Beware though of potential conversion of the original data into strings for the series index though. (If that is a problem, using something like `df.groupby('x',as_index=False).size()` may be a better option.)
How to count occurrences of values within specific range by row
You can apply a function to each row of the DataFrame with [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html) method. In the applied function, you can first transform the row into a boolean array using [between](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.between.html) method or with standard relational operators, and then count the `True` values of the boolean array with [sum](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.sum.html) method. ``` import pandas as pd df = pd.DataFrame({ 'id0': [1.71, 1.72, 1.72, 1.23, 1.71], 'id1': [6.99, 6.78, 6.01, 8.78, 6.43], 'id2': [3.11, 3.11, 4.99, 0.11, 2.88]}) def count_values_in_range(series, range_min, range_max): # "between" returns a boolean Series equivalent to left <= series <= right. # NA values will be treated as False. return series.between(left=range_min, right=range_max).sum() # Alternative approach: # return ((range_min <= series) & (series <= range_max)).sum() range_min, range_max = 1.72, 6.43 df["n_values_in_range"] = df.apply( func=lambda row: count_values_in_range(row, range_min, range_max), axis=1) print(df) ``` Resulting DataFrame: ``` id0 id1 id2 n_values_in_range 0 1.71 6.99 3.11 1 1 1.72 6.78 3.11 2 2 1.72 6.01 4.99 3 3 1.23 8.78 0.11 0 4 1.71 6.43 2.88 2 ```
111390
1
111396
null
2
443
I read a few articles stating that we need to add nonlinearity, but it wasn't clear why we need nonlinearity and why we can't use a linear activation function in the hidden layers. Kindly keep the math light; intuitive answers are preferred.
Why can't we use linear activation function in hidden layers?
CC BY-SA 4.0
null
2022-05-29T08:42:00.553
2022-05-31T13:17:55.880
2022-05-31T13:17:55.880
100269
132421
[ "machine-learning", "deep-learning", "neural-network" ]
If linearities are used in every layer, effectively we end up with a linear (regression) model which is restricted in what it can represent (only linear relationships). On the other hand the [universal approximation theorem](https://en.m.wikipedia.org/wiki/Universal_approximation_theorem) for neural networks assumes general (non-linear) functions which can in principle represent anything, including non-linear relationships. So it is not that it is forbidden, it is simply a poor use of the complexity of neural networks to function as linear models since there are already other linear models to do that that are much simpler to train. Thus NNs are used mainly for general non-linear tasks and thus have to include non-linearities in their design (if they are to function as such).
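A quick numerical check of this point (arbitrary made-up weights): stacking two layers with a linear (identity) activation is exactly the same map as one linear layer, so no expressiveness is gained.
```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))                   # a batch of 5 inputs with 3 features

# two "hidden layers" whose activation is the identity (i.e. linear)
W1, b1 = rng.normal(size=(3, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(4, 2)), rng.normal(size=2)
two_layers = (x @ W1 + b1) @ W2 + b2

# the exact same mapping expressed as a single linear layer
W, b = W1 @ W2, b1 @ W2 + b2
one_layer = x @ W + b

print(np.allclose(two_layers, one_layer))     # True: the stack collapsed to a linear model
```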
Why is the input to an activation function a linear combination of the input features?
The main reason is that a linear combination of the input followed by a non-linearity stacked on top of eachother is a [universal function approximator](https://en.wikipedia.org/wiki/Universal_approximation_theorem). Which means that no matter how complicated the true underlying function is, a neural network can approximate it to an arbitrarily small error. There's also the efficiency factor since a linear combination of $n$ inputs each having $m$ dimensions can be represented using a single matrix multiplication $h=X \times W$ where $X$ is an $n \times m$ matrix (where each row is an example and each column is a feature of that example) and $W$ is an $ m \times d $ weight matrix. And computers are [VERY efficient at doing matrix multiplications](https://en.wikipedia.org/wiki/Coppersmith%E2%80%93Winograd_algorithm). Thus, the more you build your model to use matrix multiplications the better.
111393
1
111403
null
5
298
We are a group of doctors trying to use the linguistic features of "Spacy", especially the part-of-speech tagging, to show relationships between medical concepts like 'Femoral artery pseudoaneurysm', as in ==> "femoral artery" ['Anatomical Location'] --> and "pseudoaneurysm" ['Pathology']. We are new to NLP and Spacy. Can someone with experience with NLP and Spacy explain whether this is a good approach to show these relationships in medical documents? If not, what are the alternative methods? Many thanks!
Spacy custom POS tagging for medical concepts
CC BY-SA 4.0
null
2022-05-29T10:33:29.063
2022-05-29T16:23:11.713
2022-05-29T11:25:34.837
136312
136312
[ "machine-learning", "nlp", "spacy" ]
Based on the example, it looks like you need more than simple POS tagging. Thankfully there is a full subdomain of NLP devoted to biomedical data, and there are many tools available which can help with this kind of task: - In case the data is made of biomedical research papers, you will find a lot of resources related to the Medline and PubMedCentral databases: UMLS and the tool MetaMap PubTator, a recent annotated version of the biomedical literature. SemRep for relations. - cTakes is another annotator system which is more specialized with clinical texts. - SciSpacy is a Spacy variant specialized for biomedical text. It can also annotate medical terms with UMLS labels. The last one in particular seems particularly appropriate in your case. biomedical text presents a lot of specific difficulties which cannot be handled with general domain models. Note that there are probably more tools and resources, this a very active domain. (disclaimer: I recycled a large part of an [older answer](https://datascience.stackexchange.com/a/90124/64377))
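As a starting point with the last option, a minimal SciSpacy sketch might look like the following (this assumes `scispacy` and its `en_core_sci_sm` model are installed; the example sentence is made up):
```
import spacy

# load a SciSpacy biomedical pipeline (assumes: pip install scispacy + the en_core_sci_sm model)
nlp = spacy.load("en_core_sci_sm")

doc = nlp("Femoral artery pseudoaneurysm was treated with ultrasound-guided compression.")

# biomedical entity spans detected by the model
for ent in doc.ents:
    print(ent.text)

# the usual POS/dependency annotations are still available if you need them
print([(tok.text, tok.pos_, tok.dep_) for tok in doc])
```
SciSpacy also ships an optional UMLS entity linker component that can be added on top of this pipeline if you need to type the detected spans (e.g. anatomical structure vs. pathologic function).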
How to replace words in a sentence with their POS tag generated with SpaCy efficiently?
You can not replace tags in a sentence because Python strings are immutable. You can make another string with just the tags: ``` >>> import spacy >>> nlp = spacy.load('en_core_web_sm') >>> doc = nlp("Apple is looking at buying U.K. startup for $1 billion") >>> " ".join(token.tag_ for token in doc) 'NNP VBZ VBG IN VBG NNP NN IN $ CD CD' ``` That example is based on the spaCy [documentation](https://spacy.io/usage/spacy-101#annotations-pos-deps). If the tokens are split by several different non-space delimiters (called "multiple infix tokenization" in SpaCy), you would have to track that and the code would be more complex.
111440
1
111767
null
2
47
I have a regression problem I'm trying to build a model for: Predicting sales per person (>= 0) depending on some variables. I'm running different model types and gave deep neural networks a try. The loss functions I'm using are mean squared error and mean absolute error (or sometimes a mix). I often run into this issue though, that despite mse and mae are being optimized, I end up with a very strong bias in the prediction, e.g. `sum(training_all_predictions) / sum(training_all_real) = 0.76`. --- Looking at this from a small example point of view, I can't blame the model: ``` real <- c(10, 30, 100) pred1 <- c(4, 14, 122) pred2 <- c(16, 46, 122) ## mean absolute error mean(abs(pred1 - real)) # 14.66667 mean(abs(pred2 - real)) # 14.66667 ## mean squared error mean((pred1 - real)^2) # 258.6667 mean((pred2 - real)^2) # 258.6667 ``` So from a model loss point of view, these are identical solutions. However, if I were to sum up multiple predictions, I would clearly prefer `pred1`: ``` sum(pred2) / sum(real) # 1.314286 sum(pred1) / sum(real) # 1 ``` So if I take the whole example, pred2 is off by 31%, while pred1 nails it. On a individual level both predictions are equal. All other common regression loss functions I found struggle from the same problem. (Using Keras: [https://keras.io/api/losses/](https://keras.io/api/losses/)) Questions: - Can I solve this with a custom loss functions? I tried (cumsum(y_pred) - cumsum(y_test))^2 but although I got a decline of this loss over epochs, I was even further off (~0.6). - Am I attacking my problem from the wrong angle? I could try to build a model on cohorts, but this just feels very off, as I would have to aggregate information and would introduce cohort size as another variable. Multiplying everything with a factor also sounds off, as this will likely heavily increase mse / mae again. Edit: Specified why pred1 is better than pred2. Edit2: Removed the reference to Estimator bias to avoid confusion. Edit3: Increased the numbers in the example to make it more obvious.
Loss function to prevent estimator bias
CC BY-SA 4.0
null
2022-05-31T06:01:25.243
2022-06-13T08:36:29.393
2022-05-31T21:19:11.550
136199
136199
[ "neural-network", "keras", "r", "regression", "bias" ]
Thank you @Nikos M. for your suggestions. I was about to use your post-applied factor but then gave it another try and found what caused this. It was that the final layer was using a `softplus` activation function. It sounded like a perfect fit to me for this regression problem, as I only had positive-valued outcomes. However, this seems to cause some trouble for my DNN, though I don't understand why. Anyway, that's a different topic. Using `relu` in the final layer gave me much better results and also made my initial problem here disappear. So the problem is solved. I would say the answer to my question is: if you see something so far off, you shouldn't look for a way to force it. You should debug your model instead.
Optimization of a custom loss function
You don't need to change the way optimizer works. You just need to define your loss function in some standard way. Checkout the answer in this post to have an idea on an example of customizing loss function in Keras. [https://stackoverflow.com/questions/45961428/make-a-custom-loss-function-in-keras](https://stackoverflow.com/questions/45961428/make-a-custom-loss-function-in-keras)
111456
1
111487
null
0
125
I trying to classify this data set ([https://www.kaggle.com/datasets/fedesoriano/stroke-prediction-dataset](https://www.kaggle.com/datasets/fedesoriano/stroke-prediction-dataset)) to classify if a patient is at risk for having a stroke. As the title says, whatever test I run to classify the patients, I keep running into the final results having too many false-positives or too many false-negative results. The data itself is severely imbalanced (95% 0s to 5% 1 (had a stroke)) and in spite of doing various things to try and balance it or compensate for it, I keep running into the same ends. For the record, yes, I have tried SMOTEing the training data set with no success. Furthermore, I've read a few articles against SMOTEing the test data set due to data leakage (e.g. [https://machinelearningmastery.com/data-leakage-machine-learning/](https://machinelearningmastery.com/data-leakage-machine-learning/) and [https://imbalanced-learn.org/stable/common_pitfalls.html#data-leakage](https://imbalanced-learn.org/stable/common_pitfalls.html#data-leakage)). Here are the codes I've been using. I'm using Python 3.10: ``` X = stroke_red.drop('stroke', axis=1) # Removes the "stroke" column. Y = stroke_red.stroke # We're storing the dependent variable here. ####### Pipelining ####### from sklearn.impute import SimpleImputer from sklearn.pipeline import Pipeline from sklearn.preprocessing import OneHotEncoder, StandardScaler from sklearn.compose import ColumnTransformer cat_pipe = Pipeline( steps=[ ("impute", SimpleImputer(strategy="most_frequent")), ("oh-encode", OneHotEncoder(handle_unknown='ignore', sparse=False)) ] ) num_pipe = Pipeline( steps=[ ("impute", SimpleImputer(strategy="mean")), ("scale",StandardScaler()) ] ) cont_cols = X.select_dtypes(include="number").columns cat_cols = X.select_dtypes(exclude="number").columns process = ColumnTransformer( transformers=[ ("numeric", num_pipe, cont_cols), ("categorical", cat_pipe, cat_cols) ] ) ####### Splitting the data into train/test ####### from sklearn.model_selection import train_test_split, cross_validate, GridSearchCV, StratifiedKFold #preprocessing. X_process = process.fit_transform(X) Y_process = SimpleImputer(strategy="most_frequent").fit_transform( Y.values.reshape(-1,1) ) X_train, X_test, Y_train, Y_test = train_test_split(X_process, Y_process, test_size=0.3, random_state=1111) # Splits data into train/test sections. Random_state = seed. ``` ``` from imblearn.over_sampling import SMOTENC from imblearn.pipeline import Pipeline from sklearn.linear_model import LogisticRegression sm = SMOTENC(categorical_features=[0,2,3], random_state=1111) X_train, Y_train = sm.fit_resample(X_train, Y_train) ``` Finally, the Extreme Gradient Boosting algorithm: ``` import xgboost as xgb boostah = xgb.XGBClassifier(objective='binary:logistic', n_estimators=100000, max_depth=5, learning_rate=0.000001, n_jobs=-1, scale_pos_weight=20 ) # scale_pos_weight is a weight. #0s / #1s . boostah.fit(X_train,Y_train) predict = boostah.predict(X_test) print('Accuracy = ', accuracy_score(predict, Y_test)) print("F1 Score = ", f1_score(Y_test, predict)) print(classification_report(Y_test, predict)) print(confusion_matrix(Y_test, predict)) ``` Here are the confusion matrix results. 
Bear in mind, I had the SMOTE section commented out when running this: ``` Accuracy = 0.6966731898238747 F1 Score = 0.2078364565587734 precision recall f1-score support 0 0.99 0.69 0.81 1459 1 0.12 0.82 0.21 74 accuracy 0.70 1533 macro avg 0.55 0.76 0.51 1533 weighted avg 0.95 0.70 0.78 1533 [[1007 452] [ 13 61]] ``` Here are the results with SMOTE on: ``` Accuracy = 0.39008480104370513 F1 Score = 0.13506012950971324 precision recall f1-score support 0 1.00 0.36 0.53 1459 1 0.07 0.99 0.14 74 accuracy 0.39 1533 macro avg 0.54 0.67 0.33 1533 weighted avg 0.95 0.39 0.51 1533 [[525 934] [ 1 73]] ``` Any tips on fixing this? If you need my complete code, let me know, and I'll get it to you.
Classification Produces too Many False Positives or False Negatives
CC BY-SA 4.0
null
2022-05-31T19:14:05.520
2022-06-02T06:39:00.300
null
null
136403
[ "machine-learning", "python", "classification" ]
Behind the scenes there is a confidence score (class probability) associated with most models. You can retrieve it using `model_name.predict_proba` instead of `model_name.predict`. By default, `predict` uses a 0.5 threshold, i.e. anything above a 0.5 probability is predicted to be in the positive class. All you have to do is alter that threshold and you can trade off performance between the two classes.
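A minimal sketch of that threshold sweep with scikit-learn; the synthetic imbalanced dataset and the logistic regression are stand-ins for the stroke data and the XGBoost classifier in the question, but the same `predict_proba` idea applies there:
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split

# synthetic imbalanced stand-in (~5% positives)
X, y = make_classification(n_samples=5000, n_features=10, weights=[0.95], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# probability of the positive (minority) class for each test sample
probs = model.predict_proba(X_test)[:, 1]

# sweep a few thresholds instead of the default 0.5
for threshold in (0.1, 0.25, 0.5):
    preds = (probs >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
    print(f"threshold={threshold}: TP={tp}, FP={fp}, FN={fn}, TN={tn}")
```
Lowering the threshold trades false negatives for false positives, and raising it does the opposite; pick the threshold on a validation set according to which error is more costly.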
Suspiciously low False Positive rate with Naive Bayes Classifier?
- I find the easiest way for people to understand this is to think of the confusion matrix. Accuracy score is just one measure of a confusion matrix, namely all the correct classifications over all the prediction data at large: $$\frac{True Positives + True Negatives}{True Positives + True Negatives + False Positives + False Negatives}$$ Your False Negative Rate is calculated by: $$\frac{False Negatives}{False Negatives + True Positives}$$ One model may turn out to have a worse accuracy, but a better False Negative Rate. For example, your model with worse accuracy may in fact have many False Positives but few False Negatives, leading to a lower False Negative Rate. You need to choose the model which produces the most value for your specific use case. - Why do some classifier perform poorly? While an experience practitioner might surmise what could be a good modeling approach for a dataset, the truth is that for all datasets, there is no free lunch... also known as "The Lack of A Priori Distinctions Between Learning Algorithms" You don't know ahead of time if the best approach will be deep learning, gradient boosting, linear approaches, or any other number of models you could build.
111475
1
111488
null
0
250
Can you help me understand this better? I need to detect anomalies, so I am trying to fit an LSTM model using validation_data, but the losses do not converge. Do they really need to converge? Should the validation data resemble the train data, the test data, or something in between? Also, which value should be lower, loss or val_loss? Thank you!
How to fit a model on validation_data?
CC BY-SA 4.0
null
2022-06-01T15:29:29.910
2022-06-02T07:01:19.570
null
null
135764
[ "keras", "regression", "lstm", "anomaly-detection" ]
When validating machine learning models, you have to use a validation procedure that is consistent with your problem. For an anomaly detection use-case, that means correctly splitting your data and evaluating your model with the right metrics. ## Split of the data You have to choose carefully how you split your data. By default, you should define three different sets: training, validation and test sets. The train-validation-test split is the most appropriate if the observations are truly independent and the notion of time is not important in your problem. It is the best one because the distribution of your training data should be similar to your validation and test datasets. Example 1: To detect anomalies in banking transactions, the observations are independent and time is not important. A train-validation-test split seems to be an appropriate choice. Example 2: To detect anomalous temperatures in a time series, time is an important variable because it might be possible to learn these anomalous temperatures from future data, which would then introduce a look-forward bias. In that situation, refer to sklearn [TimeSeriesSplit](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html). If you have few observations, you can also take a look at cross-validation. Because you are using LSTM models, which are designed for time series modeling, I guess you might be in the second configuration. ## Which loss to minimize? You always want to minimize the validation loss. The correct model selection would be as follows: - Select a set of models and features to optimize: - For each model: - Train the model on the train set. - Evaluate your model on the validation set. - Select your best model according to your validation set metrics. - Evaluate it one and only one time on the test set. Since you want to minimize the loss on the validation set, you don't especially need to converge on the training set. For example, in an overfitting situation, you can obtain a very low loss on the training set but a very high loss on the validation set. The test set metrics are your true compass. ## Which metrics to use? For an anomaly detection use-case, you have to carefully choose your metrics. For most use-cases, accuracy will be a very bad metric, as the distribution of your labels is imbalanced and the positive labels (the anomalies) are normally more important than the non-anomaly class. You have to select a metric that is appropriate with respect to the previous reasons and the problem you are trying to solve.
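A small sketch of what the time-aware split mentioned above looks like (the toy array is just a placeholder for ordered observations):
```
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(20).reshape(-1, 1)      # 20 ordered observations (placeholder data)
tscv = TimeSeriesSplit(n_splits=4)

for fold, (train_idx, val_idx) in enumerate(tscv.split(X)):
    # each fold validates only on data that comes strictly after its training block
    print(f"fold {fold}: train 0..{train_idx.max()}, validate {val_idx.min()}..{val_idx.max()}")
```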
Validation during fitting in Keras
If you don't define the validation set/ validation split for your model, then it has no way to check for it's performance because you have not provided anything to the model on which it can validate its performance. In this case, the model will run through training examples, will learn a way to minimize the cost function as per your code and that's it. You get a model which has learned a hypothesis from your data but how good the model is, it can only be checked by making predictions for the test set.
111557
1
111562
null
2
22
I am trying to create a regression model that predicts the box office success of a movie, with one of the explanatory variables being the actors who appear in the film. My problem is that I decided to do the first 4 billed actors, but in the model, it is taking it as 4 separate variables (Actor 1, Actor 2, Actor 3, Actor 4). For example, Jack Nicholson is the lead in "as good as it gets" so he would be Actor 1, but in "a few good men", he would be Actor 2, so the model doesn't recognize them as the same value for calculations. I want the model to treat Actor 1 the same as Actor 4 for the inputs so that the order the actors are assigned does not impact the output. So (Tom Cruise, Brad Pitt) would be treated the same as (Brad Pitt, Tom Cruise). Is there a model/method that I could use to solve this problem? If my problem isn't clear I can clarify any further questions.
Combine multiple duplicate categorical variables into a single one for multiple linear regression
CC BY-SA 4.0
null
2022-06-04T08:45:14.860
2022-07-14T15:33:16.293
null
null
136541
[ "machine-learning", "regression", "machine-learning-model" ]
The issue is just that you consider the list of actors as ordered, but if they are considered as an (unordered) set it works perfectly. The regular "bag of words" representation used in text can perfectly handle this, considering all the different actors as the distinct "words", i.e. the vocabulary. The principle is simple: every actor is assigned an index $i$, for example by sorting the actors alphabetically. Every movie (instance) has a set of actors (can be any number) represented as an array of boolean values, where the index $i$ is 1 if and only if actor $i$ is in the movie.
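A minimal sketch of this "bag of actors" encoding with scikit-learn (the two example casts are made up for illustration):
```
from sklearn.preprocessing import MultiLabelBinarizer

# each movie's cast as an unordered set, so (Tom Cruise, Jack Nicholson)
# and (Jack Nicholson, Tom Cruise) produce exactly the same row
casts = [
    {"Tom Cruise", "Jack Nicholson", "Demi Moore", "Kevin Bacon"},
    {"Jack Nicholson", "Helen Hunt", "Greg Kinnear", "Cuba Gooding Jr."},
]

mlb = MultiLabelBinarizer()
X_actors = mlb.fit_transform(casts)

print(mlb.classes_)   # one column per distinct actor
print(X_actors)       # 0/1 indicators, ready to join with the other regression features
```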
Multiple Features with the same categorical value
The increase in dimensionality due to 'No Internet Service' could be handled maybe with a 2 layer model. If 'No Internet Service' is true, you run the first level model with a handful of other variables (excluding those which depend on internet service availability). The second layer model runs only when internet service available, with all the variables.
111572
1
111643
null
2
113
I am working on a project which involves developing a machine learning/deep learning for an application in a [roll-to-roll industry](https://www.montalvo.com/article-library/roll-to-roll-processing-basics/). For a long time, I have been looking for similar problems as a way to get some guidance but I was never able to find anything related. Basically the problem can be seen as follows: - An industrial machine is producing a roll of some material, which tends to have visible defects throughout the roll. I have already available a machine learning algorithm capable of analyzing segments of the roll and classifying each segment as having defects or not, so the task it not detect the defects. - What I am actually developing is an algorithm that receives time-series inputs of the production, including the outputs (probabilities) of the machine learning vision model that classify the segments as having defects or not, and evaluates if the machine should stop or not at a specific instant, to avoid further generation of defects. - In many roll-to-roll = continuous production industries, unlike the industries where very 'isolated' parts are produced with very specific reject/don't reject quality criteria (e.g: car parts), you might not want to stop production at the sight of a single defect, but rather when groups of continuous defects start to ruin the production. So the problem is more about detecting those continuous defects by analyzing each timestep of information and be able to 'separate' those from the cases of just single defects. Hope that the description provides a little context in order to understand the purpose here. I am using an approach based on LSTMs and a sigmoid activation function. I am developing a custom loss function and modeling the learning problem labels based on regions of timesteps in which the machine should stop - it gives a classification at each timestep. Something like: ``` [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1] - the zeros represent timesteps where no stop should happen - the ones represent timesteps where at least a stop should happen = continuous defects ``` The NN should learn to not stop on the places with zeros and stop on the places with ones, by being fed different timestep inputs. There are some particularities of course but I believe this is a simple explanation that I hope can provide some insights. -> With this, I was curious to know if someone has ever worked on a problem that follows a similar 'logic' and direct me to similar ways of looking at the problem. Would also be interested in similar network architectures/configurations that would lead to a starting point. I am also very curious on any other contribution as a way to look at the problem. Really interested in hearing your perspectives!
Neural network / machine learning approach to model specific sequencing-classification problem in industry
CC BY-SA 4.0
null
2022-06-04T17:27:25.433
2022-06-08T12:59:52.797
2022-06-07T08:39:49.817
133013
133013
[ "machine-learning", "deep-learning", "classification", "lstm" ]
I would try to apply techniques from the "changing point problem" world. In this kind of problem, you try to identify times when the probability distribution of a stochastic process or time series changes. This is a classic problem with classic solutions, so maybe you don't need a neural network to solve it. In your particular case, you're interested in the online version, this is, you have to detect the change in the distribution in real-time. I leave here some sources I've found that may be interesting - Change detection in streaming data analytics: A comparison of Bayesian online and martingale approaches - Bayesian online change point detection — An intuitive understanding - Online change detection techniques in time series: An overview --- If you ask me, I would try a Bayesian approach, where you have a prior distribution on the defects rate and you update the distribution with incoming data. You could model the probability of receiving a defective segment as a beta distribution $B(x; \alpha, \beta)$, which has the nice property that parameters $\alpha$ and $\beta$ are updated as $$ \alpha_{t+1} = \alpha_t + s_t $$ $$ \beta_{t+1} = \beta_t + f_t $$ where $s_t$ is the number of segments without any defect at time $t$ and $\beta_t$ is the number of segments with defect at time $t$. Therefore, after observing $T$ segments you would have $$ P = B(x; \alpha_T, \beta_T) $$ And with this distribution, you can implement strategies like "Stop the production if the defect rate is higher than some threshold $t$ with a probability $>95$, ie: stop if $\int_{t}^1 B(x; \alpha_T, \beta_T) dx > 0.95$"
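A small sketch of that Bayesian updating idea (here I count defective segments in $\alpha$ and clean segments in $\beta$, so the Beta posterior is directly over the defect rate; the stream, prior, threshold and confidence level are all made-up illustration values):
```
from scipy.stats import beta

# hypothetical stream of segment labels: 1 = defective, 0 = clean
stream = [0, 0, 1, 0, 0, 0, 1, 1, 1, 1, 1]

a, b = 1.0, 1.0               # uniform Beta(1, 1) prior on the defect rate
threshold, confidence = 0.3, 0.95

for t, defective in enumerate(stream):
    a += defective            # defective segments update alpha
    b += 1 - defective        # clean segments update beta
    # posterior probability that the defect rate exceeds the threshold
    p_high = 1.0 - beta.cdf(threshold, a, b)
    decision = "STOP" if p_high > confidence else "continue"
    print(f"t={t}: P(rate > {threshold}) = {p_high:.2f} -> {decision}")
```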
Which machine learning model should I learn for this problem?
You should use a [Markov reward model](https://en.wikipedia.org/wiki/Markov_reward_model) to model your problem. All the possible words are the different states of your chain. The replacement process corresponds to the transitions of your Markov chain. After defining all the properties of your chain (states, transitions, rewards, ...), you can train your model and get the best strategy for each current word.
111574
1
111575
null
2
149
Do the below code snippets do the same thing? If not, what are the differences?
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, SelectFromModel

fs = RFE(estimator=RandomForestClassifier(), n_features_to_select=10)
fs.fit(X, y)
print(fs.support_)
```
```
fs = SelectFromModel(RandomForestClassifier(), max_features=10)
fs.fit(X, y)
print(fs.get_support())
```
```
fs = RandomForestClassifier()
fs.fit(X, y)
print(fs.feature_importances_[:10,])
```
What are the differences between the below feature selection methods?
CC BY-SA 4.0
null
2022-06-04T20:05:51.293
2022-06-05T15:41:58.937
2022-06-05T15:40:27.327
55122
136558
[ "machine-learning", "scikit-learn", "feature-selection" ]
They are not the same. As the name suggests, "recursive feature elimination" (`RFE`) recursively eliminates features, by fitting the model and throwing away the least-important one(s). After removing one feature, the next iteration may find the remaining features have changed order of importance. This is especially true in the presence of correlated features: they may split importance when included together, so might both be dropped by your second approach; but in RFE, one gets dropped at some point, but then the other one appears more important in the following iterations (since it no longer splits its importance with its now-dropped companion) and so is kept. Your third approach doesn't do any feature selection; it just prints the first (not top) feature importances (according to the model fitted on all features).
Select the best feature selection method for classification
I don't think that there is a single feature selection method that works best with a specific algorithm, what they do is selecting the best features based on various criteria. These features can be useful or not to the algorithm that does the classification, regardless what this algorithm is. Without knowing anything about your data or their distribution, you can simply try a lot of those methods to see which produces the best results, and see if these generalize with the test set. Also, SVM itself can be used for feature selection, since it finds the optimal coefficient for each feature. I don't know if you can access those coefficients through Weka (sorry, not familiar with the software), but if you could they can be an indicator of how important each feature is.
111591
1
111592
null
0
24
Some sources consider a test/train split, such as with sklearn, to be expected practice, and validation is more or less reserved for k-fold validation. However, Keras has a somewhat different approach with its validation_split parameter. Different sources report different things on the subject, some suggesting that this replaces test/train splitting, and it seems it should obviously not be confused with k-fold cross-validation. Can anyone confirm or clarify what is generally expected among keras users on the subject?
Is it good practice for Keras/TensorFlow users to rely on the validation set for testing?
CC BY-SA 4.0
null
2022-06-06T00:20:13.333
2022-06-06T00:52:40.743
null
null
136586
[ "machine-learning", "keras", "model-evaluations" ]
After some additional digging I came across this [issue](https://github.com/keras-team/keras/issues/1753) at the Keras source repository which seems to outline the usage and some of the confusion surrounding the nomenclature of Keras' validation set. According to this, it appears it is correct to say that the validation set is equivalent to a test set, and the naming reflects how it is used to help assess the training process itself during training.
Validation during fitting in Keras
If you don't define the validation set/ validation split for your model, then it has no way to check for it's performance because you have not provided anything to the model on which it can validate its performance. In this case, the model will run through training examples, will learn a way to minimize the cost function as per your code and that's it. You get a model which has learned a hypothesis from your data but how good the model is, it can only be checked by making predictions for the test set.
111620
1
111645
null
0
42
In order to achieve scalable and robust time series forecast models, I am currently experimenting with metalearner ensembles. Note, that I am also using a global modeling approach, so all time series are "learning" from another. In my example I want to predict the monthly demand for 12 retail products one year ahead (4 years of training data available) I also choose different datasets to test the following. As base models, I am fitting 6 XGBoost Models with 6 different learning rates from 0.001 to 0.65. The other parameters I do not tune (parsnip library defaults). For the metalearner I am also using an XGBoost Model with the following tuning grid: [](https://i.stack.imgur.com/coznb.png) Note, that I choose a very small range for the eta parameter! The grid search resulted in mtry = 16, min_n = 13, tree_depth = 7 and a learning rate of 0.262. Trees were set to 1000 and early stopping parameter was set to 50 iterations. However, my results are too good to be true (realistic) I guess. Below you can see the typical accuracy results from my predictions: [](https://i.stack.imgur.com/7pMj4.png) As you can see, the predicted line of the ex post (training forecast on out-of-sample test set) almost perfectly matches with the actual values. Also there is an almost perfect bias, variance tradeoff. This seems odd, because it should be really hard to achieve in real world problems. Now I am asking myself, if this approach is just exactly what I need, or if this
Why do I get an almost perfect fit as well as bias variance tradeoff with my time series forecast?
CC BY-SA 4.0
null
2022-06-07T14:15:20.960
2022-06-08T10:13:51.520
2022-06-07T15:06:20.713
126496
126496
[ "machine-learning", "xgboost", "accuracy", "ensemble-learning", "meta-learning" ]
Without more details, it seems to me that you have a data leak problem. How did you split the data in train/test? Notice that you're dealing with a time-series problem, so the standard random split wouldn't work. If you do a random split you would have a data leak, ie: your model would be trained with data from the future. In time-series problems, you need to do a split based on time, for example, use all the data from the first 3 years to train the model and then evaluate the model with the remaining data of the last year.
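A minimal pandas sketch of such a time-based split (the column names and the 4-year monthly panel are made up to mirror the setup described in the question):
```
import pandas as pd

# hypothetical monthly demand panel: 4 years x 2 products
dates = pd.date_range("2018-01-01", periods=48, freq="MS")
panel = pd.DataFrame({
    "date": list(dates) * 2,
    "product": ["A"] * 48 + ["B"] * 48,
    "demand": range(96),
})

cutoff = pd.Timestamp("2021-01-01")
train = panel[panel["date"] < cutoff]     # first three years only
test = panel[panel["date"] >= cutoff]     # the final year, strictly in the future
print(len(train), len(test))              # 72 rows vs 24 rows
```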
The Bias-Variance Trade-Off
### One way to look at this is through the idea of under-/overfitting First off, here is a sketch of the generally observed relationship between bias and variance, in the context of model size/comlpexity: [](https://i.stack.imgur.com/JIYOI.png) Say you have a model which is learning quite well, but your test accuracy seems to be pretty low: 80%. The model is essentially not doing a great job of mapping input features to outputs. We have a high bias. But for a wide variety of input (assuming a good test set), we consistently obtain this 20% error; we have a low variance. We are underfitting Now we decide to use a bigger model (e.g. a deep neural network), which is able to capture more details of the feature space and so maps inputs to outputs more accurately. We now have an improved test accuracy: 95%. At the same time, we notice that several runs of the model produce different results; sometime we have 4% error, and sometimes 6%. We have introduced a higher amount of variance. We are perhaps somewhere around the optimum model complexity shown on the graph above. You say ok... let's create a monolithic neural network. It totally nails training and ends with a perfect accuracy: 100%. However, the test accuracy now drops to 90%! So we have zero bias, but a large variance. We are overfitting. The model is almost as good as a look-up table for training data, but doesn't generalise at all when it sees new samples. Intuitively, that 10% error corresponds to a difference in distribution between the training and test sets used $\rightarrow$ the model knows the training distribution in extreme detail, some of which do not apply to the test set (i.e. the reality). In summary: The bias tends to decrease faster than the variance increases, because you can likely still make a more competitive model for your dataset; the model is underfitting. It is like the low-hanging fruit that you can easily get - so an incremental improvement on the red curve above gives a big decrease in bias (increase in performance). Obviously that pattern cannot go on indefinitely, with each increment in model complexity, you get a lower increase in performance; i.e. you have diminishing returns. Furthermore, as you begin to overfit, the model becomes less able to generalise and so exhibits larger errors on unseen data; variance is creeping in. --- For some more intuition between bias/variance in machine learning, I'd recommend [this talk by Andrew Ng](https://www.youtube.com/watch?v=F1ka6a13S9I). There is also [a text summary](https://github.com/thomasj02/DeepLearningProjectWorkflow) of the talk, for a quicker overview. For a brief but more mathematical explaination, head over to [this post of Cross-Validated](https://stats.stackexchange.com/questions/336433/bias-variance-tradeoff-math?rq=1). The second answer there is very recent and is perhaps better than the (old) accepted answer.
111631
1
111632
null
0
849
When training my model and reviewing the confusion matrix, I see columns that are completely zero for some specific categories. What does this mean? Is there an error, or how should I interpret it? I use the confusion matrix display function and it gives this result: [](https://i.stack.imgur.com/Jheaw.png) Thanks for your answers
What does a column of zeros in a confusion matrix mean?
CC BY-SA 4.0
null
2022-06-08T03:54:52.170
2022-06-10T01:19:01.367
2022-06-10T01:19:01.367
113067
136673
[ "machine-learning", "scikit-learn", "confusion-matrix" ]
If the plot is correct, it means the model never predicts labels 1 or 2: those columns are all zero because no instance is ever assigned to those classes by the model.
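A quick way to confirm this is to look at the distribution of the predicted labels directly (the array below is a stand-in for your own `model.predict(X_test)` output):
```
import numpy as np

# stand-in for your own predictions, e.g. y_pred = model.predict(X_test)
y_pred = np.array([0, 0, 3, 0, 4, 3, 0, 4])

labels, counts = np.unique(y_pred, return_counts=True)
print(dict(zip(labels, counts)))
# classes that never appear here (1 and 2 in this toy array) show up as
# all-zero columns in the confusion matrix, since they are never predicted
```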
Confusion regarding confusion matrix
> Question 1: Is my understanding and construction of the confusion matrix correct? Yes, you are correct in your definitions and the way you construct the confusion matrix. The links you have provided also agree with each other. They just switch rows and columns, since there is no hard rule regarding the presentation, as long as the correct relations are maintained. Link 1 shows this matrix: ``` | Pos Class | Neg Class Pos Pred | TP | FP Neg Pred | FN | TN ``` Link 2 shows the same matrix, but transposed: ``` | Pos Pred | Neg Pred Pos Class | TP | FN Neg Class | FP | TN ``` > Question 2: What is the intuitive difference between Precision and recall? Precision is the rate at which you are correct when you predict a positive class. It takes into account all of your positive predictions and figures out which proportion of those is actually correct. When your precision is high, this means that once you make a positive prediction, you are likely to be correct about it. This says nothing about how correct your negative predictions are -- you might make 1 positive and 99 negative predictions on 100 actual positives and still get 100% precision, since your only positive prediction just happened to be correct. Recall is the rate at which you are able to predict the positive class correctly. It takes into account all of the actual positive classes and figures out which proportion of those you have predicted correctly. When your recall is high, this means that very few actual positives slip by your model without being detected as such. This says nothing about how good you are at being actually correct with your positive predictions -- a model that always predicts a positive class easily achieves 100% recall. One usually strives to optimize both precision and recall by finding the most acceptable balance between the two. You might want to read this [article about the Precision-Recall curve](https://towardsdatascience.com/on-roc-and-precision-recall-curves-c23e9b63820c) to get a fuller understanding of the relationship between these metrics. > What happens if precision < recall? As you have highlighted in your post, the two formulas differ only in the denominator. It follows that when precision is smaller than recall, then the number of false positives in your predictions is larger than the number of false negatives.
111676
1
111677
null
0
213
I have a dataframe with X rows. For each row, I have the month (a value from 1 to 12) and the hour (a value from 1 to 24) in separate columns. I need to create a heatmap with seaborn in order to display the number of entries cross-tabulated by month and hour. I have not managed to do it. Any idea how I should proceed?
Seaborn Heatmap with month & hour of database entry
CC BY-SA 4.0
null
2022-06-09T12:45:16.273
2022-06-09T13:06:34.537
null
null
136739
[ "python", "dataframe", "seaborn", "heatmap" ]
```
import numpy as np
import pandas as pd
from numpy.random import RandomState
import seaborn as sns

# toy data with random month/hour columns
state = RandomState(0)
df = pd.DataFrame({"month": state.randint(1, 12, 20),
                   "hour": state.randint(1, 24, 20)})

# crosstab counts the number of entries for each month/hour combination
sns.heatmap(pd.crosstab(df["month"], df["hour"]),
            cmap="Reds", linewidths=1);
```
Note that `randint`'s upper bound is exclusive, so use `(1, 13, 20)` and `(1, 25, 20)` if you want the toy data to cover month 12 and hour 24; with your real dataframe you would simply pass your own `month` and `hour` columns to `pd.crosstab`.

[](https://i.stack.imgur.com/Ed1Vf.png)
seaborn heatmap not displaying correctly
Current version of matplotlib broke heatmaps. Downgrade the package to 3.1.0 `pip install matplotlib==3.1.0` [matplotlib/seaborn: first and last row cut in half of heatmap plot](https://stackoverflow.com/q/56942670/9214357)
111717
1
111724
null
3
40
As machine learning (in its various forms) grows ever more ubiquitous in the sciences, it becomes important to establish logical and systematic ways to interpret machine learning results. While modern ML techniques have shown themselves to be capable of competing with or even exceeding the accuracy of more "classical" techniques, the numerical result obtained in any data analysis is only half of the story. There are established and well-formalized (mathematically) ways to evaluate uncertainties in results obtained via classical methods. How are uncertainties evaluated in a machine learning result? For example, it is (at least notionally) relatively straightforward to estimate uncertainties for fit parameters in something like a classical regression analysis. I can make some measurements, fit them to some equation, estimate some physical parameter, and e.g. estimate its uncertainty with rules following from the Gaussian error approximation. How might one determine the uncertainty in the same parameter as estimated by some machine learning algorithm? I recognize that this likely differs with the specifics of the problem at hand and the algorithm used. Unfortunately, a simple Google search turns up mostly "hand-wavy" explanations, and I can't seem to find a sufficiently understandable scientific paper discussing an ML result with an in-depth discussion of uncertainty estimation.
How is uncertainty evaluated for results obtained via machine learning techniques?
CC BY-SA 4.0
null
2022-06-10T22:14:47.943
2022-06-11T23:02:58.483
null
null
103900
[ "machine-learning", "regression", "uncertainty" ]
Here are some of the main approaches I'm aware of. One method is to use Bayesian machine learning, which learns a probability distribution over the entire parameter space (see [Joris Baan's A Comprehensive Introduction to Bayesian Deep Learning](https://jorisbaan.nl/2021/03/02/introduction-to-bayesian-deep-learning.html)). However, these methods tend to be computationally expensive. For classification problems, the most common approach is to use a classifier that can output class probabilities (such as the cross-entropy loss). While this probability can be interpreted as uncertainty, it usually is not well calibrated. By calibrated we mean the model uncertainty reflects the prediction results. For example we would expect that 80% of samples that are predicted with >= 80% certainty are correctly classified. So a calibration step can be added after the classification step. For regression problems, a naïve approach is to train multiple models, either by bagging, or for deep learning models, using different weight initialisations. Then the variance of the predictions from each model can be interpreted as the uncertainty. For instance, we can use an ensemble, where we also use the mean of the predictions as the ensemble prediction. However, this gives over-confident estimations of uncertainty. For deep learning models, there are a couple of other approaches that I am aware of. The first is called Monte-Carlo drop-out ([Gal and Ghahramani's Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning](http://proceedings.mlr.press/v48/gal16.html?ref=https://githubhelp.com)), which can be applied to any deep learning model that uses dropout. This method uses the randomness from dropout to estimate variance or uncertainty in predictions and can be applied to both regression and classification models. The next is to change the loss function to the negative log likelihood (NLL) function. When used for regression, it provides an estimate of both the mean and variance. So models using this method have two outputs - one for the mean and the other for the variance. An early work on this is [Nix and Weigend's Estimating the mean and variance of the target probability distribution](https://ieeexplore.ieee.org/document/374138), which uses separate MLPs for the mean and variance. A more recent work ([Lakshminarayanan et al.'s Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles](http://arxiv.org/abs/1612.01474)) applies this technique to any neural network, and combines it with ensembling, which further improves the uncertainty estimates.
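To make the Monte-Carlo dropout idea concrete, here is a rough Keras sketch (the architecture and data are made up; the key point is calling the model with `training=True` so dropout stays active at prediction time):
```
import numpy as np
import tensorflow as tf

# toy regression model with dropout (purely illustrative)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(200, 10).astype("float32")
y = x.sum(axis=1, keepdims=True)
model.fit(x, y, epochs=5, verbose=0)

x_new = np.random.rand(5, 10).astype("float32")
# keep dropout active at inference time by passing training=True
samples = np.stack([model(x_new, training=True).numpy() for _ in range(100)])

mean_pred = samples.mean(axis=0)   # point prediction
uncertainty = samples.std(axis=0)  # spread across stochastic passes = uncertainty
print(mean_pred.ravel(), uncertainty.ravel())
```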
Measuring the uncertainty of predictions
As an alternative to the accepted answer, another way to estimate the uncertainty of a specific prediction is to combine the probabilities returned by the model for each class using a certain function. This is a common practice in "Active Learning", where given a trained model you select a subset of unlabelled instances to label (to augment the initial training dataset) based on some sort of uncertainty estimation. The three most common functions used (called sampling strategies [1] in the literature) are:

- Shannon entropy: you simply apply Shannon entropy to the probabilities returned by the model for each class. The higher the entropy, the higher the uncertainty.
- Least confident: you simply look at the highest probability returned by the model among all classes. Intuitively, the certainty level is lower for a test instance with a 'low' highest probability (e.g. [.6, .35, .05] --> .6) compared to a 'high' highest probability (e.g. [.9, .05, .05] --> .9).
- Margin sampling: you subtract the second-highest probability from the highest probability (e.g. [.6, .35, .05] --> .6-.35=.25). It is conceptually similar to the least confident strategy, but a bit more reliable since you're looking at the distance between two probabilities rather than a single raw value. Also, in this case, a small difference means a high uncertainty level.

Another, more interesting way to estimate the uncertainty level for a test instance, applicable to deep models with dropout layers, is deep active learning [2]. Basically, by leaving dropout active while doing predictions you can bootstrap a set of different outcomes (in terms of probabilities for each class) from which you can estimate the mean and variance. The variance, in this case, tells you how uncertain the model is about that instance.

Anyway, consider that these are just crude approximations; using a model that specifically estimates the uncertainty of a particular prediction, as suggested in the accepted answer, is surely the best option. Nevertheless, these estimations can be useful because they are potentially applicable to every model that returns probabilities (and there are also adaptations for models like SVM).

[1] [http://www.robotics.stanford.edu/~stong/papers/tong_thesis.pdf](http://www.robotics.stanford.edu/%7Estong/papers/tong_thesis.pdf)

[2] [https://arxiv.org/abs/1808.05697](https://arxiv.org/abs/1808.05697)
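For reference, all three strategies can be computed in a few lines of NumPy (the probability matrix below is made up for illustration):
```
import numpy as np

# each row = predicted class probabilities for one unlabelled instance
probs = np.array([[0.60, 0.35, 0.05],
                  [0.90, 0.05, 0.05],
                  [0.34, 0.33, 0.33]])

# Shannon entropy: higher value = more uncertain
entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)

# Least confident: lower top probability = more uncertain
least_confident = probs.max(axis=1)

# Margin sampling: smaller gap between the top two probabilities = more uncertain
sorted_probs = np.sort(probs, axis=1)
margin = sorted_probs[:, -1] - sorted_probs[:, -2]

print(entropy, least_confident, margin)
```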
111720
1
111734
null
0
453
Losely following [this](https://deeplizard.com/learn/video/FNqp4ZY0wDY) tutorial, I'm trying to apply Keras' ImageDataGenerator preprocessing on my custom object dataset. Here is the code: ``` import numpy as np import tensorflow as tf from tensorflow import keras from tensorflow.keras.layers import Dense, Activation from tensorflow.keras.optimizers import Adam from tensorflow.keras.metrics import categorical_crossentropy from tensorflow.keras.preprocessing.image import ImageDataGenerator from tensorflow.keras.preprocessing import image from tensorflow.keras.models import Model from tensorflow.keras.applications import imagenet_utils from sklearn.metrics import confusion_matrix import itertools import os import shutil import random import matplotlib.pyplot as plt %matplotlib inline os.chdir('/home/pc3/deep_object/') mobile = tf.keras.applications.mobilenet.MobileNet() cwd = os.getcwd() # Print the current working directory print("Current working directory to generate: {0}".format(cwd)) train_path = 'data/Object-samples/train' valid_path = 'data/Object-samples/valid' test_path = 'data/Object-samples/test' train_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory( directory=r'data/Object-samples/train', target_size=(224,224), batch_size=10) valid_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory( directory=valid_path, target_size=(224,224), batch_size=10) test_batches = ImageDataGenerator(preprocessing_function=tf.keras.applications.mobilenet.preprocess_input).flow_from_directory( directory=test_path, target_size=(224,224), batch_size=10, shuffle=False) ``` However I get 0 pictures, despite the fact that the folders are already filled with pictures. ``` Current working directory to generate: /home/pc3/deep_object Found 0 images belonging to 0 classes. Found 0 images belonging to 0 classes. Found 0 images belonging to 0 classes. ``` my directory structure is like this: ``` ~/deep_object$ tree -L 2 . ├── data │   ├── Object-samples │   ├── dogs-vs-cats │   └── MobileNet-samples ├── deeplizard_tutorial_side_effects_example.ipynb ├── Mobilenet-finetunning-my-dataset.ipynb ``` So I'm wondering what is wrong here?
Found 0 images belonging to 0 classes
CC BY-SA 4.0
null
2022-06-11T01:57:10.447
2022-06-12T04:28:55.350
null
null
136808
[ "keras", "preprocessing" ]
Tensorflow expects sub-directories for every single class that you have inside the primary directory. For example: inside `/Object-samples/` you'd have two sub-directories `/Object-samples/0/` and `/Object-samples/1/` which would contain images belonging to that class.
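For example, with two hypothetical classes the expected layout looks like this (rename the sub-directories to your actual class labels):
```
data/Object-samples/train/
├── class_a/          <- one sub-directory per class
│   ├── img_001.jpg
│   └── img_002.jpg
└── class_b/
    ├── img_101.jpg
    └── img_102.jpg
```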
Different number of images in classes
Frankly, even 50 images will not be sufficient if you are going to create and use a CNN model. If you think you want more images for you model training, then go for data augmentation. It is a process of transforming an image by a small amount (be it height, width, rotation etc or any combination of these). In this way, an image and its augmented image will differ slightly. You can find relevant article here- [https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced](https://medium.com/nanonets/how-to-use-deep-learning-when-you-have-limited-data-part-2-data-augmentation-c26971dc8ced) To answer the part that should there be same number of images in each class, there should be approximately same number. This problem is a general problem while working on classification task and there are several ways to deal with it, including simulating the data (augmentation). I would suggest that first create a separate test set, then on the remaining train set, use data augmentation and finally create the model. EDIT Using a pretrained convnet is also an option, as stated in a deep learning book- > A common and highly effective approach to deep learning on small image datasets is to use a pretrained network. A pretrained network is a saved network that was previously trained on a large dataset, typically on a large-scale image-classification task. If this original dataset is large enough and general enough, then the spatial hierarchy of features learned by the pretrained network can effectively act as a generic model of the visual world, and hence its features can prove useful for many different computer vision problems, even though these new problems may involve completely different classes than those of the original task. For instance, you might train a network on ImageNet (where classes are mostly animals and everyday objects) and then repurpose this trained network for something as remote as identifying furniture items in images.
111735
1
111766
null
1
100
I extracted event triples from sentences using OpenIE. Can I concatenate the components of an event triple to make it a sentence and use Sentence-BERT to embed it? It seems no one has done it this way before, so I am questioning my idea. I'm using news headlines to predict the next day's stock movement. For example, there are two news headlines. The first is "U.S. stock index futures points to higher start"; I used OpenIE to extract it and there are two event triples, [('U.S. stock index futures', 'points to', 'start'), ('U.S. stock index futures', 'points to', 'higher start')]. (There is repetition in the OpenIE-extracted event triples and I don't know how to avoid it.) Since it contains events I'm interested in (stock index), I will embed these two events and take their mean as the embedding. The second headline is "STOCKS NEWS US- Economic and earnings diary for Jan 4"; it contains no events as it only contains nouns. So I will embed it as a zero vector in this case.
Can I use Sentence-Bert to embed event triples?
CC BY-SA 4.0
null
2022-06-12T07:26:13.623
2022-06-13T07:30:59.753
2022-06-13T02:54:44.790
130605
130605
[ "nlp", "bert", "information-retrieval", "information-extraction" ]
Using triples could lead to wrong results because some headlines could contain double negations or other complex structures that are difficult to classify with triples. Instead, you can apply BERT sentiment analysis directly to the headlines, which can process complex semantics correctly. Here is an example using [BERT's Twitter RoBERTa sentiment analysis](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest?text=U.S.%20stock%20index%20futures%20points%20to%20higher%20start): [](https://i.stack.imgur.com/Sv6gS.png) Note: in this specific case neutral and positive have almost the same value, and you will want to set some threshold to consider a headline as positive, like positive > 0.4. It could also require some fine-tuning because tweets are a bit different from headlines.
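A minimal way to try this in Python (a sketch; it assumes the `transformers` library is installed and will download the model linked above on first use):
```
from transformers import pipeline

# model name taken from the Hugging Face link above
classifier = pipeline(
    "sentiment-analysis",
    model="cardiffnlp/twitter-roberta-base-sentiment-latest")

headline = "U.S. stock index futures points to higher start"
print(classifier(headline))
# e.g. [{'label': 'positive', 'score': ...}] - you would then threshold
# the score (say positive > 0.4) before using it as a signal
```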
BERT embedding layer
### Why are positional embeddings learned? This [was asked in the repo of the original implementation](https://github.com/google-research/bert/issues/58) without an answer. It didn't get an answer either in the [HuggingFace Transformers repo](https://github.com/huggingface/transformers/issues/5384) and in [cross-validated](https://stats.stackexchange.com/q/460161/40048), also without answer, or without much evidence. Given that in the [original Transformer paper](https://papers.nips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) the sinusoidal embedding were the default ones, I understand that during preliminary hyperparameter tuning, the authors of BERT decided to go with learned embeddings, deciding not to duplicate all experiments with both types of embeddings. We can, nevertheless, see some comparisons between learned and sinusoidal positional embedding in the ICLR'21 article [On Position Embeddings in BERT](https://openreview.net/forum?id=onxoVA9FxMw), where the authors observe that: > The fully-learnable absolute PE performs better in classification, while relative PEs perform better in span prediction. ### How does the model handle multiple sentence segments? This is best understood with the figure of the original BERT paper: [](https://i.stack.imgur.com/XPWhh.png) The two sentences are encoded into three sequences of the same length: - Sequence of subword tokens: the sentence tokens are concatenated into a single sequence, separating them with a [SEP] token. This sequence is embedded with the subword token embedding table; you can see the tokens here. - Sequence of positional embedding: sequentially increasing positions form the initial position of the [CLS] token to the position of the second [SEP] token. This sequence is embedded with the positional embedding table, which has 512 elements. - Sequence of segment embeddings: as many EA tokens as the token length of the first sentence (with [CLS] and [SEP]) followed by as many EB tokens as the token length of the second sentence (with the [SEP]). This sequence is embedded with the segment embedding table, with has 2 elements. After embedding the three sequences with their respective embedding tables, we have 3 vector sequences, which are added together and used as input to the self-attention layers. ### Are the weights in these embedding layers adjusted when fine-tuning the model to a downstream task? Yes, they are. Normally, all parameters are fine-tuned when fine-tunine a BERT-based model. Nevertheless, it is also possible to simply use BERT's representations as input to a classification model, without fine-tuning BERT at all. In [this article](https://www.aclweb.org/anthology/W19-4302.pdf) you can see how these two approaches compare. In general, for BERT, you obtain better results by fine-tuning the whole model.
111740
1
111758
null
1
270
I'm solving a classification task on a time-series dataset. I use a Transformer encoder with learned positional encoding in the form of a matrix of shape $\mathbb{R}^{seq \times embedding}$. Naturally, this leads to the fact that the sequence length that the model can process becomes fixed. I had an idea to do learned positional encoding with LSTM. I.e., we project a sequence of tokens with a linear layer onto an embedding dimension, then feed the embeddings to LSTM layer and then add hidden states to the embedding. $x = MLP(x)$ $x = x + LSTM(x)$ Do you think this will have the right effect? Are there any things to consider?
LSTM as learned positional encoding for variable sequence length input
CC BY-SA 4.0
null
2022-06-12T12:25:39.983
2022-06-12T21:36:59.047
null
null
125836
[ "machine-learning", "time-series", "lstm", "transformer", "sequence" ]
At first sight, it should have the right effect. However, [LSTM has limits](https://datascience.stackexchange.com/questions/27392/so-whats-the-catch-with-lstm) and it cannot properly process time series of arbitrary length. For instance, if the inputs are too short (sequence length < 20), a classic RNN might even be better than an LSTM (which is typically used for sequence lengths of roughly 250-500), depending also on the variability of your data. That's why the different inputs should be comparable in length to get a good prediction, and data scaling could therefore be necessary. I suggest you study the [LSTM paper](https://www.researchgate.net/publication/13853244_Long_Short-term_Memory) to get a good understanding of its limits and of how the data is processed. Otherwise, I would need more information to be more specific about potential solutions. Note: time series classification can also be done with [dimensionality reduction algorithms](https://datascience.stackexchange.com/questions/106987/can-t-sne-be-applied-to-visualize-time-series-datasets).
Input sequence ordering for LSTM network
In order to make this decision, you have to think about what you want the representation to be passed to the next layer (or network output) to represent. If you want the representation at (after) $t = 0$ to be passed, you should pass the arrays in the $X = [x_{-n} .. x_0]$ order. The LSTM cell will form a representation of the sequence, then, at $t = 0$. In the reverse case, the representation will indicate the state at (before) $t = -n$ That's not to say that this is always a simple question for LSTM design- frequently in some domains such as NLP (text modeling), a bidirectional LSTM is used. This, basically, means both representations are implemented and used.
111747
1
112227
null
3
165
Warning: I understand that my question may seem strange, stupid, and impossible, but let's just think about this interesting problem. I would not ask a question like "how to create an AGI in Google Colab". This is a real problem and I believe it is possible to solve it. My question may seem strange because I have little experience and maybe I have stated something incorrectly, but I assure you this is not complete nonsense. My actual task is much harder than the task below, so to simplify the question I have simplified the problem.

I have an RL task: my environment is Python, the agent is a usual RL agent (it takes actions like other RL agents), but I have no list of actions. The goal is writing the fastest Python code for sorting. The policy net (the network which returns an action) returns me a sorting string (something like: "[list1.pop(list1.index(min(list1))) for i in range(len(list1))]"); I execute it through "eval", get the execution time and use this time to form the reward. But this task is easier; in my real task I have some variables and functions which the model can use when it produces sorting strings. In our case these can be: "list_1", "some_function_which_helps_to_sort_list1_faster".

That's how I'm going to get sorting strings: I know for sure I need a code model. When I was looking for one I found [GPT-J](https://github.com/kingoflolz/mesh-transformer-jax/#gpt-j-6b). GPT-J is a usual Transformer decoder-only model. First of all I create random initial (constant) noise. The policy net also produces noise. At first, the noise from the policy net is random, but over time the model will be trained better and the noise that the policy net produces will already be meaningful and will help to get normal sorting strings. I add the first initial noise to the noise which I got from the policy net, pass it through GPT-J and finally get a sorting string. I am going to train the model with many different initial noises, because logically, if the initial noises are different, the model will: 1) be trained better, 2) produce new "fastest" results. The entire approach looks like CLIP-guided diffusion and I'm going to train it with PPO. As you remember, I have some variables that have to be in the sorting strings. Therefore, there is a question: "How do I make the policy net add these variables into the sorting strings?". I believe reward shaping will help to solve it. How the reward will be formed: if the policy net returns a valid sorting string (which is valid Python code and contains the minimal set of variables I need (at least "list1") to pass it through eval without errors) but it is slower than the previous best sorting string, the reward will be tiny (0.1). If the policy net returns a valid sorting string which is faster than the previous best sorting string, the reward will be huge (1). If the policy net returns an invalid sorting string (which is not valid Python code or doesn't contain the minimal set of variables), the reward will be negative (-1). That's how I'm going to train the model.

Below is how I'm going to use the model at inference time: first of all, set the initial noise. Then do the same as in the training loop, but don't save the weights (the weights will be updated according to PPO and all the steps described in "That's how I'm going to get sorting strings" will be executed, but when I get the result from the final iteration, I won't save the new weights obtained at inference time; and if I need to surpass the previous best result, I will run the inference loop with a new initial noise until I surpass it). What does "result from the final iteration" mean here? It's exactly like in CLIP-guided diffusion. I set some variable n_steps. For example, it will be equal to 1000. Here I make 1000 calls to the policy net and update the policy weights 1000 times (if it's training time; at inference time I also update the weights but keep them in RAM and don't save them)... And when I get the final result at the 1000th iteration, that is what I mean by the result from the final iteration. Question: Is my approach to implementing this problem right? How would you implement my problem? If you have some helpful tips for me (maybe some links which will help me, maybe I formed the reward wrongly...; here I mean anything which might be helpful for me), don't hesitate to share them with me.
Manipulating noise to get data in the right format and applying it to a task using PPO
CC BY-SA 4.0
null
2022-06-12T18:21:19.080
2022-07-01T13:01:58.217
2022-06-12T18:34:54.293
123438
123438
[ "machine-learning", "python", "nlp", "reinforcement-learning", "transformer" ]
In terms of process optimization, RL is an excellent option, but the environment definition and its policy could be difficult to implement. That's why a genetic algorithm is a good alternative, as it explores thousands of possibilities without having to define a complex environment or policy, especially if the environment is a conceptual one. I don't know your process, but you can list all its potential sub-functions and assign them numeric weights (with any range), and the genetic algorithm would explore thousands of possibilities by modifying each weight randomly. The result might not be as good as with RL, but it can be much better than a human's thanks to raw compute power. PyGAD is a Python library for genetic algorithms that can be applied to many cases: [https://pygad.readthedocs.io/en/latest/](https://pygad.readthedocs.io/en/latest/) [https://blog.paperspace.com/genetic-algorithm-applications-using-pygad/](https://blog.paperspace.com/genetic-algorithm-applications-using-pygad/) Otherwise, here is code to implement a GA from scratch: [https://machinelearningmastery.com/simple-genetic-algorithm-from-scratch-in-python/](https://machinelearningmastery.com/simple-genetic-algorithm-from-scratch-in-python/)
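To give an idea of the from-scratch version, here is a toy sketch (the fitness function below is a stand-in; in your case it would run the process with the candidate weights and return its score):
```
import numpy as np

rng = np.random.default_rng(0)
n_genes, pop_size, n_generations = 5, 30, 50
target = np.array([0.2, -1.0, 3.0, 0.5, 2.0])   # arbitrary optimum for the demo

def fitness(individual):
    # stand-in: in a real setting this would evaluate your process
    return -np.sum((individual - target) ** 2)

population = rng.uniform(-5, 5, size=(pop_size, n_genes))
for _ in range(n_generations):
    scores = np.array([fitness(ind) for ind in population])
    parents = population[np.argsort(scores)[-pop_size // 2:]]   # keep the best half
    children = parents[rng.integers(len(parents), size=pop_size - len(parents))]
    children = children + rng.normal(0, 0.3, size=children.shape)  # random mutation
    population = np.vstack([parents, children])

best = population[np.argmax([fitness(ind) for ind in population])]
print(best)
```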
regression with noisy target vairable
It depends how much noise: - If it's only a little noise, say for instance 2% of the target values are off by a small value, then you can safely ignore it since the regression method will rely on the most frequent patterns anyway. - If it's a lot of noise, like 50% of the target values are totally random, then unless you can detect and remove the noisy instances you can forget it: the dataset is useless. In general ML algorithms are based on statistical principles, to some extent their job is to avoid the noise and focus on the regular patterns. But there are two things to pay attention to: - Is the noise truly random, or does it introduce some biases in the data? The latter is a much more serious issue. - Noisy data is even more likely to cause overfitting, so extra precaution should be taken against it: depending on the data, it might be necessary to reduce the number of features and/or the complexity of the model.
111763
1
111769
null
0
217
I have created a string array and called a method with that array along with train and test data. The purpose of the method is to find the K-fold results of each algorithm specified in the array. Everything works fine except that in cross_val_score(model,X,y) the model variable is treated as a simple string instead of a callable model. If I put it like this, cross_val_score(RandomForestClassifier(),X,y), it works perfectly fine. Now, how can I convert the string model to a callable model? I am new to ML and I may not be able to make you understand the problem properly. Please let me know if you have any questions. Thank you.
```
strarray = ['RandomForestClassifier()','LogisticRegression()','SVC()']

def checkall(array,X,y,Kfold):
    for model in strarray:
        values = cross_val_score(model,X,y)
        print(values)
checkall(strarray,X,y,5)
```
How to convert a string to callable
CC BY-SA 4.0
null
2022-06-13T06:33:20.583
2022-06-13T08:50:15.293
null
null
136871
[ "machine-learning", "machine-learning-model", "cross-validation" ]
You can simply change the values stored in the list so that it holds the initialized model objects themselves rather than strings; `cross_val_score` then receives estimators it can call `fit` on. While doing that, also iterate over the function's `array` parameter (your original loops over the global `strarray`) and pass the `Kfold` argument as `cv`:
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# store the initialized estimator objects, not strings
models = [RandomForestClassifier(), LogisticRegression(), SVC()]

def checkall(array, X, y, Kfold):
    for model in array:
        values = cross_val_score(model, X, y, cv=Kfold)
        print(type(model).__name__, values)

checkall(models, X, y, 5)
```
Converting String Data to Numeric data
Since you tagged `scikit-learn` , then you can use its function `preprocessing.LabelEncoder()` to convert categories to numerical values. And yes, this is a good practice. ``` from sklearn import preprocessing label_encoder = preprocessing.LabelEncoder() label_encoder.fit(my_dataframe["status"]) ```
111798
1
111812
null
0
66
I need some feedback on a problem I have been working on. I am working with a fairly balanced dataset with all categorical features and a categorical outcome (a classification problem). The data has no continuous numerical features. To predict my outcome on the test set I am using the XGBoost algorithm. Since I have all categorical predictors I am using one-hot encoding to handle my categorical features. Now I am a bit worried that I might be missing something in the process, so I wanted to check: if I have all categorical features with a binary outcome, is this a valid approach? I don't see any other way to deal with this problem. FYI the categorical variables are not things like ZIP codes or IDs; they are actually relevant to the outcome, e.g. smoker (yes/no) | high bp (yes/no). What do you think?
All Categorical data
CC BY-SA 4.0
null
2022-06-13T23:29:42.890
2022-07-22T16:25:36.240
2022-07-22T16:25:36.240
83275
136893
[ "classification", "python-3.x", "one-hot-encoding", "one-class-classification" ]
I don't see any problem doing classification with purely categorical features, as long as the features are relevant. And as always, some precautions when dealing with categorical features:

- The choice of model. Some models can handle categorical features off-the-shelf (e.g. tree-based algorithms), and some are specifically designed for them (e.g. CatBoost). These models may ease your feature engineering work, and probably give better accuracy.
- Cardinality. Sometimes a categorical feature can take a lot of values (plus unknown/unseen ones), which can be a problem. You should think ahead about what to do in these cases.
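For the one-hot encoding step itself, a small pandas sketch (the column names are made up to mirror your smoker / high bp example):
```
import pandas as pd

# toy all-categorical dataset (hypothetical columns)
df = pd.DataFrame({
    "smoker":  ["yes", "no", "yes", "no"],
    "high_bp": ["no", "no", "yes", "yes"],
    "outcome": [1, 0, 1, 0],
})

X = pd.get_dummies(df.drop(columns="outcome"))  # one column per category level
y = df["outcome"]
print(X.head())
# X can now be fed to XGBoost; recent XGBoost versions can also accept
# pandas 'category' columns directly via enable_categorical=True
```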
How to deal with multiple categorical data set
Simply yes. Before that you may want to check how correlated those features are, so you can simply deselect redundant features, but in general you are right. Starting with one-hot encoding is a good choice. What may need more inspection later, is the number of different regions. Then you come up with many sparse features for which you need to reduce dimensionality. If you provide more information on the whole project I can provide more insight.
111821
1
111829
null
0
996
I am joining two data sets on a column which has duplicated values in both datasets. Is it better practice to remove the duplicates and make the values I am joining on a primary key in both datasets before joining the two, or is it okay to first merge the two data sets, then make the joined column the primary key using something like `.groupby()`? E.g.:
```
A = pd.DataFrame({'KEY' : ['abc', 'abc', '123', 'wyz'],
                  'WEIGHT' : [5, 7, 13, 10] })

B = pd.DataFrame({'KEY': ['abc', '123', '123', 'def'],
                  'TITLE' : ['cat', 'dog', 'dog', 'elephant'] })

# join first then clean
C = pd.merge(A,B, how='inner', on='KEY')
C = C.groupby('KEY', as_index=False).agg(funcs) # mean for WEIGHT, first for TITLE

# versus clean then joining
A = A.groupby('KEY', as_index=False).mean()
B = B.groupby('KEY', as_index=False).first()
C = pd.merge(A,B, how='inner', on='KEY')
```
Joining on columns with duplicate values - clean before merging or after merging?
CC BY-SA 4.0
null
2022-06-14T17:10:18.037
2022-06-14T21:16:34.197
null
null
127671
[ "dataset", "data-cleaning" ]
With small datasets it doesn't matter, but for large datasets it is always better to remove duplicates before joining, just for efficiency. There is usually an increase in CPU time when you are joining larger datasets with duplicates, and this is magnified for very large datasets. But, in the opposite sense, sometimes joining without first removing the duplicates also helps with identifying join problems, if the resulting output does NOT contain EXACT duplicates. E.g. sometimes a row contains a column you may not be interested in, which is revealed AFTER you do the join and can thus generate additional rows. I have discovered hidden variables in some of my data which I didn't realize changed, by seeing duplicates in the output. That can help with refining your join by including (or eliminating) the column, and can help your model. In practice we usually join on 1 or 2 keys, so it is always a good idea to do a count of primary keys in the input and output data to make sure you are getting what you want.
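A quick sanity check along those lines, reusing the A and B frames from your question:
```
import pandas as pd

A = pd.DataFrame({'KEY': ['abc', 'abc', '123', 'wyz'], 'WEIGHT': [5, 7, 13, 10]})
B = pd.DataFrame({'KEY': ['abc', '123', '123', 'def'], 'TITLE': ['cat', 'dog', 'dog', 'elephant']})

C = pd.merge(A, B, how='inner', on='KEY')

# compare unique keys vs. row counts to spot duplicate-driven row fan-out
print(A['KEY'].nunique(), len(A))   # 3 unique keys, 4 rows
print(B['KEY'].nunique(), len(B))   # 3 unique keys, 4 rows
print(C['KEY'].nunique(), len(C))   # only 2 unique keys, yet 4 rows in the join
```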
Cleaning data with two fields mixed in the same column?
I suggest adding a company name for each corresponding month. See the attached picture. The formula for the first column determined if it is for a month or for the company name. Assuming that all your months are in the three-letter format and there is no company named 'May' or 'Sep', the formula for the cell B2 would be ``` =SUMPRODUCT(--(A2={"Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"}))>0 ``` The formula for C2 is ``` =INDEX($A$2:$A$13,MATCH(2,1 /($B$2:B2=FALSE))+1) ``` Please refer to this page ([https://www.get-digital-help.com/index-match-last-value/](https://www.get-digital-help.com/index-match-last-value/)) for explanations for this formula. Finally, you can filter the third column in my example to keep months for the specific company only. They will be in the order needed. [](https://i.stack.imgur.com/DmwyC.jpg)
111853
1
111874
null
1
96
I am doing deep learning binary classification on some data and got very weird results with the accuracy metric. In the first few epochs, it doesn't change at all but then it goes on this weird linear path. I have attached a picture below. Can someone tell me what this means since I am new to machine learning and I am used to nice logarithmic graphs? [](https://i.stack.imgur.com/JxF8e.png)
Having weird accuracy graph on deep learning binary classification model
CC BY-SA 4.0
null
2022-06-15T13:34:28.700
2022-06-16T08:03:53.530
null
null
119228
[ "deep-learning", "classification", "keras", "data-analysis" ]
Assuming that the dataset is balanced, my intuition is the following:

From epoch 1 to 55: the loss function being super high indicates your model is making essentially random predictions, but with probabilities near 0 or 1. That is, it randomly assigns each example a probability near 0 or 1. The log-loss formula is $$ \mathcal{L}(y_i, p_i) = - \left[y_i \log p_i + (1-y_i) \log(1-p_i) \right] $$ If the real label is $y_i=0$ and your wrong prediction is $p_i \to 1$, then the loss function is $\mathcal{L}(y_i, p_i) \to \infty$. Also, the random predictions explain the 50/50 accuracy (assuming a balanced dataset). During these epochs, your model is not learning, but just calibrating the predicted probabilities to be in a more reasonable range.

From epoch 55 to 300: after epoch 55 it seems that your model starts to learn. This is also reflected in your accuracy plot, where the accuracy starts to improve. In your loss plot, it seems the loss is not changing, but this may be an illusion. Try changing the y-range to `(0, 1)` and I guess you'll see your loss decreasing.

My recommendation is to be careful with the way you initialize your network, since the initial weights have a lot of impact on the learning process. There are a lot of resources about this topic, like this [one](https://machinelearningmastery.com/weight-initialization-for-deep-learning-neural-networks/).
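For instance, in Keras the initializers can be set explicitly per layer (a small sketch; the layer sizes are arbitrary and not taken from your model):
```
import tensorflow as tf

# hypothetical binary classifier showing explicit weight initializers
model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu",
                          kernel_initializer="he_normal"),     # common choice for ReLU layers
    tf.keras.layers.Dense(1, activation="sigmoid",
                          kernel_initializer="glorot_uniform"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```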
Using deep-learning on graph data for binary classification
Feeding arbitrary graphs as inputs to any current general purpose ML algorithm, unless your problem and graphs are very specific (e.g., all graphs on a handful of nodes, or the size of your training set is of the same scale as the number of possible graphs of that size, or there is some very simple dependency - e.g. output is determined by the presence of some particular edge, etc) seems a rather pointless approach. You can encode all NP-complete problems or e.g. the halting problem by a graph or a directed graph and few other inputs, and a 0/1 label. One of very few successful ML algorithms that applied neural networks for a combinatorial problem was AlphaGo/AlphaZero, but it relied heavily on some specific properties of the game, the possibility to generate infinite amount of training data via self-play and enormous resources. What you did so far (trying to construct features based on your graphs and your particular problem) makes much more sense in practice, I would explore this path further. There is also some recent literature that tries to assign graph nodes vectors of numbers, or "node embeddings", but this might work better for a specific type of graphs (sparse networks, where some additional data is available per node).
111860
1
111861
null
2
314
Disclaimer: I am almost a complete novice when it comes to tensorflow, keras, coding in general, and neural networks/data science. While reading papers on novel architectures for neural nets, I see diagrams and such describing their ideas, and then they present their results, with no code shown. While learning to apply neural nets, I just load in the data, build a model by stacking layers like LSTM, Dense, etc, and training it. In other words I can't see how to do anything that isn't straight out of the box. What tools, libraries, etc, do researchers use to implement these architectures when the layers aren't as simple as `model.add(Dense(1))` For example, how might we implement the SeqMO algorithm described in this paper? [https://arxiv.org/pdf/1806.05357.pdf](https://arxiv.org/pdf/1806.05357.pdf)
How do researchers actually code novel architectures and layers?
CC BY-SA 4.0
null
2022-06-15T18:35:59.000
2022-06-15T19:10:40.957
null
null
136959
[ "deep-learning", "neural-network", "time-series", "research" ]
In this particular case, I don't know how are they implementing these complex layers, but in Keras/TensorFlow you can define your own layers by inheriting from `tf.keras.layers.Layer`. For example, you could define a custom dense layer as (example from the [documentation](https://www.tensorflow.org/tutorials/customization/custom_layers)) ``` class MyDenseLayer(tf.keras.layers.Layer): def __init__(self, num_outputs): super(MyDenseLayer, self).__init__() self.num_outputs = num_outputs def build(self, input_shape): self.kernel = self.add_weight("kernel", shape=[int(input_shape[-1]), self.num_outputs]) def call(self, inputs): return tf.matmul(inputs, self.kernel) ``` If you're interested in this topic I recommend you [paperswithcode.com](https://paperswithcode.com/), where you can find a lot of code implementations of research papers. Hope it helps :) --- EDIT: Apparently, for your particular example the code is available [here](https://github.com/igfox/multi-output-glucose-forecasting), but they're using PyTorch, not Keras.
Flow of machine learning model including code
The purpose of a machine learning model is to make predictions on real-world data that isn’t known at model training time. As such, it’s best practice to always do a train-test split at the very beginning of any project, and only use the training data for training the model. The test data should not be used at all until your model is fully trained. To add to this, when tuning the model’s hyperparameters there is an additional subset of the training data used for validation, which is not used for training but for evaluating performance during training. You create train-test-splits of your input data, run through all of your models, and use your aggregate cross-validation score to choose one or two models to concentrate on improving. Based on your results, it looks like logistic regression is getting the highest score, and is probably a good fit for this type of problem – predicting whether an instance of the data is a member of the target or not (“stroke” or “not stroke”). Once this is done, you can tune your model’s hyperparameters (using GridSearch like you’re doing for example) to determine the best parameters for things like regularization (the “C” parameter). Then, and only then, when you have selected your model, tuned the hyperparameters, and trained on your training data only, then you evaluate performance on your test data. For the evaluation, it’s good to understand the performance of your model and what that represents, that’s what your metrics at the end are for. Precision is percentage of true positives over true positives and false positives, and recall is true positives over true positives plus false negatives. F1 score is the harmonic mean of these two values, ROC is the performance of the model at different classification thresholds. If the purpose of the model is to predict strokes, do you want a higher precision which would mean you detect more potential strokes at the risk of higher false positives? Or a higher recall which would mean all the instances classified as high risk of stroke are more likely to be high risk of stroke but at the cost of potentially missing some? Hth,
111891
1
111901
null
1
682
I loaded this BertTokenizer previously, but now it is showing, I have to make sure I don't have a local directory. In my kaggle kernel, I don't have this local directory. How to solve it? ``` class Config: DEVICE = "cuda" if torch.cuda.is_available() else "cpu" LR = 2e-5 TRAIN_BATCH_SIZE = 16 TEST_BATCH_SIZE = 8 EPOCHS = 10 N_FOLD = 5 TOKENIZER = BertTokenizer.from_pretrained('bert-large-uncased', do_lower_case=True) CLASSES = 3 MAX_LEN = 200 TRAIN_CSV = ".csv" TEST_CSV = "test.csv" API = "#" PROJECT_NAME = "bert-base2" MODEL_NAME = "bert-large-uncased" ``` ``` OSError: Can't load tokenizer for 'bert-large-uncased'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'bert-large-uncased' is the correct path to a directory containing all relevant files for a BertTokenizer tokenizer. ```
BertTokenizer Loading Problem
CC BY-SA 4.0
null
2022-06-16T17:53:57.833
2022-06-17T07:37:16.613
null
null
135448
[ "deep-learning", "nlp", "pytorch", "bert" ]
It could be due to an internet connection issue; that's why it is always safer to download your model to a local folder first and then load it directly using the absolute path. In addition, note that bert-large is about 2 GB. To download it, you can use this code:
```
git lfs install
git clone https://huggingface.co/bert-large-uncased
```
See also: [https://huggingface.co/bert-large-uncased](https://huggingface.co/bert-large-uncased)
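Once cloned, point the tokenizer at the local folder instead of the hub name (the path below is a placeholder):
```
from transformers import BertTokenizer

# path to the folder created by the git clone above (placeholder path)
TOKENIZER = BertTokenizer.from_pretrained("/path/to/bert-large-uncased",
                                          do_lower_case=True)
```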
Error to load a pre-trained BERT model
The number of classes is something you have to define yourself depending on the problem you're working with. In the blogpost you've linked you see that they refer to a variable called `schema`, which is defined in in the previous blogpost to the one you've linked as follows: `schema = ['_'] + sorted({tag for sentence in samples for _, tag in sentence})`. This also refers to a variable called `samples`, which is defined as `samples = train_samples + val_samples`. Combining these pieces of code the correct preprocessing pipeline would be as follows: ``` def load_data(filename: str): with open(filename, 'r') as file: lines = [line[:-1].split() for line in file] samples, start = [], 0 for end, parts in enumerate(lines): if not parts: sample = [(token, tag.split('-')[-1]) for token, tag in lines[start:end]] samples.append(sample) start = end + 1 if start < end: samples.append(lines[start:end]) return samples train_samples = load_data('data/01_raw/bag.conll') val_samples = load_data('data/01_raw/bgh.conll') samples = train_samples + val_samples schema = ['_'] + sorted({tag for sentence in samples for _, tag in sentence}) ```
111920
1
111923
null
0
26
So, I have to start thinking about the topic of my final project in a data science master's degree (business oriented, although I can choose any unrelated field), and one of the requirements is to mine and use data that has not yet been analysed in the academic research environment. I would prefer to avoid the typical scraping of data from Twitter or other common scraping sources. I would really appreciate it if you could give me some ideas or direction on how to find an accessible source of data which also does not require too much time to get information from. Thanks a lot for the help!
Data Mining of unresearched data for a master's degree final project
CC BY-SA 4.0
null
2022-06-17T21:21:44.593
2022-06-18T08:03:04.013
null
null
137054
[ "dataset", "data-mining", "data", "research", "scraping" ]
If it's business oriented, there are many "business Wikipedia" type websites that have lots of data presented in the same format on each page, which makes them a lot simpler to scrape. For example, Yahoo Finance for stock data: [finance.yahoo.com](https://finance.yahoo.com). You can use the BeautifulSoup library in a local Python script to set up a simple HTML scraper for a given page, and then just loop it over a set of different page URLs to get all the info you need.
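A minimal sketch of that loop (the URLs, tag and class name are placeholders; you would need to inspect the real page structure and respect the site's terms of use):
```
import requests
from bs4 import BeautifulSoup

urls = ["https://example.com/page1", "https://example.com/page2"]  # placeholder URLs

rows = []
for url in urls:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    # the tag and class name depend entirely on the page you are scraping
    for cell in soup.find_all("td", class_="some-class"):
        rows.append(cell.get_text(strip=True))

print(rows)
```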
How to handle missing data for machine learning
There are three main approaches to handling missing data. - Impute - use some method to fill in the missing values with reasonable guesses. You could interpolate between two time points, take the average value over all time points, or use a variety of other techniques leveraging co-occurrence of other variables to get a reasonable estimate. - Ignore - some methods can just ignore missing data, and not use it in the model at all - Utilize - for cases where data is not missing-at-random, missingness itself can be an informative feature. You could include missing values as another data point to model your output.
111959
1
111960
null
1
138
In Neural Networks and Deep Learning, the gradient descent algorithm is described as going in the opposite direction of the gradient. [Link to place in book](https://chimu.sh/d/pdf/xTpdFOFNeSpMKgTUnBK32?page=27&pageOffset=610&hid=GSrwUQEgxK4FjhpZv6cWx). What prevents this strategy from landing in a local minimum?
How does gradient descent avoid local minimums?
CC BY-SA 4.0
null
2022-06-19T18:23:27.713
2022-06-20T06:08:24.140
null
null
137122
[ "neural-network", "gradient-descent" ]
It does not. Gradient descent is not immune to local minima in non-convex function optimization. Nevertheless, the noise introduced by stochastic gradient descent (SGD) helps in escaping local minima. Other hyperparameters, like the learning rate, momentum, etc., also help. You can check sections 4.1 and 4.2 of [Stochastic Gradient Learning in Neural Networks](http://leon.bottou.org/publications/pdf/nimes-1991.pdf) for detailed explanations and a mathematical formulation of SGD and its convergence properties.
How to get out of local minimums on stochastic gradient descent?
## Stochastic gradient descent loss landscape vs. gradient descent loss landscape

> I don't know how using the training data in batches rather than all at once allows it to steer around local minimum in the example, which is clearly steeper than the path to the global minimum behind it.

So, stochastic gradient descent is more able to avoid local minima because the landscape of the batch loss function is different from the loss function of the whole dataset (the case when you calculate the losses on all data and then update parameters). That means the gradient on the whole dataset could be 0 at some point, but at that same point the gradient of the batch could be different (so we hope to go in a direction other than the local minimum).

## Neural network architecture and loss landscape

Your neural architecture can also help escape local minima. For example, see this work: [Visualizing the Loss Landscape of Neural Nets](https://arxiv.org/abs/1712.09913). It shows that skip connections can smooth your loss landscape and, hence, help the optimizers find the global minimum more easily.

## Local minima vs. the global optimum

Finally, there are some works suggesting that local minima have almost the same function value as the global optimum. See [this question](https://stats.stackexchange.com/questions/203288/understanding-almost-all-local-minimum-have-very-similar-function-value-to-the) and answer.
111974
1
111977
null
2
94
I want to know if the following is a valid approach to create labels when I have measurements under some conditions, and the conditions are similar but never exactly the same. This doesn't correspond exactly to my real problem, but for convenience let's say I have two WiFi networks A and B. I want to know under which conditions A or B performs better. My first step is to transmit data over A and over B. I measure the network conditions and the time it takes to transmit the data. The problem is that the captured conditions are never exactly the same. Hence I can't directly assign a label (e.g. "under these conditions A or B is better"). So I would perform a k-means clustering of the conditions and group similar conditions together. For each point in a cluster I look up whether the transmission was performed with A or B and compare e.g. the medians of the transmission times. Now I have a label (A better or B better) for each cluster center and can train a supervised model to generalize. Is this a valid or common approach in such situations?
Using k-means to create labels for supervised learning
CC BY-SA 4.0
null
2022-06-20T09:51:09.960
2022-06-20T11:56:47.127
null
null
137147
[ "clustering", "unsupervised-learning", "k-means", "supervised-learning", "labels" ]
I'd suggest an alternative approach: train a regression model for each of the two networks A and B, which takes the conditions as input features and predicts the performance of the network under these conditions. Based on these two models, it is possible to directly find out which one is better under any conditions, by applying the two models and comparing their predicted performance. I think that this approach is more direct in representing how the information is likely to impact the results. The clustering approach might work but it would lose some information in the process, because the clustering will introduce errors and the impact of the conditions on the performance wouldn't be directly represented in the model.
supervised learning and labels
The main difference between supervised and unsupervised learning is the following: In supervised learning you have a set of labelled data, meaning that you have the values of the inputs and the outputs. What you try to achieve with machine learning is to find the true relationship between them, what we usually call the model in math. There are many different algorithms in machine learning that allow you to obtain a model of the data. The objective that you seek, and how you can use machine learning, is to predict the output given a new input, once you know the model. In unsupervised learning you don't have the data labelled. You can say that you have the inputs but not the outputs. And the objective is to find some kind of pattern in your data. You can find groups or clusters that you think that belong to the same group or output. Here you also have to obtain a model. And again, the objective you seek is to be able to predict the output given a new input. Finally, going back to your question, if you don't have labels you can not use supervised learning, you have to use unsupervised learning.
111993
1
112010
null
0
188
I'm doing CS 231n on my own. I'm looking at [this solution](https://github.com/amanchadha/stanford-cs231n-assignments-2020/blob/master/assignment1/cs231n/classifiers/linear_svm.py) to a question that implements a SVM. Relevant code: ``` # average the loss loss /= num_train # average the gradients dW /= num_train # add L2 regularization to the loss loss += reg * np.sum(W * W) # ???? dW += 2 * reg * W ``` I don't understand why we would add regularization loss to the gradient. My understanding of regularization is we use it to prefer certain weights, $W$, over others. But... I don't understand - What type of regularization is occurring to dW (L2 regularization operates on the square of all values of the weights -- this is not squaring anything) - Why we would tweak the weights themselves, presumably you want to tweak the loss which will incentivize changing the weights in a certain direction. Why would you tweak the weights (well, their gradients) themselves?
Why would we add regularization loss to the gradient itself in an SVM?
CC BY-SA 4.0
null
2022-06-20T21:27:14.580
2022-06-22T03:35:53.897
null
null
137122
[ "machine-learning", "svm", "gradient-descent", "regularization", "gradient" ]
The l2 regularization term is being added to the loss itself. But then you need to find the gradient of this new loss; since gradients are additive, this is the same as the gradient of the unpenalized loss plus the gradient of the l2 term, the latter of which is the quantity specified in the last line of code. Note that it makes sense: when updating the weights, you will subtract some multiple of the gradient, so are moving the weights opposite their current location, i.e. toward the origin, as you expect regularization to accomplish.
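Spelled out, with `reg` playing the role of the usual $\lambda$:
$$
L(W) = \underbrace{\frac{1}{N}\sum_i L_i(W)}_{\text{data loss}} + \underbrace{\text{reg}\sum_k W_k^2}_{\text{L2 penalty}},
\qquad
\frac{\partial}{\partial W_k}\Big(\text{reg}\sum_j W_j^2\Big) = 2\,\text{reg}\,W_k,
$$
which, stacked over all entries of $W$, is exactly the `2 * reg * W` that the code adds to `dW` before the weight update (note that `np.sum(W * W)` is just $\sum_k W_k^2$).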
Connection between Regularization and Gradient Descent
The fitting procedure is the one that actually finds the coefficients of the model. The regularization term is used to indirectly find the coefficients by penalizing big coefficients during the fitting procedure. A simple (albeit somewhat biased/naive) example might help illustrate this difference between regularization and gradient descent: ``` X, y <- read input data for different values of lambda L for each fold of cross-validation using X,y,L theta <- minimize (RSS + regularization using L) via MLE/GD score <- calculate performance of model using theta on the validation set if average score across folds for L is better than the current best average score L_best <- L ``` As you can see, the fitting procedure (MLE or GD in our case) finds the best coefficients given the specific value of lambda. As a side note, I would look at this answer [here](https://stats.stackexchange.com/questions/137481/how-bad-is-hyperparameter-tuning-outside-cross-validation) about tuning the regularization parameter, because it tends a little bit murky in terms of bias.
111995
1
112023
null
1
44
I have an ice cream sales simulator with which I can simulate ice cream sales on any given day in the past. I want to optimize daily profit. The variables for my ice cream shop which I have control over are: 'scoop size' and 'number of flavours'. Now, for every day I ran my simulator with all possible combinations of my input variables, resulting in something like this:

|day |scoop size |number of flavours |profit |
|---|----------|------------------|------|
|1 |1 |1 |100 |
|1 |1 |2 |120 |
|1 |1 |3 |140 |
|1 |2 |1 |90 |
|1 |2 |2 |95 |
|1 |2 |3 |105 |
|2 |1 |1 |102 |
|2 |1 |2 |85 |
|... |... |... |... |

So for every day, I created a simulation for all possible scoop sizes and numbers of flavours. Apart from this, other factors might also affect my sales, like weather or day of the week, but I have no control over these. So given that I have this huge dataset of simulations, how do I go about finding the best combination of scoop size and number of flavours to use in the future?

What I've tried:

- Just pick the combination of the row with the highest profit (140 in this case), but this might just return an outlier, for example a day on which the weather was very good, so all profits were better. I'm looking for the result with the highest average profit.
- Group by both variables individually or create plots to visualize them against profit, but then I'm only optimizing for 1 variable at a time, and the two are not independent of each other.

Should I try to add all variables that could have an impact on my profit first before making any predictions, even though there's maybe an infinite amount? I'm not looking for an exact answer, I'm just wondering what kind of problem I'm trying to solve here; any tips or resources to read to get a better understanding of this problem would be very welcome. I'm new to data science so I just don't know where to look. Thanks! (I'm using Python btw)
Optimize daily ice cream profit based on simulations of all combinations of input variables
CC BY-SA 4.0
null
2022-06-21T02:31:34.300
2022-06-21T22:03:44.120
2022-06-21T02:43:06.713
137174
137174
[ "python", "time-series", "optimization", "simulation" ]
I would propose a solution like this:

- Train a regression model which predicts the sales (target variable) based on all the features, both those you have control over and those you don't.
- Assuming the model works well, it can predict sales under any conditions. For example, say you want to optimize for tomorrow:
  - For the uncontrollable parameters, input the known values, e.g. tomorrow's day of the week and the weather forecast.
  - For the controllable parameters, try all the possible combinations (apply the model once per combination). Then just pick the combination of controllable parameters which leads to the highest predicted sales (a sketch follows below).
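Assuming the simulated table is in a DataFrame `df`, one way to implement this (the model choice and column names are illustrative, not prescribed):
```
import itertools
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# df has columns: scoop_size, n_flavours, weekday, profit (names are examples)
features = ["scoop_size", "n_flavours"]   # controllable
context = ["weekday"]                     # uncontrollable, example only

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(df[features + context], df["profit"])

# evaluate every controllable combination for tomorrow's known context
tomorrow = {"weekday": 5}                 # example value
grid = pd.DataFrame(
    [dict(zip(features, combo), **tomorrow)
     for combo in itertools.product(df["scoop_size"].unique(),
                                    df["n_flavours"].unique())]
)
grid["pred_profit"] = model.predict(grid[features + context])
print(grid.sort_values("pred_profit", ascending=False).head(1))
```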
Algorithm for finding best juice combinations
As described, you have no data describing individual people (such as age, sex, shoe size), but are searching for an optimum value of the mix for the whole population. So what you want is a mix with the maximum expected rating, if you chose a random person to rate it from the population. In principle, this expected rating is a function taking two parameters e.g. $f(n_{apple}, n_{orange})$ - the amount of the third juice type is a not a free choice, so you only have two dimensions. You can break down your problem into two distinct parts: - Taking samples from your population in order to find approximation to the function $f(n_{apple}, n_{orange})$ - Using the approximation as it evolves to guide the search for an optimum value. For a simple approach, you could ignore the second bullet point and just randomly sample different mixes throughout the event. Then train a regression ML on the ratings (any algorithm would do, although you'll probably want something nonlinear, otherwise you'll just predict one of the pure juices as favourite) - finally graph its predictions and find the maximum rating at the end. This would probably be fine when pitched as a fun experiment. However, there is a more sophisticated approach that is well-studied and used to make decisions when you want to optimise an expected value of an action whilst exploring options - it is usually called [multi-armed bandit](https://en.wikipedia.org/wiki/Multi-armed_bandit). In your case, you would need variants of it that consider an "arm space" or parametric choice, as opposed to a finite number of choices that represent selecting between actions. This is important to you, since splitting your mix parameters up into e.g. in 5% steps, will give you too many options to explore given the number of samples you need to make. Instead, you will need to make an assumption that the expected rating function is relatively smooth - the expected rating for 35% Apple, 10% Orange, 55% Grape is correlated with the rating for 37% Apple, 9% Orange, 54% Grape . . . that seems at least reasonable to me, but you should make clear in any write-up that this is an assumption and/or find something published that supports it. If you make this assumption, you can then use a function approximator such as a neural network, a program like xgboost or maybe some Guassian kernels to predict expected rating from mix percentages. In brief for a multi-armed bandit problem, you will use data collected as your experiment progresses to estimate the expected value for each choice, and on each step will make a new choice of mix. The choice itself will be guided by your current best approximation. However, you don't always sample the current top-rated value, you need to explore other mixes in order to refine your estimated function. You have choices here too - you could use $\epsilon$-greedy where e.g. 10% of the time you choose completely randomly to get other sample points. However, you might need something more sophisticated that explores more to start with and still converges quickly, such as [Gibbs sampling](https://en.wikipedia.org/wiki/Gibbs_sampling). One thing you don't say is at what level you are pitching this experiment. Studying the multi-armed bandit problem by yourself referring to blogs, tutorials and papers could be a bit too much work if this is for school science fair. If this all seems a bit too vague and a lot of work to study, then you can probably stick with a simple regression model from the data of a random experiment. 
I suggest whichever approach you take, that you run some simulations of input data and see whether your approach works. Obviously there is a lot of guess work here. But the principle is: - Create a "true" model function - e.g. pick an imaginary favourite mix and make it score higher. Make it a simple and probably quite subtle function - e.g. score 5 for best result, and take away euclidean distance in "juice space" times a small factor (maybe 1.5) from it. - Create a noisy sampler that imitates someone in your experiment giving a rating to a specific mix. Ensure that the mean value from this matches the "true" function. - Try out your sampling and learning strategies, see how well they find the favourite mix. I highly recommend this kind of dry run before putting your system to real use, otherwise you will have no confidence that your ML/approximator is working. --- One more piece of advice about your estimator: You are expecting a large amount of variance in your data, and will not have a lot of samples. So to avoid over-fitting you will want to have a relatively simple ML model. For a neural network for example, you will probably want only one hidden layer with very few neurons in it (e.g. 4 or 5 might be enough). Finding a model sophisticated enough to predict a curve, but simple enough that it doesn't overfit given very noisy target outputs might take a few tries - this is the main reason why I suggest performing trial runs with simulated data.
112009
1
112078
null
1
92
I have been given the following data:

- 20 example CSV files, each labeled as belonging to one of six fixed classes, say A, B, C, D, E, F.
- Each file has roughly 20000 rows and 10 floating point columns.
- Within each file, the values seem pretty noisy, but the relationships between pairs of columns seem pretty linear (but noisy).
- I have not been given any domain knowledge related to the content of the files, except A) the files are likely experimental measurements, and B) that the order of the records should not have any effect on the classifier; i.e. classification would be invariant under permuting rows of files.

I have been asked to see if there is a useful way to predict (with, for example, accuracy > 0.8) a class label for a previously unseen file. At first I thought it was going to be a no-brainer, given the total number of records over all the files. But as I got into it, it seemed more and more like I really had only 20 training examples, and it felt as if I were exhausting them pretty quickly and data dredging. It feels difficult. I am wondering if there is a standard approach in a situation like this. Thanks for any help!
How to predict a class for a file given a small number of files for training?
CC BY-SA 4.0
null
2022-06-21T12:57:47.273
2022-06-27T08:10:55.170
2022-06-23T14:30:11.343
6597
6597
[ "classification", "multiclass-classification" ]
Given that most classes have fewer than four sample files, it is not useful to do a train/test split, even though a train/test split is normally the most useful way to assess generalization. One option could be to craft rules by hand: explore the data and manually construct rule-based logic for each of the classes, for example over simple per-file summary statistics (see the sketch below).
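For instance (a sketch only; the summary statistics, indices and thresholds are placeholders you would choose after exploring the 20 labelled files):
```
import numpy as np
import pandas as pd

def summarize(csv_path):
    """Collapse one ~20000x10 file into a fixed, row-order-invariant vector."""
    df = pd.read_csv(csv_path)
    return np.concatenate([df.mean().values, df.std().values])

def classify(csv_path):
    s = summarize(csv_path)
    # hand-crafted rules, written after inspecting the labelled files
    if s[0] > 10 and s[3] < 0:   # thresholds are made up for illustration
        return "A"
    if s[12] > 1.5:
        return "B"
    return "C"                   # ... and so on for the remaining classes
```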
What's a classifier capable of predicting a variable number of classes
This is an interesting problem. What is your xtrain? I guess it boils down to a [multi label problem](https://en.wikipedia.org/wiki/Multi-label_classification). In a simple setup, you would train on the presence of a certain label (present or not). So for n tags/labels, you would train n+1 models (the extra one for "no label"). I'm not really into multilabel problems, but I guess this is the way to go.
112034
1
112036
null
0
145
I have this table where there are missing values under the Value2 column.

|Value1 |Value2 |
|------|------|
|1000 | |
|1000 | |
|1000 |500 |
|1000 |560 |
|1000 |560 |

What I would like to do is to display the above table but without the empty rows, therefore the table should look like this:

|Value1 |Value2 |
|------|------|
|1000 |500 |
|1000 |560 |
|1000 |560 |

Any help would be appreciated.
How do I not display rows that have an empty value when trying to output a dataframe with pandas
CC BY-SA 4.0
null
2022-06-22T08:11:40.630
2022-06-22T08:22:28.960
null
null
136294
[ "python", "pandas", "data-analysis" ]
You can simply filter out those rows using `pandas` indexing: `df[df["Value2"].notna()]`
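For example, reproducing the table from the question as a small self-contained demo:
```
import numpy as np
import pandas as pd

df = pd.DataFrame({"Value1": [1000, 1000, 1000, 1000, 1000],
                   "Value2": [np.nan, np.nan, 500, 560, 560]})

print(df[df["Value2"].notna()])
#    Value1  Value2
# 2    1000   500.0
# 3    1000   560.0
# 4    1000   560.0
```
Note that if the blanks are empty strings rather than NaN, you may need `df.replace("", np.nan)` first.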
handling missing data in pandas python
IIUC you can simply use Pandas [Series.interpolate()](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.interpolate.html) method: Data: ``` In [8]: NA = np.nan In [9]: s = pd.Series([NA,7,6,NA,7,8,NA,NA,NA,10,5,NA,NA,5,9,9,12,8,6,NA,NA]) In [10]: s Out[10]: 0 NaN 1 7.0 2 6.0 3 NaN 4 7.0 5 8.0 6 NaN 7 NaN 8 NaN 9 10.0 10 5.0 11 NaN 12 NaN 13 5.0 14 9.0 15 9.0 16 12.0 17 8.0 18 6.0 19 NaN 20 NaN dtype: float64 ``` Solution: ``` In [11]: s.interpolate().bfill() Out[11]: 0 7.0 1 7.0 2 6.0 3 6.5 4 7.0 5 8.0 6 8.5 7 9.0 8 9.5 9 10.0 10 5.0 11 5.0 12 5.0 13 5.0 14 9.0 15 9.0 16 12.0 17 8.0 18 6.0 19 6.0 20 6.0 dtype: float64 ``` if you need rounded integers: ``` In [13]: s.interpolate().round().bfill().astype(int) Out[13]: 0 7 1 7 2 6 3 6 4 7 5 8 6 8 7 9 8 10 9 10 10 5 11 5 12 5 13 5 14 9 15 9 16 12 17 8 18 6 19 6 20 6 dtype: int32 ```
112040
1
112059
null
1
488
I was doing a task using RNN to predict a time series movement. I want to make my results reproducible. So I strictly followed this post: [https://stackoverflow.com/questions/32419510/how-to-get-reproducible-results-in-keras](https://stackoverflow.com/questions/32419510/how-to-get-reproducible-results-in-keras) My code are as follows: ``` # Seed value # Apparently you may use different seed values at each stage seed_value= 0 # 1. Set the `PYTHONHASHSEED` environment variable at a fixed value import os os.environ['PYTHONHASHSEED']=str(seed_value) # 2. Set the `python` built-in pseudo-random generator at a fixed value import random random.seed(seed_value) # 3. Set the `numpy` pseudo-random generator at a fixed value import numpy as np np.random.seed(seed_value) tf.compat.v1.set_random_seed(seed_value) tf.random.set_seed(seed_value) # 5. Configure a new global `tensorflow` session # for later versions: session_conf = tf.compat.v1.ConfigProto(intra_op_parallelism_threads=1, inter_op_parallelism_threads=1) sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf) tf.compat.v1.keras.backend.set_session(sess) ``` However, every time I ran my codes, I still got a different result, what could the reasons be?
Why can't I reproduce my results in keras using random seed?
CC BY-SA 4.0
null
2022-06-22T12:45:13.283
2022-06-23T07:48:04.040
null
null
130605
[ "deep-learning", "keras", "tensorflow", "time-series", "rnn" ]
Are you using a CPU or a GPU? If you are using a GPU, there is an additional source of randomness (non-deterministic cuDNN kernels). To confirm this point, you can try running TensorFlow on CPU only, or force deterministic convolution algorithms, but training will take roughly twice as long. The linked issue shows the flags for the Theano backend:
```
THEANO_FLAGS="optimizer_excluding=conv_dnn" python your_file.py
THEANO_FLAGS="dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic" python your_file.py
```
Source: [https://github.com/keras-team/keras/issues/2479#issuecomment-213987747](https://github.com/keras-team/keras/issues/2479#issuecomment-213987747)
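Recent TensorFlow 2.x versions also have built-in helpers for this; a sketch (treat the availability as an assumption to check against your install: `set_random_seed` appeared around TF 2.7 and `enable_op_determinism` around TF 2.9):
```
import tensorflow as tf

tf.keras.utils.set_random_seed(0)                # seeds Python, NumPy and TF at once
tf.config.experimental.enable_op_determinism()   # force deterministic (slower) GPU ops

# build, compile and fit the model after the two calls above
```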
Issue with predict generator keras
I solved the problem following the advices in the comments of this discussion. I paste here my code: ``` dizionario = dict({'1': 0, '10': 1, '11': 2, '12': 3, '13': 4, '14': 5, '15': 6, '16': 7, '17': 8, '18': 9, '19': 10, '2': 11, '20': 12, '21': 13, '22': 14, '23': 15, '24': 16, '25': 17, '26': 18, '27': 19, '28': 20, '29': 21, '3': 22, '4': 23, '5': 24, '6': 25, '7': 26, '8': 27, '9': 28}) ``` as you can see I created a dictionary to map classes label to index . I used the output of `final_train_generator.class_indices` to create this dictionary. After `predict_generator` I created the csv of prediction with following code: ``` predicted_class_indices = np.argmax(predictions, axis = 1) final_predictions = [] for element in predicted_class_indices: final_predictions.append(list(dizionario.keys())[list(dizionario.values()).index(element)]) df = pd.DataFrame() df['class'] = final_predictions df['imnames'] = imNames df.to_csv('predictions_xception_all_data_bon.csv', sep=',') ``` I paste here also the code that I changed following the discussion link in the comments: ``` test_datagen = ImageDataGenerator(preprocessing_function=preprocess_input) test_generator = test_datagen.flow_from_directory( directory="./TEST/", target_size=(299, 299), color_mode="rgb", batch_size=20, shuffle = False, class_mode = "categorical", ) test_generator.reset() imNames = test_generator.filenames predictions = model_xcpetion.predict_generator(test_generator, steps=len(test_generator), verbose = 1 ) ```
112104
1
112124
null
0
24
Given a sphere which resembles earth, I want to sample points where land would be. I am struggling to find a dataset to sample from, and even to find a dataset which I could use to generate a set of such points. Does anybody an according dataset / a nice workaround?
Sampling from Earth's landmass
CC BY-SA 4.0
null
2022-06-24T14:49:34.897
2022-06-25T13:19:13.377
2022-06-24T14:53:52.417
137333
137333
[ "python", "dataset", "geospatial" ]
You can use [cartopy](https://scitools.org.uk/cartopy/docs/latest/installing.html) to achieve this easily.

> pip install cartopy

For instance:
```
import cartopy.io.shapereader as shpreader
import shapely.geometry as sgeom
from shapely.ops import unary_union
from shapely.prepared import prep

land_shp_fname = shpreader.natural_earth(resolution='50m',
                                         category='physical',
                                         name='land')
land_geom = unary_union(list(shpreader.Reader(land_shp_fname).geometries()))
land = prep(land_geom)

def is_land(x, y):
    return land.contains(sgeom.Point(x, y))

>>> print(is_land(0, 0))
False
>>> print(is_land(0, 10))
True
```
Source: [https://stackoverflow.com/questions/47894513/checking-if-a-geocoordinate-point-is-land-or-ocean-with-cartopy](https://stackoverflow.com/questions/47894513/checking-if-a-geocoordinate-point-is-land-or-ocean-with-cartopy)
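To actually sample points on land you can combine the `is_land` helper above with rejection sampling; a sketch (the arcsin transform keeps the draw uniform over the sphere's surface rather than uniform in latitude):
```
import math
import random

def sample_land_points(n, seed=0):
    rng = random.Random(seed)
    points = []
    while len(points) < n:
        lon = rng.uniform(-180, 180)
        lat = math.degrees(math.asin(rng.uniform(-1, 1)))  # area-preserving latitude draw
        if is_land(lon, lat):                              # reject points in the ocean
            points.append((lon, lat))
    return points

print(sample_land_points(5))
```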
Is sampling a valid way to reduce complexity?
I would get a sufficiently large random/representative sample and cluster that. To see what is such a sample, you will have to get two such samples and cluster them to get cluster solutions c1 and c2. If the matching clusters of c1 and c2 have the same model parameters, then you probably have representative samples. You can match the clusters by looking at how c1 and c2 assign drawn data to clusters.
112112
1
112391
null
2
446
I'm working with the famous Movielens 1M dataset and implemented some simple recommender algorithms. While computing the hit rate, I found that it's very low $(\approx 0.008)$ but the papers seem to report high scores $(\approx 0.5)$. Hence, I think I'm doing something wrong during the evaluation process. Here's what I am doing: For each movie that the test user hasn't rated, I'm computing a rating using my algorithm. Then I rank these movies and check if the test item occurs in the top 10 list. After going through many GitHub repos, I found that in some implementations (e.g. [SASRec](https://github.com/kang205/SASRec/blob/master/util.py#L110-L114)) they randomly sample 100 unrated (by the test user) items and append the test item to it and then they build the top 10 rank list. Using this approach, my hit rate went up but this almost feels like cheating! So, I want to know if this is a common practice or if I failed to understand [SASRec](https://github.com/kang205/SASRec/blob/master/util.py#L110-L114)'s code.
What is the correct way to compute hit rate in recommender systems?
CC BY-SA 4.0
null
2022-06-24T22:01:25.637
2022-07-04T16:49:44.800
null
null
137346
[ "machine-learning", "recommender-system", "movielens" ]
A common way to evaluate machine learning models is performance on unseen data. Randomly sampling 100 unrated (by the test user) items, appending the test item, and building the top-10 rank list over those candidates is a popular evaluation protocol, so it is not cheating as long as you report it. Keep in mind that the resulting "sampled" hit rate is not directly comparable to a hit rate computed by ranking against the full item catalogue, which is why the published numbers look much higher than yours. That process is similar to using [precision@k for search engine result pages](https://en.wikipedia.org/wiki/Evaluation_measures_(information_retrieval)#Precision_at_k).
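A sketch of the sampled protocol (assuming you have a `score(user, item)` function from your model and, for each test case, the held-out item plus a list of items the user has not rated; names are illustrative):
```
import random

def sampled_hit_rate_at_10(test_cases, score, n_negatives=100, seed=0):
    rng = random.Random(seed)
    hits = 0
    for user, test_item, unrated_items in test_cases:
        negatives = rng.sample(unrated_items, n_negatives)       # unrated_items is a list
        candidates = negatives + [test_item]
        ranked = sorted(candidates, key=lambda i: score(user, i), reverse=True)
        hits += test_item in ranked[:10]
    return hits / len(test_cases)
```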
What is the best model for a recommendation system using implicit ratings?
If you are after a baseline model, which I also strongly recommend, the matrix-factorization (MF) approach (a.k.a. collaborative filtering) you mentioned is the most basic one and is fast and easy to implement. Then you can explore others like content-based models or hybrid recommenders, or even the recent ones based on the self-attention mechanism that Media mentioned. Doing this gives you something to compare with during A/B tests, if nothing is in place already. Just note that traditional MF may run into memory/performance issues if your matrix gets very big (at least if your resources are limited), and in that case Deep Matrix Factorization is a better equivalent choice. I summarized these three starting recommendation engines about a year ago, see [this answer](https://datascience.stackexchange.com/questions/63687/recommend-another-product-only-on-purchase-history-of-users-available/63689#63689). I have code snippets/notebooks gathered from the references there for practical implementation of these baselines, which I might be able to share if needed.
112126
1
112143
null
1
33
I need to classify participants in an NLP study into 3 classes, based on multiple sentences spoken by the participant. I performed a feature extraction on each sentence, and so I am left with a matrix of length (# of sentences spoken x feature vector length for each sentence) for each participant. So, for me, each sample is represented by a matrix of varying length, since some participants spoke more sentences than others. What are some ways for me to reduce the dimensionality of each matrix, and also standardize the length, so I can perform an SVM with each participant as a sample? I am also interested in learning about other methods to classify my samples, if SVMs are not the best fit. Thank you.
What are some methods to reduce a dataframe so I can pass it as one sample to an SVM?
CC BY-SA 4.0
null
2022-06-25T14:17:09.770
2022-06-26T13:36:48.057
null
null
137382
[ "nlp", "svm", "reshape" ]
SVMs are not meant to handle arbitrarily long inputs, so you have a few choices:

- use PCA for sequences; however, it can take very long since it has to build a giant matrix on which to perform PCA
- change model and pick one better suited to sequences (e.g. an RNN)
- pad and cut your data (most often the whole set of sentences is not needed to predict the output)
- introduce some prior knowledge, for example rank the most frequent words and remove those that don't add anything to the meaning (be careful, this can cause problems, for example if you remove "not")
- use recurrent autoencoders to transform a sentence into a fixed-size vector, on which you can run any ML algorithm (this may hurt explainability)

In my opinion, cut-and-pad is the best option to start with: it is simple to implement and often very effective, though this can change from context to context. A small sketch is given below.
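Assuming each participant is an array of shape `(n_sentences, n_features)`, stored in a list `participant_matrices` with class labels in `labels` (both names are hypothetical), a minimal version of cut-and-pad could look like this; `max_len` is a choice you make, e.g. a high percentile of the sentence counts:
```
import numpy as np
from sklearn.svm import SVC

def pad_or_cut(mat, max_len):
    mat = np.asarray(mat)[:max_len]                     # cut participants with many sentences
    pad = np.zeros((max_len - len(mat), mat.shape[1]))  # zero-pad participants with few sentences
    return np.vstack([mat, pad]).ravel()                # flatten to one fixed-length vector

max_len = 50
X = np.array([pad_or_cut(m, max_len) for m in participant_matrices])
clf = SVC(kernel="rbf").fit(X, labels)
```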
Reduce dimension, then apply SVM
I'd recommend spending more time thinking about feature selection and representation for your SVM than worrying about the number of dimensions in your model. Generally speaking, SVM tends to be very robust to uninformative features (e.g., see [Joachims, 1997](https://eldorado.tu-dortmund.de/bitstream/2003/2595/1/report23_ps.pdf), or [Joachims, 1999](https://eldorado.tu-dortmund.de/bitstream/2003/2596/1/report24.pdf) for a nice overview). In my experience, SVM doesn't often benefit as much from spending time on feature selection as do other algorithms, such as Naïve Bayes. The best gains I've seen with SVM tend to come from trying to encode your own expert knowledge about the classification domain in a way that is computationally accessible. Say for example that you're classifying publications on whether they contain information on protein-protein interactions. Something that is lost in the bag of words and tfidf vectorization approaches is the concept of proximity—two protein-related words occurring close to each other in a document are more likely to be found in documents dealing with protein-protein interaction. This can sometimes be achieved using $n$-gram modeling, but there are better alternatives that you'll only be able to use if you think about the characteristics of the types of documents you're trying to identify. If you still want to try doing feature selection, [I'd recommend $\chi^{2}$](http://nlp.stanford.edu/IR-book/html/htmledition/feature-selectionchi2-feature-selection-1.html) (chi-squared) feature selection. To do this, you rank your features with respect to the objective \begin{equation} \chi^{2}(\textbf{D},t,c) = \sum_{e_{t}\in{0,1}}\sum_{e_{c}\in{0,1}}\frac{(N_{e_{t}e_{c}}-E_{e_{t}e_{c}})^{2}}{E_{e_{t}}e_{c}}, \end{equation} where $N$ is the observed frequency of a term in $\textbf{D}$, $E$ is its expected frequency, and $t$ and $c$ denote term and class, respectively. You can [easily compute this in sklearn](http://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.chi2.html), unless you want the educational experience of coding it yourself $\ddot\smile$
112129
1
112132
null
1
367
Why do the first layers of U-Net or CNN generate low-level features? Why not the last layers? What is the logic behind getting low-level features at the beginning of architecture? And yes, high-level features are more "meaningful" but why? Why high-level features are more meaningful than low-level features?
High level-Low Level features in U-NET
CC BY-SA 4.0
null
2022-06-25T15:28:58.967
2022-06-26T04:05:55.283
null
null
133184
[ "deep-learning", "cnn", "feature-engineering", "computer-vision", "feature-extraction" ]
To answer your question, let's first go through how a CNN works. When we give a CNN an input image, it sees an array of numbers corresponding to the pixel intensities of the image; the intensity of a pixel ranges from 0 (black) to 255 (white). The CNN produces numerical values that indicate the likelihood that the image belongs to a particular class. To classify images, a CNN searches for basic features like edges and curves first, and then progresses through a number of convolutional layers to more abstract concepts. To identify edges, it scans the image horizontally and vertically using filters. Think of a filter as a weight matrix that is multiplied element-wise with pixel intensities; summing these products gives the value at the corresponding position of the output, which is called a feature map. The output of one convolutional layer becomes the input of the next, so each layer is essentially indicating where in the original image specific low-level features can be seen. Dimension reduction is done in the pooling layers. When you apply another set of filters on top of that, the output consists of activations that represent higher-level features. As you move through the network, through more convolutional layers, you get activation maps that represent more and more complex features. So a CNN basically works the way we humans look at images: first by their distinguishable coarse features (low-level) and then by finer details (high-level).
> Why are high-level features more meaningful than low-level features? 

Take the example of a dog and a cat: they both have 4 legs (a low-level feature), but what distinguishes them is, say, their eyes and ears (high-level features), which help in classifying them correctly.
> Why are high-level features extracted in the last layers? Why not in the first layer? 

Because for a large image, say 224x224x3, a fully connected neuron looking at the whole input would need `224*224*3 = 150528` weights. This full connectivity is not required and can lead to overfitting, so we connect neurons to only a local region of the input volume; the spatial extent of this connectivity is a hyperparameter called the receptive field, i.e. the filter size. Rather than using a fully connected layer as the first layer, it is used as the last layer, operating on the high-level features extracted by the convolution + pooling layers. This reduces the number of weights significantly, so it is much faster and more effective. Refer: [https://cs231n.github.io/convolutional-networks](https://cs231n.github.io/convolutional-networks) [](https://i.stack.imgur.com/UQHi7.png)
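A quick back-of-the-envelope check of the numbers above (plain arithmetic, layer sizes as in the example):
```
# fully connected neuron looking at the whole 224x224x3 input
weights_dense_per_neuron = 224 * 224 * 3   # 150528 weights for a single neuron

# 3x3 convolution filter over the same 3-channel input: local, shared weights
weights_conv_filter = 3 * 3 * 3            # 27 weights (+1 bias) per filter

print(weights_dense_per_neuron, weights_conv_filter)
```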
High-level features of a neural network
The image, and variants of it that are commonly used are for illustrative purposes only. They generally do not represent data that has been extracted from real CNNs. The first "Low-level features" part of the diagram is possibly from a real network (I am not sure in this case, it looks more like a constructed filter, e.g. Sobel, to me). That is because it is feasible and relatively easy to interpret the first layer's filter weights directly as images, and the filters do indeed look like the components that they detect. The "Mid-level features" and "High-level features" in your specific diagram have probably been constructed without using a neural network. They are likely to be an artists impression of what the high level features might be. They may have been sampled from real datasets, then just cropped and arranged into the image. Caveat: I cannot find absolute evidence for the specific image being constructed for illustration only, just I suspect this to be the case. It is possible to extract visualisations of features detected by deeper layers. The two common ways to do this are: - Dataset matching. Finding examples in the dataset which trigger a specific neuron to output a high value. This can be isolated to a crop of the original image, because you know the combined sizes of all the filters and pools that occur before layer you are interested in. - Optimising the input image. Using gradient ascent, but instead of changing the weights, make a cost function that scores the neuron you want to visualise and keep adjusting the input until You can get more information from resources such as [this article on feature visualisation](https://distill.pub/2017/feature-visualization/).
112135
1
112145
null
1
38
I am quite new to this neural network stuff, so please bear with me :) TL;DR: I want to train a neural network to predict a scalar score from 32 binary features. Training data is expensive to come by, there is a tradeoff between precision and amount of training samples. Which scenario will likely give me a better result: - Training the network with 100 distinct samples of training data where the output (-1 to 1) is averaged from 100 runs of the same sample, and therefore fairly precise - Training the network with 1000 distinct samples of training data where the output (-1 to 1) is averaged from 10 runs of the same sample, and therefore less precise - Training the network with 10000 distinct samples of training data where the output is just binary (-1 or 1), and therefore very imprecise - Something else? More context: I am creating an AI for an imperfect information 4-player card game with 32 cards. I already have implemented a MinMax-based tree search that solves the perfect information version of the game, i.e. this can deliver me the score that is reached at the end of the game, assuming perfect play of all players, for the case that the full card distribution is known to all players. In reality, of course, each player only knows their own hand of cards. For the purposes of the AI I get around this by repeating the perfect information game many times while randomly assigning the unknown cards. I now want to train a neural network that predicts the win probability that is reached with a given hand of cards (of course, not knowing the cards of the other players). I imagine this would be a value between -1 and 1, where 0 means 50% win probability and 1 means 100% win probability. The input features would be 32 binary values, representing the hand of cards. I want to use my MinMax algorithm to generate the training data for the network. In a perfect world, I would iterate trough 1 Million random hands of cards and determine a precise win probability for each of them by playing 1 Million randomized perfect information games based on that hand. The reality, however, is that my MinMax algorithm is fairly expensive, and I can't improve it much more. So the total amount of perfect information games I can go through is limited. Now I am wondering: How do I maximize the effectiveness of my training data generation process? I guess the tradeoff is: - If I go through many perfect information iterations for each given hand, the win probability in my training data will be fairly close to the 'real' win probability, so very precise - If I go through fewer (or in extreme case, only 1) perfect information iterations for each given hand, the win probability in my training data will be less precise. However, statistically it should still all even out in the end. Plus, I will have a lot more training samples, covering a much wider range of situations. In that context I am wondering which side of this spectrum - precision vs. amount - will give me the better tradeoff. Side note: For my validation data set, of course I will have to determine a fairly precise win probability for at least some samples, where I will probably use more iterations per sample than for the training data.
Scalar predictor - is it better to have a lot of training data that is less precise? Or fewer training data that is more precise?
CC BY-SA 4.0
null
2022-06-26T08:17:52.913
2022-06-26T15:27:20.930
null
null
137405
[ "neural-network", "training" ]
Super-interesting question! My approach to the problem would be not to do any preprocessing on the data. That is, feed all the experiments to the network with the target being the 0/1 variable corresponding to lose/win. For example, if you have a dataset like
```
| hand of cards     | game output |
|-------------------|-------------|
| [1, 0, 0, ..., 1] | 1           |
| [1, 0, 0, ..., 1] | 0           |
| [1, 0, 0, ..., 1] | 1           |
| [0, 1, 1, ..., 1] | 1           |
| [0, 1, 1, ..., 1] | 1           |
| [0, 1, 1, ..., 1] | 1           |
```
instead of training the model with
```
| hand of cards     | winning prob |
|-------------------|--------------|
| [1, 0, 0, ..., 1] | 0.66         |
| [0, 1, 1, ..., 1] | 1            |
```
I would train the model with the first dataset and try to predict the game output. That is, use a classification model instead of a regression model. Of course, with this approach your dataset will have entries with the same features and different targets; however, this is not a problem, since you can interpret the output of the classification model as the probability of winning or losing. From my experience, when I've dealt with similar problems, this approach is the one that gave the best results. In addition, I would try an approach using decision trees, such as XGBoost or a simple RandomForest, since they tend to work better with the kind of data you are dealing with.
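A sketch of that setup (assuming `X` is the n_games x 32 binary matrix of hands, `y` the 0/1 game outcomes, and `new_hand` a length-32 0/1 NumPy array; the model choice is just an example):
```
from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X, y)                                    # one row per simulated game, target 0/1

# probability of winning for a new hand; rescale to [-1, 1] if you prefer that range
p_win = clf.predict_proba(new_hand.reshape(1, -1))[0, 1]
score = 2 * p_win - 1
```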
Deep Learning accuracy vs Confusion Matrix accuracy
"precision: 0.8492" is on the training data. "val_precision: 0.9168" is on the validation data. "Confusion Matrix" is on the test data. The precision values are different because they are three different data sets. One possible reason that the values are smaller on the test data set is that the model is overfitting to the training dataset.
112136
1
112141
null
1
44
I have a regression task for which my best models has a Mean Absolute Error (MAE) of approximately 15,000. The median value of the target variable is approximately 150,000. I want to report that the error is ~10% of the median. Is there a name for such metric? i.e. dividing the MAE by the median? If not, is there an alternative error that quantifies percentage?
MAE divided by median metric
CC BY-SA 4.0
null
2022-06-26T09:31:04.143
2022-06-26T13:20:31.223
null
null
127992
[ "machine-learning", "regression", "metric", "error-handling" ]
You are implicitly assuming a Laplace distribution over your targets. By itself a loss value has little meaning; however, if you associate it with a distribution you can judge how good it is: the Mean Absolute Error is the MLE of the "variance" $b$ of the Laplace distribution ("variance" is a slight abuse of the term, since the variance is actually $2b^2$). However, this rests on an assumption, homoscedasticity, which might be true or might be false. In other words, saying that the ratio between the loss and the median target is 10% only describes how much noise, on average, your data has with respect to your model. This means there is not much to read into the ratio, since it depends on a strong assumption and on the model you are fitting (maybe with a very wide and deep NN you could do much better, but that does not mean it performs as well as the ratio suggests). In my opinion, then, you might want to avoid using that "measure", since it can be misleading and does not convey much information to a human, although you can still use it to compare your model against other models.
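For reference, a sketch of that standard connection (assuming model predictions $\mu_i$ and i.i.d. Laplace noise):
\begin{equation}
p(y_i \mid \mu_i, b) = \frac{1}{2b}\exp\left(-\frac{|y_i-\mu_i|}{b}\right), \qquad \hat{b}_{\mathrm{MLE}} = \frac{1}{n}\sum_{i=1}^{n}|y_i-\mu_i| = \mathrm{MAE}.
\end{equation}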
High RMSE and MAE and low MAPE
The reason is the wider range of your output variable. Consider the following two cases, - Real value was 99, prediction is 101 - Real value was 5520, prediction is 5522 In both cases, the absolute error is 2, but relative error in first case is much larger (2% - 2/101) than second case (0.035% 2/5520). Absolute and relative metrics are measuring different aspects of the prediction. So one model is not better than the other in absolute sense (pun intended). Which metric to value depends on your application. When the outcome range is wide (probably your case) and skewed, relative error measurements are better than absolute error measurements.
112154
1
112156
null
0
98
I see that the MSE metric provided by the model.fit (history) is slightly different from the MSE calculated by model.evaluate? Can anyone help? ``` # fit model Hist = model_rna.fit(x_train, y_train, validation_data=(x_val, y_val), callbacks=[early_stopping], verbose=2, epochs=epochs) # get last trained mse hist = pd.DataFrame(Hist.history) mse_train = [i for i in np.array(hist['mse']).tolist()] print(mse_train[-1]) ``` The result is 0.03789380192756653 ``` # evaluate the trained model model_rna.evaluate(x_train, y_train) ``` The result is: 5/5 [==============================] - 0s 4ms/step - loss: 0.0379 - mse: 0.0379 - acc: 0.0000e+00 [0.03786146640777588, 0.03786146640777588, 0.0] If I do the "manual" calculation: ``` Sum_of_Squared_Errors= np.sum( (y_train - modelo_rna.predict(x_train))**2 ) print(Sum_of_Squared_Errors/len(y_train)) ``` The result is: 0.03786148292614872 This is exactly what I found via model.evaluate() but slightly different of History of model.fit(). Why am I finding this tiny difference? My training and validation samples are fixed.
Small difference in metrics in KERAS for the same model
CC BY-SA 4.0
null
2022-06-27T01:14:41.313
2023-02-03T16:04:45.677
null
null
137441
[ "python", "deep-learning", "neural-network", "keras", "tensorflow" ]
I found the explanation here: [https://github.com/tensorflow/tensorflow/issues/29964](https://github.com/tensorflow/tensorflow/issues/29964) [https://stackoverflow.com/questions/59118430/keras-model-evaluate-on-training-and-val-set-differ-from-the-acc-and-val-acc](https://stackoverflow.com/questions/59118430/keras-model-evaluate-on-training-and-val-set-differ-from-the-acc-and-val-acc) [https://stackoverflow.com/questions/44843581/what-is-the-difference-between-model-fit-an-model-evaluate-in-keras](https://stackoverflow.com/questions/44843581/what-is-the-difference-between-model-fit-an-model-evaluate-in-keras) In short, the metric reported in the `fit()` history is averaged over the batches of the epoch while the weights are still being updated, whereas `evaluate()` (and the manual calculation) uses the final weights on the whole set, hence the tiny difference. Hope this helps others.
Keras P/R metrics at different thresholds during training
You can see the metrics value for each threshold along the fitting process if you explicitely instantiate the corresponding metric class for each threshold, as follows: ``` model.compile( optimizer=keras.optimizers.Adam(learning_rate=1e-2), loss='categorical_crossentropy', metrics=[metrics.Recall(thresholds=0.6), metrics.Recall(thresholds=0.9)]) model.fit(X_train, y_train, epochs=10, validation_data=(X_test, y_test)) ``` and as you can see in the image below, for each epoch you can see that the first recall value (with threshold 0.6) is higher than the second one (threshold 0.9) as expected: [](https://i.stack.imgur.com/RJGQr.png) And for your case, to build the list of metrics objects programatically, where you can now see 3 recalls per epoch: ``` thresholds = [0.6, 0.7, 0.9] metrics_objs_list=[metrics.Recall(thresholds=thr) for thr in thresholds] ``` [](https://i.stack.imgur.com/8SzeZ.png)
112157
1
112158
null
1
28
I am a beginner Python user. My weather data is made up of various variables. It consists of three months of one-minute time data, ambient environmental data (sunlight, ambient temperature, wind speed, etc.), internal environmental data (internal temperature, humidity), smart farm internal control variables (shielding screen, exhaust fan, ceiling fan, etc.), control set temperature (ventilation temperature, heating temperature), energy consumption (target). Through these three-month data, a model that minimizes energy consumption of smart farms should be created. Thereafter, one month's worth of data is additionally provided, and there is no smart farm internal control variable in this data. My model should be used to predict these internal control variables and check the amount of heat supplied to make them as low as possible. (FYI, 1 of the internal control variables shows fan, 0 shows fan stop, and 0 to 100 shows light shielding or heat shielding, 50% open at 50 or 100% open at 100).) I'm having a hard time solving this problem. I would appreciate it if you kindly let me know how to proceed with the analysis and related data or analysis techniques.
I want to make a model that minimizes the heat supply, what should I do?
CC BY-SA 4.0
null
2022-06-27T02:54:12.623
2022-06-27T09:56:42.523
null
null
137444
[ "python", "jupyter" ]
Modeling any industrial process is quite complex because there are a lot of physical, non-linear effects. That's why I usually recommend simulating the most important processes first with a scientific tool like Scilab, Simulink, or LabVIEW: [https://www.scilab.org/use-cases/powerful-modeling-and-big-data-analysis-energy-transition](https://www.scilab.org/use-cases/powerful-modeling-and-big-data-analysis-energy-transition) A simulation is useful not only for understanding how the underlying physics works, but also for applying a partial or complete machine learning model to optimize energy consumption. Finally, you can apply a machine learning model with Reinforcement Learning: [https://github.com/ADGEfficiency/energy-py](https://github.com/ADGEfficiency/energy-py) [https://github.com/smasis001/smart-grid-peak-tariff-optimization/blob/master/notebooks/OptimizationAlgorithm.ipynb](https://github.com/smasis001/smart-grid-peak-tariff-optimization/blob/master/notebooks/OptimizationAlgorithm.ipynb) Or Gaussian: [https://github.com/jaimergp/easymecp](https://github.com/jaimergp/easymecp) These are just examples; there are plenty of existing energy optimization models available on GitHub.
What kind of model should I use?
Use an unsupervised method such as clustering to group users, then assign marketing campaigns that have been used by others within the same cluster.
112165
1
112166
null
4
418
Suppose we have a dataset in which some samples have the same feature values but different targets. It can be a regression or a classification problem. What should we do with them? Should we remove them, or is this normal, so we can leave these samples in the training set?
What can be done with same samples with different target?
CC BY-SA 4.0
null
2022-06-27T11:36:11.447
2022-06-28T12:12:54.223
null
null
108053
[ "classification", "regression" ]
This is completely normal; leave them in. An easy example is in an ANOVA problem (which can be viewed as a regression) where multiple subjects in the same group (so same group "value" where group is the lone feature) will have different outcomes in $y$. All this means is that, given your particular feature(s), you cannot get perfect predictions, but you should not expect to be able to get perfect predictions, anyway.
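A tiny illustration of what a regression model does with such rows (a sketch; the numbers are made up): it simply learns the conditional mean of the target for each repeated input.
```
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1], [1], [1], [2], [2], [2]])   # identical inputs, different targets
y = np.array([10, 12, 14, 20, 22, 24])

model = LinearRegression().fit(X, y)
print(model.predict([[1], [2]]))               # [12, 22]: the per-group means
```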
How to set different weights for different training samples?
In scikit-learn, most algorithms (SVM, Decision Trees, SGD, etc.) have a [sample_weight](https://scikit-learn.org/stable/glossary.html?highlight=sample_weight#glossary-sample-props) argument that you can pass when fitting. In your case, you could provide a different weight based on which of the 3 datasets the data point comes from. If the algorithm you want to use doesn't provide the `sample_weight` argument, you can always sample with replacement. Simply put, you give each sample weight, and then you create your dataset by sampling them with replacement. This means that instances with higher weights may appear multiple times in your dataset.
112171
1
112173
null
1
228
I've been trying to implement object detection using a CNN architecture like this: ``` model = keras.Sequential([ keras.layers.Input(shape=(320, 320, 1)), keras.layers.Conv2D(filters=16, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.MaxPool2D((2, 2), strides=2), keras.layers.Conv2D(filters=32, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.MaxPool2D((2, 2), strides=2), keras.layers.Conv2D(filters=64, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.MaxPool2D((2, 2), strides=2), keras.layers.Conv2D(filters=128, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.MaxPool2D((2, 2), strides=2), keras.layers.Conv2D(filters=256, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.MaxPool2D((2, 2), strides=2), keras.layers.Conv2D(filters=512, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.MaxPool2D((2, 2), strides=1, padding="same"), keras.layers.Conv2D(filters=1024, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.Conv2D(filters=1024, kernel_size=(3, 3), activation="leaky_relu", padding="same"), keras.layers.Conv2D(filters=5, kernel_size=(1, 1), activation="relu", padding="same"), ]); model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.000005), loss=yolo_loss, run_eagerly=True); ``` However, while the loss seems to decrease nicely, the validation loss only fluctuates around 300. [Loss vs Val Loss](https://i.stack.imgur.com/HxMfY.png) This model is trained on a dataset of 250 images, where 200 are actually used for training while 50 are used for cross-validation. Why could this be? Could my model be too deep? Do I need to reduce my learning rate even more? Or do I just not have enough data? For reference I am trying to mimic the Tiny YoloV2 architecture shown [here](https://www.researchgate.net/publication/331423658/figure/fig2/AS:962150793244727@1606406033806/Block-diagram-of-architecture-YOLOv2tiny.png)
Loss decreases, but Validation Loss just fluctuates
CC BY-SA 4.0
null
2022-06-27T14:52:21.283
2022-06-28T11:07:46.777
2022-06-27T14:52:45.530
137465
137465
[ "machine-learning", "tensorflow", "cnn", "computer-vision", "object-detection" ]
It looks like your model is overfitting: it is learning the training dataset, but that learning does not generalize to the validation dataset. You can try to reduce the complexity of the model by simplifying it (fewer layers, fewer neurons, fewer filters, etc.) or by adding regularization (L1, L2, dropout, etc.); a sketch of the latter is shown below.
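For example, a pattern for adding regularization to the convolutional blocks in Keras (a sketch only, not a tuned replacement for the model in the question; the rates and penalty factors are placeholder values, and a `LeakyReLU` layer is used in place of the activation string):
```
from tensorflow import keras
from tensorflow.keras import layers, regularizers

# pattern to apply to each Conv2D block of the model in the question
block = keras.Sequential([
    layers.Conv2D(16, (3, 3), padding="same",
                  kernel_regularizer=regularizers.l2(1e-4)),  # L2 weight penalty
    layers.LeakyReLU(),
    layers.MaxPool2D((2, 2), strides=2),
    layers.Dropout(0.25),  # randomly zero some activations during training
])
```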
Why is my validation loss going up while my validation accuracy also goes up?
It is possible that accuracy and cross entropy increase at the same. For example, for a positive sample your predicted probability could go from 0.4 to 0.1 (still wrong but worse increasing the entropy loss) and for another positive sample your predicted probability could change from 0.49 to 0.51 (changes from wrong to right improving accuracy). The first case would increase your entropy loss while the second would improve accuracy without significantly changing the entropy loss. It's a little difficult to say if cross entropy is not a good metric for your case without knowing any details. But most likely you would want to stick with it for a couple of reasons. For example, the cross entropy gives better probability estimates and has nice properties for training with gradient descent (smooth gradients, training doesn't stall for large values because the `log` off-sets the `exp` in the sigmoid activation etc.).
112179
1
112180
null
0
55
In deterministic (software) systems we have a set of business requirements and, ideally, given enough resources, such a system can be fully specified: for each input or set of actions within a context, the expected output is defined. Functional QA then merely assesses whether the system follows the rules as described. Even usability, endurance, stress and other kinds of settings can be fully defined and thus become part of the requirements. How, then, does one effectively test and detect differences between required and actual behaviour of Artificial Intelligence systems?
How to exercise Quality Assurance Engineering principles to Artificial Intelligence systems?
CC BY-SA 4.0
null
2022-06-27T18:48:06.353
2022-06-30T07:21:46.430
null
null
24885
[ "machine-learning", "ai", "data-quality" ]
Without being sure if the approach makes sense but one could take the various steps of the lifecycle of an Artificial Intelligent system and thus attempt to see how as a Quality Assurance Engineer can ensure that the quality is high in each and every step: - Context Ensure that there are clear specifications and defined requirements before proceeding with any other testing - Collecting training data Ensure data has a variety of sources and necessary variety to avoid biases Ensure that after cleaning enough large dataset has remained Ensure features are sane and within the expected range after cleaning View training data and sample them by eye to see if they make sense Write rule based scripts to check if what is generally expected is found within training data Ensure that training data represent the targets/outputs in as much the same portion as possible - Testing data Ensure that test data are not merely a sample of the training data but at least some of them reflect the business goals (defining expected outcomes as test oracles) and are characteristic examples Ensure that testing data are used only once and then are thrown away otherwise they will be used for the next model Ensure that testing data, even smaller in size, are still a representative portion of the training data Ensure testing data represent the very latest samples that we expect and reflect at least the near future Ensure that the system is tested against totally random inputs (noise) and it is returning outputs that are of low certainty Ensure that using GAN-based metamorphic approaches ([18] PDF - arxiv.org ) will test the AI system using inputs from the same space as the original data Ensure that QAs will have generated by hand a few new test cases and have manually set (using their brain) the expected output Ensure that past scenarios executed in production by real users can be replicated fully to be used as test-input Robustness: refers to the resilience of an AI component towards perturbations Ensure that small variations, perturbations, in the testing sample will yield similar output to the original and will not yield highly different results (ensure non high variance) - Model wise Ensure that a baseline model is always there to compare against Ensure that the proposed model performs better than the baseline model Ensure that the new proposed model performs better than the latest proposed model Ensure that easy to create dummy models using Naive bayes for classification or Linear Regression for regression will not perform better than the proposed model Ensure that a low cost to create rule-based, non ai, model will not work better than the proposed model Ensure that the model should also provide the probability of the certainty of the model that the output is a good/average/bad prediction Ensure that the model is non polarized for a few parameters and therefore non prone to AI-attacks (where some inputs are being changed and change the entire output to our own wish) Ensure that an ensemble model, is not overfitting and it works as good or better than any of the individual underlying models Ensure that using a Teacher-Student model, that the Teacher is slower yet more accurate model than the Student which is expected to be less accurate but more efficient Ensure that self-adaptive and self-learning systems (e.g. 
Reinforcement Learning) will be able to self-assess themselves to make sure that they are not making Interpretability Ensure that using the training data to build an interpretable model that fits the predictions of our large model, then the interpretation of the parameters make sense Ensure that the model is making predictions based on parameters that the current theory supports and does not have any weird pattern which might lead wrong model - Checking output qualitatively Ensure that the output of the model for very high probability of certainty are truly delivering a good answer Ensure that the bad answers of the model are handled in such a way that the user retains his/her trust to the overall system instead of being misled Ensure that the model generates output that is aligned with the business goals and these answers are useful to the user - Performance / Efficiency Ensure that the model generates answers fast enough in order for the user experience to not be severely impacted by them Ensure that the time to train the new model will not need so large time as to miss the deadlines Ensure that minimal resources are provided to AI models which are being under development in comparison to the AI model which is in production and that these are separated without having one (test/staging environment) consuming resources from the other (production) - Production monitoring Ensure that a feedback system have been set in place in order for users to be able and report unwanted or misleading output of the AI Ensure that the feedback reported by the users is significantly high Ensure that the measured error of the system while in production is within the acceptable levels similar to the ones that were measured during the execution of the model to the testing data Ensure that the measured error of the system remains steady as new inputs are being received and does not have a declining trend - User output Ensure that the output of the model and its certainty probability are reflected correctly in the app Ensure the using as input an instance which is very far away from the current distribution of the model will not allow the user to proceed with using the AI system Ensure that having as output a prediction that has a low certainty will provide the user manual or rule-based alternatives to accomplish his/her tasks - Data privacy: refers to the ability of an AI component to preserve private data information Example: Having a chatbot and having it accumulate knowledge for a certain user, asking this language model information regarding some other user, should not be delivered. Each language model should be agnostic of other language models - Security: measures the resilience against potential harm, danger or loss made via manipulating or illegally accessing AI components Ensure that process of AI model is transparent and that there is a history of the changes that have happened to the deployed AI model - Fairness: Avoid problems in human rights, discrimination law and other ethical issues Ensure that the model output will comply to some "values" which are coded in rule based scripts Example: A Sentiment analysis to never produce that the output of a language model will be very negative
Explainable AI solutions and packages in Python
A few which I am aware of are: permutation importance (the Python package for this is [ELI5](https://eli5.readthedocs.io/en/latest/overview.html)); LIME, SHAP, PDP and dependence plots you have already covered. To understand AI explainability I would highly suggest:
- reading https://www.bankofengland.co.uk/working-paper/2019/machine-learning-explainability-in-finance-an-application-to-default-risk-analysis
- taking this course on Kaggle: https://www.kaggle.com/learn/machine-learning-explainability
112185
1
112188
null
0
159
I'm new to TensorFlow and keras and I'm trying to learn with an example using this code in google's colab ``` import tensorflow as tf import pandas as pd import matplotlib.pyplot as plt from sklearn.compose import make_column_transformer from sklearn.preprocessing import MinMaxScaler, OneHotEncoder from sklearn.model_selection import train_test_split insurance = pd.read_csv("https://raw.githubusercontent.com/stedy/Machine-Learning-with-R-datasets/master/insurance.csv") #Create a column transformer ct = make_column_transformer( (MinMaxScaler(),["age","bmi","children"]), (OneHotEncoder(handle_unknown="ignore"),["sex","smoker","region"]) ) X = insurance.drop("charges",axis=1) y = insurance["charges"] X_train, X_test, y_train, y_test = train_test_split(X,y,test_size=0.2, random_state=42) #Fit the column transformer to our training data ct.fit(X_train) #Transform training and test data with normalization (MinMaxScaler) and OneHotEncode X_train_normal = ct.transform(X_train) X_test_normal = ct.transform(X_test) insurance_model_4=tf.keras.Sequential([ tf.keras.layers.Dense(100), tf.keras.layers.Dense(10), tf.keras.layers.Dense(1) ] ) insurance_model_4.compile(loss=tf.keras.losses.mae, optimizer=tf.keras.optimizers.Adam(),metrics=['mae']) insurance_model_4.fit(tf.expand_dims(X_train_normal, axis=-1), y_train, epochs=200) y_preds = insurance_model_4.predict(X_test_normal) X_test_normal.shape #this gives (268, 11) y_preds.shape #this gives (268,11,1) ``` my issue is that I can't figure out how to get the actual values for predictions in y_preds, I was expecting an array with shape (268,1), that is the 268 predictions for each input in X_test_normal.¿How can I get the value of the predictions? Thanks.
Keras model.predict get the value for the predicion
CC BY-SA 4.0
null
2022-06-28T00:05:48.510
2022-06-28T06:28:27.603
null
null
137483
[ "keras" ]
I don't understand why you would need to expand the dimensions of `X_train_normal` during `.fit()`. Remove that part to simply fit on `X_train_normal`, which would give you the shape of `y_pred` as `(268, 1)` as you expected. So, Replace: `insurance_model_4.fit(tf.expand_dims(X_train_normal, axis=-1), y_train, epochs=200)` with: `insurance_model_4.fit(X_train_normal, y_train, epochs=200)`
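To illustrate, a minimal sketch of the fix described above, reusing the variable names from the question:

```
# Fit directly on the 2-D transformed features; no extra dimension is needed
# because Dense layers expect input of shape (batch_size, n_features).
insurance_model_4.fit(X_train_normal, y_train, epochs=200)

y_preds = insurance_model_4.predict(X_test_normal)
print(y_preds.shape)   # expected: (268, 1)
print(y_preds[:5])     # the first five predicted charges
```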
Is there a way to get y_pred values from saved Keras model?
If I've figured it out correctly, the answer is no. The point is that your saved model solely contains the network architecture and the parameters it has. What you want relates to the recall phase where you have to provide input to get output. This means that you need input data to be fed to your network in order to get output. What you want can be done using another approach. First, load your network and feed your data to your model. After that, get the outputs and store the inputs and outputs alongside each other using `Numpy` save method or maybe .h5 format. --- The flow for achieving the `y_pred` can be like the following sequence of actions: - Load your model. - Feed your data to your model and get y_pred. - define a Numpy array of inputs and a Numpy array of outputs. - Store inputs, real outputs and y_preds using the methods which are available. - Later, when you want to make your confusion matrix, you can load your inputs and outputs, and the real outputs to make your matrix.
112219
1
112229
null
0
440
I understand that Catboost regressor uses target-based encoding to convert categorical features to numerical features when training. But how does Catboost deal with categorical features at predict time when the labels are completely unknown? How does an object at predict time go down the Catboost decision trees if the decision trees are expecting to see categorical feature values as numbers? I tried looking at the official documentation but could only find when the encoding was done during training when the labels are available.
How does Catboost regressor deal with categorical features at predict time?
CC BY-SA 4.0
null
2022-06-29T00:40:48.597
2022-06-30T14:29:40.337
null
null
137526
[ "regression", "encoding", "gradient-boosting-decision-trees", "catboost" ]
In a simplified way of putting it, we substitute the category id with the mean value of the training set target for this category. CatBoost implements some tricks like only using the preceding values when encoding the train set, but transforming the test set will use the whole train statistics anyway ([https://github.com/catboost/catboost/issues/838](https://github.com/catboost/catboost/issues/838)). What happens when a previously unseen category is encountered in the test set? According to [https://towardsdatascience.com/categorical-features-parameters-in-catboost-4ebd1326bee5](https://towardsdatascience.com/categorical-features-parameters-in-catboost-4ebd1326bee5) unseen categories receive a value based upon prior (controlled by CTR arguments). In other words, same as [https://catboost.ai/en/docs/concepts/algorithm-main-stages_cat-to-numberic](https://catboost.ai/en/docs/concepts/algorithm-main-stages_cat-to-numberic) with `countInClass` being zero. (category_encoders implementation of `CatBoostEncoder()` seems to just use the average train target value.)
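A toy sketch of this behaviour (the data and column names here are made up for illustration): you pass the raw categorical column to CatBoost, and it handles the encoding itself at both fit and predict time, falling back on the prior for categories it has never seen.

```
# Hypothetical sketch: CatBoost encodes categoricals internally,
# including previously unseen categories at predict time.
import pandas as pd
from catboost import CatBoostRegressor

train = pd.DataFrame({"city": ["a", "a", "b", "b"], "size": [1, 2, 3, 4]})
y = [10.0, 12.0, 20.0, 22.0]

model = CatBoostRegressor(iterations=50, verbose=False, cat_features=["city"])
model.fit(train, y)

# "c" was never seen during training: its encoded value falls back on the prior,
# so prediction still works without any manual encoding on our side.
test = pd.DataFrame({"city": ["a", "c"], "size": [2, 3]})
print(model.predict(test))
```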
Catboost Categorical Features Handling Options (CTR settings)?
I found out that in order to set the `ctr` parameters and all the components one should pass a list of strings, each string should contain the `ctrType` and one of its component: - The first word of the string should be a ctrType for example Borders: (click here for catboost parameters) - Then one component of the ctrType should follow. For example TargetBorderType=5. - All together 'Borders:TargetBorderType=5'. - Repeat the procedure to set an other component and add the new string to the list. Example with two components set: ``` simple_ctr = ['Borders:TargetBorderType=Uniform', 'Borders:TargetBorderCount=50'] ```
112226
1
112335
null
1
414
I have a learning to rank task at hand and I want to use the [lightgbm implementation of LambdaMART](https://lightgbm.readthedocs.io/en/latest/pythonapi/lightgbm.LGBMRanker.html#). I'm also following this [notebook](https://everdark.github.io/k9/notebooks/ml/learning_to_rank/learning_to_rank.html).

```
param = {
    "task": "train",
    "num_leaves": 255,
    "min_data_in_leaf": 1,
    "min_sum_hessian_in_leaf": 100,
    "objective": "lambdarank",
    "metric": "ndcg",
    "ndcg_eval_at": [1, 3, 5, 10],
    "learning_rate": .1,
    "num_threads": 2}

res = {}
bst = lgb.train(
    param, train_data,
    valid_sets=[valid_data], valid_names=["valid"],
    num_boost_round=50, evals_result=res, verbose_eval=10)
```

In the params, the objective is set to `lambda-rank`, which is another learning to rank algorithm. My question is, how do I implement `LambdaMART` with `lightgbm`? What set of parameters should I use to implement `LambdaMART` with `lightgbm`?
How can I implement lambda-mart with lightgbm?
CC BY-SA 4.0
null
2022-06-29T06:46:19.663
2022-07-02T16:43:38.300
null
null
58736
[ "machine-learning", "python", "information-retrieval", "lightgbm", "learning-to-rank" ]
Looks like the implementation of lambdaMART in the notebook referenced in the question is correct. From the paper titled [From RankNet to LambdaRank to LambdaMART: An Overview](https://www.microsoft.com/en-us/research/wp-content/uploads/2016/02/MSR-TR-2010-82.pdf), it is clearly mentioned in the first line of the paper that:

> LambdaMART is the boosted tree version of LambdaRank, which is based on RankNet.

So, the code that's pasted above clearly says that the objective function is LambdaRank. There is one more argument called `boosting_type`, which is set to `gbdt` by default. The `LambdaRank + gbdt` combination is what `LambdaMART` is in essence. So, just pasting the above code for completion's sake:

```
param = {
    "task": "train",
    "num_leaves": 255,
    "min_data_in_leaf": 1,
    "min_sum_hessian_in_leaf": 100,
    "objective": "lambdarank",
    "boosting_type": "gbdt",
    "metric": "ndcg",
    "ndcg_eval_at": [1, 3, 5, 10],
    "learning_rate": .1,
    "num_threads": 2
}

res = {}
bst = lgb.train(
    param, train_data,
    valid_sets=[valid_data], valid_names=["valid"],
    num_boost_round=50, evals_result=res, verbose_eval=10)
```

This is how we can use lightgbm to train lambdaMART.
How does LightGBM deal with value scale?
Generally, in tree-based models the scale of the features does not matter. This is because at each tree level, the score of a possible split will be equal whether the respective feature has been scaled or not. You can think of it like this: we're dealing with a binary classification problem and the feature we're splitting takes values from 0 to 1000. If you split it on 300, the samples <300 belong 90% to one category while those >300 belong 30% to one category. Now imagine this feature is scaled between 0 and 1. Again, if you split on 0.3, the samples <0.3 belong 90% to one category while those >0.3 belong 30% to one category. So you've changed the splitting point, but the actual distribution of the samples remains the same regarding the target variable.
112232
1
112265
null
1
45
I was wondering if tuning a seed with cross-validation in order to maximize the performance of an algorithm heavily based on a randomness factor is a good idea or not. I have created an Extra Tree Classifier which performs very badly with basically every seed except the one I found by using grid search. I think this is not a problem because I really don't care about how the conditions were set as long as they classify correctly; therefore, I should have the ability to try running the algorithm with different seeds until it works, in order to find the best set of random conditions for each split. Also, note that the test is done with Leave One Out Cross Validation. Am I right?
Grid Searching seed in randomized machine learning
CC BY-SA 4.0
null
2022-06-29T10:15:41.033
2022-06-30T08:39:32.150
null
null
137317
[ "machine-learning", "random-forest", "grid-search", "gridsearchcv", "randomized-algorithms" ]
It's definitely an error to select an "optimal" random seed. If performance depends a lot on the random seed, it means that the model always overfits, i.e. the patterns used by the model depend on the specific subset used as training data, and the performance on the test set is due mostly to chance. In your scenario, the model doesn't really work better with seed X; it happens that this particular seed leads to good performance on this particular test set.

- It wouldn't work for another model trained with the same seed on a different subset
- It wouldn't work for a different test set.

Also I assume that you didn't apply the correct methodology for tuning hyper-parameters, otherwise you would certainly have seen the problem. When applying grid search, the performance of the best parameters (and only these) should be estimated again on a fresh test set (because parameter tuning is a kind of training, so performance on the training set is not reliable).

> I think this is not a problem because I really don't care about how the conditions were set as long as they classify correctly,

This is a mistake: the classifier doesn't classify correctly; actually the performance you obtain is not reliable, it's an artifact. Testing the model on a fresh test set is the only way to obtain a reliable estimate of the performance.
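A minimal sketch of the methodology described above (hold out a fresh test set, tune only on the training part, then estimate the chosen configuration once on the held-out data). The dataset and parameter grid here are placeholders for illustration:

```
# Hypothetical sketch: tune hyper-parameters with CV on the training split only,
# then report performance once on a held-out test set.
from sklearn.datasets import load_iris
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

grid = GridSearchCV(
    ExtraTreesClassifier(random_state=0),          # the seed is fixed, not tuned
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5]},
    cv=5,
)
grid.fit(X_train, y_train)

print("CV score of best params:", grid.best_score_)
print("Held-out test score:    ", grid.score(X_test, y_test))
```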
Grid_search (RandomizedSearchCV) extremely slow with SVM (SVC)
It looks like your current approach is taking a long time because you are trying to search a large space of hyperparameters. One way to make the hyperparameter search more efficient is to use a smaller number of values for each hyperparameter, as this will reduce the total number of combinations that need to be tried. There are several ways to optimize the hyperparameter tuning process for an SVM, including the following: - Use a smaller sample of the dataset for hyperparameter tuning, as the processing time will be proportional to the size of the dataset. - Use a more efficient algorithm for hyperparameter tuning, such as Bayesian optimization or genetic algorithms, which can find the optimal hyperparameters in a more efficient manner. - Use a more efficient implementation of the SVM algorithm, such as the LibSVM library, which can be faster than the default SVM implementation in scikit-learn. - Try different combinations of hyperparameters manually, rather than using grid search or randomized search, which can be computationally intensive. - Use a more efficient kernel, such as the linear kernel, which can be faster to train than more complex kernels such as the polynomial or RBF kernels. - Use a smaller number of hyperparameters, as the processing time will be proportional to the number of hyperparameters being tuned. - Use a coarser grid for hyperparameter tuning, such as increasing the stepsize for the values of the hyperparameters, as this can reduce the number of combinations to be tested. Overall, it is important to carefully select and optimize the hyperparameters for an SVM to improve its performance and reduce the processing time. Also, check - [SVC classifier taking too much time for training](https://stackoverflow.com/a/54004026/14045537)
112252
1
112254
null
1
418
I am using keras and Jupyter notebook and want to make my results reproducible every time I run it. This is the tutorial I used: [https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/](https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/). I copied his code from the Stacked LSTMs with Memory Between Batches part. This is my cell1 in Jupyter Notebook; I only used the CPU to avoid randomness brought by the GPU, making sure the same results can be reproduced every time.

```
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"  # see issue #152
os.environ["CUDA_VISIBLE_DEVICES"] = ""
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
```

This is cell2, strictly following the suggestions from this question [https://stackoverflow.com/questions/32419510/how-to-get-reproducible-results-in-keras](https://stackoverflow.com/questions/32419510/how-to-get-reproducible-results-in-keras)

```
# Seed value
# Apparently you may use different seed values at each stage
seed_value= 0

# 1. Set the `PYTHONHASHSEED` environment variable at a fixed value
import os
os.environ['PYTHONHASHSEED']=str(seed_value)

# 2. Set the `python` built-in pseudo-random generator at a fixed value
import random
random.seed(seed_value)

# 3. Set the `numpy` pseudo-random generator at a fixed value
import numpy as np
np.random.seed(seed_value)

import tensorflow as tf
tf.compat.v1.set_random_seed(seed_value)
tf.random.set_seed(seed_value)
```

This is cell3 from his code (changed a little, for example from `keras` to `tensorflow.keras`)

```
import numpy
import matplotlib.pyplot as plt
from pandas import read_csv
import math
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, LSTM
from sklearn.preprocessing import MinMaxScaler
from sklearn.metrics import mean_squared_error

# convert an array of values into a dataset matrix
def create_dataset(dataset, look_back=1):
    dataX, dataY = [], []
    for i in range(len(dataset)-look_back-1):
        a = dataset[i:(i+look_back), 0]
        dataX.append(a)
        dataY.append(dataset[i + look_back, 0])
    return numpy.array(dataX), numpy.array(dataY)

# load the dataset
dataframe = read_csv('airline-passengers.csv', usecols=[1], engine='python')
dataset = dataframe.values
dataset = dataset.astype('float32')

# normalize the dataset
scaler = MinMaxScaler(feature_range=(0, 1))
dataset = scaler.fit_transform(dataset)

# split into train and test sets
train_size = int(len(dataset) * 0.67)
test_size = len(dataset) - train_size
train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:]

# reshape into X=t and Y=t+1
look_back = 3
trainX, trainY = create_dataset(train, look_back)
testX, testY = create_dataset(test, look_back)

# reshape input to be [samples, time steps, features]
trainX = numpy.reshape(trainX, (trainX.shape[0], trainX.shape[1], 1))
testX = numpy.reshape(testX, (testX.shape[0], testX.shape[1], 1))
```

And this is cell4,

```
# create and fit the LSTM network
batch_size = 1
model = Sequential()
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True, return_sequences=True))
model.add(LSTM(4, batch_input_shape=(batch_size, look_back, 1), stateful=True))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
for i in range(100):
    model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()

# make predictions
trainPredict = model.predict(trainX, batch_size=batch_size)
model.reset_states()
testPredict = model.predict(testX, batch_size=batch_size)

# invert predictions
trainPredict = scaler.inverse_transform(trainPredict)
trainY = scaler.inverse_transform([trainY])
testPredict = scaler.inverse_transform(testPredict)
testY = scaler.inverse_transform([testY])

# calculate root mean squared error
trainScore = math.sqrt(mean_squared_error(trainY[0], trainPredict[:,0]))
print('Train Score: %.2f RMSE' % (trainScore))
testScore = math.sqrt(mean_squared_error(testY[0], testPredict[:,0]))
print('Test Score: %.2f RMSE' % (testScore))

# shift train predictions for plotting
trainPredictPlot = numpy.empty_like(dataset)
trainPredictPlot[:, :] = numpy.nan
trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict

# shift test predictions for plotting
testPredictPlot = numpy.empty_like(dataset)
testPredictPlot[:, :] = numpy.nan
testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict

# plot baseline and predictions
plt.plot(scaler.inverse_transform(dataset))
plt.plot(trainPredictPlot)
plt.plot(testPredictPlot)
plt.show()
```

However, I still found that the training loss is different every time I run cell4. But I found that, as long as I add cell2's contents to cell4, I can get the same training loss curve every time I run cell4. So my question is: to reproduce my results, why should I set the random seed every time I run my model in the cell (cell4), instead of just setting it at the beginning of my Jupyter notebook once and for all?
Reproduce Keras training results in Jupyter Notebook
CC BY-SA 4.0
null
2022-06-29T19:45:18.860
2022-06-29T20:30:55.817
2022-06-29T20:01:20.400
130605
130605
[ "keras", "tensorflow", "jupyter" ]
Did you set the same random seed at each step? The seed works well for the first function, but then it is lost in the next ones because NumPy applies a global seed reset automatically. For example, you can do:

```
def reset_seed(seed_value):
    np.random.RandomState(seed_value)
    tf.compat.v1.set_random_seed(seed_value)
    tf.random.set_seed(seed_value)

for i in range(100):
    reset_seed(seed_value)
    model.fit(trainX, trainY, epochs=1, batch_size=batch_size, verbose=2, shuffle=False)
    model.reset_states()
```

Otherwise, you can also use a function with a random number generator: [https://albertcthomas.github.io/good-practices-random-number-generators/](https://albertcthomas.github.io/good-practices-random-number-generators/)
Why can't I reproduce my results in keras using random seed?
Are you using a CPU or a GPU? If you are using a GPU, there is an additional source of randomness. To confirm this point, you can try to use TensorFlow with CPU only, or disable Cuda DNN, but the model will take twice as long:

```
THEANO_FLAGS="optimizer_excluding=conv_dnn" python your_file.py
THEANO_FLAGS="dnn.conv.algo_bwd_filter=deterministic,dnn.conv.algo_bwd_data=deterministic" python your_file.py
```

Source: [https://github.com/keras-team/keras/issues/2479#issuecomment-213987747](https://github.com/keras-team/keras/issues/2479#issuecomment-213987747)
112287
1
112790
null
2
35
I am training a very deep neural network (Panoptic-DeepLab) with a ResNet34 backbone on Google Colab on the CityScapes dataset for Panoptic Segmentation, and noticed that, with a big crop size, the batch size has to be decreased to 1 image per batch, otherwise CUDA out-of-memory issues start to occur. While I know that this can create skewness in the training and it will likely be very hard to attain good convergence, can I ask this question in general to the experts: how valid is a batch size of 1 generally considered to be in image-based processing? The images in consideration can be considered large (high resolution). The optimizer used is Adam along with a warm-up polynomial learning rate (with base around 0.00005), and 90k iterations. (I understand that it would possibly be a good idea to try out a smaller crop size and bigger batch size, but I would like to know the feedback from the community anyway.)
Is batch size of 1 a valid choice for a very deep neural network with high memory requirement?
CC BY-SA 4.0
null
2022-06-30T23:04:01.623
2022-07-19T10:44:24.063
null
null
133763
[ "deep-learning", "training", "image-segmentation", "convergence", "memory" ]
After more research, I found that a batch size of 1 is quite common in deep-learning image processing use-cases where there are high memory/GPU requirements for model training. In fact, it gives better results at times.
batch_size in neural network
I guess you have a confusion here. The None part represents the number of samples. For example, if you have a neural network with architecture 100-50-10, it means that you have:

- (None, 100): input layer shape
- (100, 50): shape for weights connecting input to hidden layer
- (None, 50): shape for hidden layer, given by (None, 100)*(100, 50) matrix multiplication
- (None, 50): shape after nonlinearity application
- (50, 10): shape for the weight matrix between hidden and output layer
- (None, 10): output layer shape, (None, 50)*(50, 10) matrix multiplication

So if you're feeding a single input sample the shapes would be:

```
(1,100)[Input] => (1,100)(100,50) = (1,50)[Hidden Layer] => (1,50)*(50,10)=(1,10)[Output Layer]
```
112298
1
112299
null
1
171
I have 3 data columns $(X, Y, Z)$ ranging from $(min, max)$. For example, $X = (0, 5)$, $Y=(0, 3)$, $Z=(0, 2)$. By using them I need to create a numpy array in the form of $[(0, 0, 0), (0, 0, 1), (0, 0, 2), (0, 1, 0), (0, 1, 1), (0, 1, 2), (0, 2, 0)...]$ So in total there will be $6 \times 4 \times 3 = 72$ data points. Is there a simple command to do this?
Creating a grid type 3D data array from data points
CC BY-SA 4.0
null
2022-07-01T10:54:35.987
2022-07-01T11:15:38.423
null
null
78791
[ "python", "data", "numpy" ]
You can use [itertools.product](https://docs.python.org/3/library/itertools.html#itertools.product) to get all possible combinations of x, y, and z and then convert the resulting list to a `numpy` array:

```
import itertools
import numpy as np

x = range(0, 5 + 1)
y = range(0, 3 + 1)
z = range(0, 2 + 1)

np.array(list(itertools.product(x, y, z)))
```
Is there any good practice to cluster 3D data array?
Given that almost all clustering algorithms assume data is unordered, reshaping the data into some n*p format is indeed appropriate. If you want to take positions into account, you'll have to encode them as additional features (which can prove to be tricky because of scaling and feature weighting). But don't treat clustering as a black box. You may have some particular goal in mind, and adequately preparing the data is a must for clustering. Consider k-means: it searches for a least-squares approximation. It's your job to prepare the data in a way that least-squares on these features is useful.
112334
1
112337
null
1
68
According to the LSTM design:

[](https://i.stack.imgur.com/2b4Rs.png)

The hidden state (ht) is output twice (1 and 2 in the picture).

- If they are the same, why do we need them twice?
- Is there a different use for each one of them?
- According to nn.lstm there are 3 outputs (output, h_n, c_n). I didn't understand what the difference between output and h_n is. (Don't they need to be the same?)
In LSTM why h_t output twice?
CC BY-SA 4.0
null
2022-07-02T16:25:09.133
2022-07-02T20:46:51.700
null
null
93617
[ "deep-learning", "lstm", "pytorch" ]
ht was initially defined as a differentiable function, whose value is the same in the output and in the next LSTM cell. LSTM uses the previous steps in a sequential way and chooses whether to memorize or forget according to h(t-1) and C(t-1) and the inner weights, to set h(t) and C(t). h(t) is the cell's output, and it is sent to the next cell in order to keep a sequential logic. It is quite complex to explain in a few words, but let's say that the forget and memorize weights are set during the training process thanks to a self-regulated system (named "the constant error carrousel") that takes into account several scenarios and at the same time prevents the neurons from diverging during training. See the main publication: [https://www.researchgate.net/publication/13853244_Long_Short-term_Memory](https://www.researchgate.net/publication/13853244_Long_Short-term_Memory) Note: Google spent 10 years understanding LSTM's publication. It's very complex but very interesting.
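To see the relation between `output` and `h_n` concretely, here is a small PyTorch sketch (not from the publication above, just an illustration): for a single-layer, unidirectional LSTM, the last time step of `output` equals `h_n`, so the "two" outputs are indeed the same value sent to two places.

```
# Minimal sketch: compare the per-step outputs with the final hidden state.
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, num_layers=1, batch_first=True)
x = torch.randn(4, 10, 8)                # (batch, seq_len, features)

output, (h_n, c_n) = lstm(x)
print(output.shape)                      # (4, 10, 16): hidden state at every time step
print(h_n.shape)                         # (1, 4, 16): hidden state of the last step only

# For this single-layer, single-direction case they coincide:
print(torch.allclose(output[:, -1, :], h_n[0]))   # True
```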
h in LSTM increasing in size?
`o(t)` is not the result of concatenation of `h(t-1)` and `x(t)`, but a simple matrix multiplication. See wikipedia for further details: [https://en.wikipedia.org/wiki/Long_short-term_memory](https://en.wikipedia.org/wiki/Long_short-term_memory) [](https://i.stack.imgur.com/rMGfP.jpg)
112354
1
112363
null
9
1130
I am 1 year into ML and have been using Jupyter notebooks to build static models all these days, do some analysis and present my results to the bosses, as it was all POC. Now, we would like to scale the solution to become automatic: be able to feed the real data stream automatically and allow the model to learn automatically without me doing batch-based updates. Since all this is completely new to me and I am not a software developer/engineer, can you help me with the queries below?

a) Are there any online courses/institutes/books for beginners like me?
b) Are there any Python packages that allow for online learning of models and updating the results etc.? Or what is the list of packages that I can refer to for MLOps purposes?
c) I would like to learn via a tutorial on the IRIS dataset etc., where they walk us through how, once the model is built, it is taken to production, handling preprocessing of future data inputs etc.
MLOps for beginner
CC BY-SA 4.0
null
2022-07-03T14:05:15.177
2022-07-04T07:01:11.923
2022-07-04T07:01:11.923
64876
64876
[ "machine-learning", "deep-learning", "neural-network", "predictive-modeling", "mlops" ]
a. For a beginner I would suggest the [fullstackdeeplearning](https://fullstackdeeplearning.com/spring2021/lecture-6/) course; it's a modern overview of tools and best practices for ML in production. As you can see below, there are a lot of moving pieces.

[](https://i.stack.imgur.com/rBfCn.png)

b. What you are asking for can be done with Spark + Airflow. In particular, Airflow (or similar tools such as Luigi) allows you to create very customised data pipelines. The learning curve is a bit steep, but there are good resources available online.

c. The course above should answer your questions, as the data side is not really deep-learning specific, but can apply also to data-science workflows.
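For a feel of what an Airflow pipeline looks like, here is a hypothetical minimal DAG (the task names and the `extract_data`/`retrain_model` functions are placeholders, not something prescribed above):

```
# Hypothetical sketch of a daily retraining pipeline in Airflow 2.x.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_data():
    ...  # pull the latest data from your source

def retrain_model():
    ...  # fit the model on the fresh data and save the artifact

with DAG(
    dag_id="daily_retrain",
    start_date=datetime(2022, 7, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_data", python_callable=extract_data)
    train = PythonOperator(task_id="retrain_model", python_callable=retrain_model)
    extract >> train
```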
Where to learn which ML task is most appropriate for a problem?
This should help you. I have used it many times. It's very straightforward. [https://medium.com/analytics-vidhya/which-machine-learning-algorithm-should-you-use-by-problem-type-a53967326566](https://medium.com/analytics-vidhya/which-machine-learning-algorithm-should-you-use-by-problem-type-a53967326566) [](https://i.stack.imgur.com/rhOzC.png)
112367
1
112443
null
0
18
I'm new to tensorflow and deep-learning. I wish to get a general concept via a beginner's demo, i.e. training an (int-)number counter to indicate the most repeated number in a set (if the most repeated number is not unique, the smallest one is chosen). E.g. if `seed=[0,1,1,1,2,7,5,3]` (int-num-set as input), then `most = 1` (the most repeated num here is `1`, which is repeated 3 times); if `seed = [3,3,6,5,2,2,4,1]`, then `most = 2` (both 2 and 3 are repeated most/twice, so the smaller `2` is the result). Here I didn't use the widely used demos like an image classifier or the MNIST data-set, for a more customized perspective and an easier way to get a data-set; so if this is not an appropriate problem for deep-learning, please help me know it. The following is my code and apparently the result is not as expected. May I have some advice? Like:

- is this kind of problem suitable for deep-learning to solve?
- is the network-struct appropriate for this problem?
- is the input/output data (or data-type) right for the network?

```
import random
import numpy as np

para_col = 16  # each (num-)set contains 16 int-num
para_row = 500 # the data-set contains 500 num-sets for trainning
para_epo = 100 # train 100 epochs

# initial the size of data-set for training
x_train = np.zeros([para_row, para_col], dtype = int)
y_train = np.zeros([para_row, 1], dtype = int)

# generate the data-set by random
for row in range(para_row):
    seed = []
    for col in range(para_col):
        seed.append(random.randint(0,9))
    most = max(set(seed), key = seed.count) # most repeated num in seed(set of 16 int-nums between 0~9)

    # fill in data for trainning-set
    x_train[row] = np.array(seed,dtype = int)
    y_train[row] = most
    # print(str(most) + " @ " + str(seed))

# define and training the network
import tensorflow as tf

# a simple network according to some tutorials
model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(para_col, 1)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10, activation='softmax')
])

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# train the network
model.fit(x_train, y_train, epochs = para_epo)

# test the network
seed_test = [5,1,2,3,4,5,6,7,8,5,5,1,2,3,4,5]
# seed_test = [1,1,1,3,4,5,6,7,8,9,0,1,2,3,4,5]
# seed_test = [9,0,1,9,4,5,6,7,8,9,0,1,2,3,4,5]

x_test = np.zeros([1,para_col],dtype = int)
x_test[0] = np.array(seed_test, dtype = int)
most_test = model.predict_on_batch(x_test)

print(seed_test)
for o in range(10):
    print(str(o) + ": " + str(most_test[0][o]*100))
```

The training result looks converged, according to

```
...
Epoch 97/100
16/16 [==============================] - 0s 982us/step - loss: 0.1100 - accuracy: 0.9900
Epoch 98/100
16/16 [==============================] - 0s 1ms/step - loss: 0.1139 - accuracy: 0.9900
Epoch 99/100
16/16 [==============================] - 0s 967us/step - loss: 0.1017 - accuracy: 0.9860
Epoch 100/100
16/16 [==============================] - 0s 862us/step - loss: 0.1082 - accuracy: 0.9840
```

but the printed output looks unreasonable and random. The following is a result after one of the trainings

```
[5, 1, 2, 3, 4, 5, 6, 7, 8, 5, 5, 1, 2, 3, 4, 5]
0: 0.004467500184546225
1: 0.2172523643821478
2: 2.9886092990636826
3: 1.031165011227131
4: 69.71694827079773
5: 12.506482005119324
6: 1.0543939657509327
7: 0.2930430928245187
8: 8.086799830198288
9: 4.100832715630531
```

Actually `5` is the right answer (repeated five times, the most), but is the printed output indicating that `4` is the answer (at a probability of `69.7%`)?
tensorflow beginner demo, is that possible to train a int-num counter?
CC BY-SA 4.0
null
2022-07-04T04:24:02.757
2022-07-07T06:02:52.283
2022-07-07T06:02:52.283
135707
137705
[ "deep-learning", "tensorflow" ]
This type of problem is not really suited to deep learning. Each node in the neural network expects numeric input, applies a linear transformation to it, followed by a non-linear transformation (the activation function), so your inputs need to be numeric. While your inputs are numbers, they are not being used numerically, as the inputs could be changed to letters or symbols. Also, your network looks like it is overfitting. It is very large for the number of inputs and so is probably just memorising the training data, which is why you appear to get good results on your training data. Tensorflow has a tensorflow-datasets package (installed separately from the main TF package) which provides easy access to a range of datasets (see [https://www.tensorflow.org/datasets](https://www.tensorflow.org/datasets) for details). Maybe look here to find a suitable dataset to use.
Tensorflow 2.0 - Layer with fixed input
This feels like a bit of a hack, but I was able to infer something from an answer on another question: [https://stackoverflow.com/a/46466275/6182971](https://stackoverflow.com/a/46466275/6182971) It works if I change the first line above from: ``` ones = tf.ones(shape=(1,1)) ``` to: ``` ones = layers.Lambda(lambda x: tf.ones(shape=(1,1)))(features_input) ``` Even though the `Lambda` layer is returning a constant, passing in `features_input`, which is the main training data connects the `tf.ones` constant to the network inputs, which seems to be sufficient.
112385
1
112388
null
1
35
"The same value in all the parameters makes all the neurons have the same effect on the input, which causes the gradient with respect to all the weights is the same and, therefore, the parameters always change in the same way." Taken from my course.
What does this statement relative to neural network weight initialization mean?
CC BY-SA 4.0
null
2022-07-04T16:16:58.457
2022-07-04T23:07:01.047
2022-07-04T16:17:29.703
137197
137197
[ "neural-network" ]
Consider the following image of a simple neural network. Note that the network uses a linear activation function and that there are no bias terms (this makes the intuition easier). [](https://i.stack.imgur.com/VYprB.jpg) Each path from the input to the output is as follows $$ f(x) = (x*w1)*w4 = (x*0.5)*0.5 $$ $$ f(x) = (x*w2)*w5 = (x*0.5)*0.5 $$ $$ f(x) = (x*w3)*w6 = (x*0.5)*0.5 $$ When you perform gradient descent, the change in the weights will always be the same as each path is identical.
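A tiny NumPy sketch of this symmetry effect (hypothetical numbers, mirroring the figure's identical weights of 0.5 with linear activations and no biases): with equal initial weights, every hidden unit receives the same gradient, so the units stay identical after every update.

```
# Minimal sketch: symmetric initialization gives identical gradients.
import numpy as np

x = np.array([[1.0]])            # single input
y = np.array([[2.0]])            # target
w1 = np.full((1, 3), 0.5)        # input -> 3 hidden units, all weights equal
w2 = np.full((3, 1), 0.5)        # hidden -> output, all weights equal

# forward pass (linear activations, no biases, as in the figure)
h = x @ w1                       # hidden activations: all identical
y_hat = h @ w2

# backward pass for squared error 0.5 * (y_hat - y)**2
d_out = y_hat - y
grad_w2 = h.T @ d_out            # same value for every hidden->output weight
grad_w1 = x.T @ (d_out @ w2.T)   # same value for every input->hidden weight

print(grad_w1)   # identical entries -> the units can never differentiate
print(grad_w2)
```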
What is the purpose of setting an initial weight on deep learning model?
This is greatly addressed in the [Stanford CS class CS231n](http://cs231n.github.io/neural-networks-2/#init): > Pitfall: all zero initialization. Lets start with what we should not do. Note that we do not know what the final value of every weight should be in the trained network, but with proper data normalization it is reasonable to assume that approximately half of the weights will be positive and half of them will be negative. A reasonable-sounding idea then might be to set all the initial weights to zero, which we expect to be the “best guess” in expectation. This turns out to be a mistake, because if every neuron in the network computes the same output, then they will also all compute the same gradients during backpropagation and undergo the exact same parameter updates. In other words, there is no source of asymmetry between neurons if their weights are initialized to be the same. There are several weight initialization strategies; each one is best suited for a type of activation function. For instance, [Glorot's initialization](http://www.jmlr.org/proceedings/papers/v9/glorot10a/glorot10a.pdf) aims at not saturating sigmoid activations, while [He's initialization](http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf) is meant for Rectified Linear Units (ReLUs).
112485
1
112497
null
1
225
Pointwise Mutual Information or PMI for short is given as ![Formula1](https://latex.codecogs.com/svg.image?%5Cfrac%7BP(bigram)%7D%7BP(1st%20Word)%20*%20P(2nd%20Word)%7D) Which is the same as: ![Formula2](https://latex.codecogs.com/svg.image?log_%7B2%7D%5Cfrac%7B%5Cfrac%7BBigramOccurrences%7D%7BN%7D%7D%7B%5Cfrac%7B1stWordOccurrences%7D%7BN%7D%20*%20%5Cfrac%7B2ndWordOccurrences%7D%7BN%7D%7D) Where BigramOccurrences is number of times bigram appears as feature, 1stWordOccurrences is number of times 1st word in bigram appears as feature and 2ndWordOccurrences is number of times 2nd word from the bigram appears as feature. Finally N is given as number of total words. We can tweak the following formula a bit and get the following: ![Formula3](https://latex.codecogs.com/svg.image?log_%7B2%7D%5Cfrac%7BBigramOccurrences*%20N%7D%7B1stWordOccurrences%20*%202ndWordOccurrences%7D) Now the part that confuses me a bit is the N in the formula. From what I understand it should be a total number of feature occurrences, even though it is described as total number of words. So essentially I wouldn't count total number of words in dataset (as that after some preprocessing doesn't seem like it makes sense to me), but rather I should count the total number of times all bigrams that are features have appeared as well as single words, is this correct? Finally, one other thing that confuses me a bit is when I work with more than bigrams, so for example trigrams are also part of features. I would then, when calculating PMI for a specific bigram, not consider count of trigrams for N in the given formula? Vice-versa when calculating PMI for a single trigram, the N wouldn't account for number of bigrams, is this correct? If I misunderstood something about formula, please let me know, as the resources I found online don't make it really clear to me.
How to calculate Pointwise Mutual Information (PMI) when working with multiple ngrams
CC BY-SA 4.0
null
2022-07-07T13:23:43.810
2022-07-08T08:04:15.070
null
null
137836
[ "classification", "nlp", "text" ]
The application of PMI to text is not so straightforward, there can be different methods. PMI is originally defined for a standard sample space of joint events, i.e. a set of instances which are either A and B, A and not B, not A and B or not A and not B. In this setting $N$ is the size of the space, of course. So the question when dealing with text is: what is the sample space? - Sometimes it makes sense to consider specific units of text as instances, for example small documents (e.g. tweets) or sentences. In this option the different cases are whether word A and B appear at least once individually/jointly in the document, and then we count the number of documents as frequency. $N$ is the total number of documents, of course. - Sometimes there's no natural unit to consider, only the full text. In this case the sample space is defined by a moving windows of length $m$ in the text, i.e. the window starting at position 1, 2, 3, etc. Every window is a 'document' which can have combination of [not] A/B.
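A small sketch of the second option (the corpus and window length are made up for illustration): every window of length $m$ is treated as a "document", and $N$ is simply the number of windows.

```
# Hypothetical sketch: PMI of a word pair using a sliding window as the sample space.
import math

tokens = "the cat sat on the mat the cat slept".split()
m = 3                                            # window length (an arbitrary choice)
windows = [tokens[i:i + m] for i in range(len(tokens) - m + 1)]
N = len(windows)                                 # size of the sample space

def pmi(a, b):
    n_a = sum(1 for w in windows if a in w)
    n_b = sum(1 for w in windows if b in w)
    n_ab = sum(1 for w in windows if a in w and b in w)
    if n_ab == 0:
        return float("-inf")                     # or apply smoothing here
    return math.log2((n_ab * N) / (n_a * n_b))

print(pmi("the", "cat"))
```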
How to feed data for ngram model?
You should definitely use a sliding window. An n-gram language model represents the probabilities for all the n-grams. If it doesn't see a particular n-gram in the training data, for example "sliding cat", it will assume that this n-gram has probability zero (actually zero probabilities are usually replaced with very low probability by smoothing, in order to account for out-of-vocabulary n-grams). This would result in a zero probability for a sentence which was actually in the training case (or a very low probability with smoothing). Also it's common to use "padding" at the beginning and end of every sentence, like this: ``` #SENT_START# The The sliding sliding cat cat is is not ... to dance dance #SENT_END# ``` This gives the model indications about the words more likely to be at the beginning or end (it also balances the number of n-grams by word in a sentence: exactly $n$ even for the first/last word).
112492
1
112493
null
0
24
Suppose I have 2 sentences:

"My name is Alex"
"Alex is my name"

If I am using an RNN, after processing both the sentences, will the final output vector be the same? Because an RNN basically shares the weights, and both sentences have the same number of words, shouldn't the final output after processing the last word in both sentences be the same? I am well aware that when processing each word in an RNN, the next word will be based on the current and previously processed words. But what about the full processing of both these sentences with the same words? Will they have the same final output?
RNN basic doubt
CC BY-SA 4.0
null
2022-07-07T18:43:48.977
2022-07-07T20:32:00.347
null
null
96653
[ "machine-learning", "neural-network", "nlp", "lstm", "rnn" ]
No, they will not have the same final output. Although the weights of the RNN are the same for each time step and the words are the same, their order is not and therefore the inputs and hidden states received at each time step will be different, and so will their outputs. You said it yourself: `The next word will be based on the current and previous processed words.` . The next and previous words for each time step are not the same in two sentences with the same words but in different order.
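A quick PyTorch sketch (not from the answer above, the token ids are made up) showing that the same words in a different order give a different final hidden state:

```
# Minimal sketch: an RNN is order-sensitive even when the tokens are identical.
import torch
import torch.nn as nn

torch.manual_seed(0)
emb = nn.Embedding(10, 8)        # toy vocabulary of 10 "words"
rnn = nn.RNN(input_size=8, hidden_size=16, batch_first=True)

s1 = torch.tensor([[1, 2, 3, 4]])   # "my name is alex" (toy ids)
s2 = torch.tensor([[4, 3, 1, 2]])   # "alex is my name" (same ids, different order)

_, h1 = rnn(emb(s1))
_, h2 = rnn(emb(s2))

print(torch.allclose(h1, h2))    # False: the final states differ
```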
Transformers vs RNN basic doubt
There are multiple concepts mixed in your question. - Contextual vs. non-contextual word embeddings: word2vec is a non-contextual approach to obtaining token embeddings. This means that a specific word has the same vector representation regardless of the other words in the sentence it appears. BERT, on the other hand, can be used to obtain contextual representations, because the representations of a token depend directly on the other words in the sentence. - Contextual word embeddings with LSTMs. You can obtain contextual word embeddings with LSTMs. Actually, BERT has 2 predecessors that are just that. These models are ULMFit and ELMo. Both are bidirectional LSTMs. The fact that they are bidirectional is important here, otherwise, the representations would only be contextual for the words to the right of each word. - Using BERT or LSTMs for classification and other tasks. Both BERT and LSTMs are suitable to perform text classification. In the case of BERT, the sentence-level representation is obtained by prefixing the sentence with the special token [CLS] and taking the representations obtained at that position as sentence representation (this is trained with the next-sentence prediction task in the BERT training procedure). In the case of LSTMs, the sentence-level representation is usually obtained either as the last output of a unidirectional LSTM or by global pooling over all the representations of a bidirectional LSTM.
112507
1
112509
null
1
74
When I extract the features from my CNN, they don't look like this:

![This](https://i.stack.imgur.com/yQb1d.png)

And those pictures are not just a representation. From [this article](https://becominghuman.ai/what-exactly-does-cnn-see-4d436d8e6e52) it can be seen that these features are actual extracted features from a real CNN. However, the features that I extracted look exactly like this:

![This](https://i.stack.imgur.com/QMa3O.png)

Why is this representation not like the first picture? Is it the correct one?
Which representation of CNN feature maps is correct?
CC BY-SA 4.0
null
2022-07-08T10:41:39.280
2022-07-08T11:54:43.650
2022-07-08T11:26:51.817
70391
133184
[ "deep-learning", "image-classification", "convolutional-neural-network" ]
I don't think you are comparing like with like. In the left-most panel of the first image, you are seeing the weights in each kernel (one channel from one convolutional layer). These are yellow in the figure below. The size of the kernels is determined by the hyperparameters of the network; they might have a size like 3 × 3 or 31 × 31. I'm not 100% certain about the other two panels; the right-most looks more like convolutional products than filters. In the second, you are certainly looking at the activations when given a particular input example. These images are the result of convolving the input with the kernels; this part is pink in the figure below. Their size depends on the input image size, the kernel size, and the convolution parameters. From the article you linked to: [](https://i.stack.imgur.com/gWvJS.png)
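To make the distinction concrete, here is a hedged Keras sketch (it assumes an already-trained CNN `model` and a single input image `img`; these names are placeholders) that pulls out both things: the learned kernels (the weights) and the feature maps (the activations for that specific input):

```
# Hypothetical sketch: kernels vs. feature maps of one convolutional layer.
import numpy as np
import tensorflow as tf

conv_layer = next(l for l in model.layers if isinstance(l, tf.keras.layers.Conv2D))

# 1) The kernels: fixed after training, independent of any input
kernels, biases = conv_layer.get_weights()
print(kernels.shape)        # e.g. (3, 3, in_channels, n_filters)

# 2) The feature maps: the result of convolving a *specific* input with those kernels
activation_model = tf.keras.Model(inputs=model.inputs, outputs=conv_layer.output)
feature_maps = activation_model.predict(np.expand_dims(img, axis=0))
print(feature_maps.shape)   # e.g. (1, out_height, out_width, n_filters)
```

Plotting `kernels` gives images like the first figure in the question; plotting `feature_maps` gives images like the second one.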
How do CNNs find different feature maps?
You shouldn't initialize all the weights with zero, use random initialization instead. Otherwise you can't break the symmetry and all the outputs in the network would be same.
112536
1
112541
null
2
65
First I tried creating the training/testing datasets using sklearn's train_test_split function, like the following,

```
x_train, x_test, y_train, y_test = train_test_split(x_scaled, y, test_size=0.5, random_state = 1)
```

And on the second test I tried splitting the two datasets in half without any kind of randomization...

```
x_train, x_test, y_train, y_test = x_scaled[:int(total_rows/2)], x_scaled[int(total_rows/2):], y[:int(total_rows/2)], y[int(total_rows/2):]
```

On the first test, the model accuracy was like the following,

```
loss: 0.1951 - accuracy: 0.7057 - val_loss: 0.2101 - val_accuracy: 0.6540
```

and the classification report was like this,

```
              precision    recall  f1-score   support

         0.0       0.55      0.76      0.64       864
         1.0       0.78      0.56      0.65      1263

    accuracy                           0.65      2127
   macro avg       0.66      0.66      0.65      2127
weighted avg       0.68      0.65      0.65      2127
```

On the second test, when I used the split datasets, the model accuracy was pretty good, but the test accuracy was below average...

```
loss: 0.1558 - accuracy: 0.7875 - val_loss: 0.5014 - val_accuracy: 0.5026
```

Classification report,

```
              precision    recall  f1-score   support

         0.0       0.47      0.80      0.59       965
         1.0       0.60      0.26      0.36      1162

    accuracy                           0.50      2127
   macro avg       0.54      0.53      0.48      2127
weighted avg       0.54      0.50      0.47      2127
```

I understand the second model is overfitting, and that's why I'm getting poor test results... But in the real world the structure is going to be kind of the same... Like, I'll have to use the model on fresh data while training it on older data... (The rows are sorted or indexed by datetime in the datasets.) I'm kinda new to machine learning, so I'm a little bit confused about this... Does the second test mean the model is not going to perform as well as in the first test in the real world? Or what am I doing wrong?
Low validation accuracy when not using shuffled datasets
CC BY-SA 4.0
null
2022-07-09T16:38:39.943
2022-07-10T01:12:01.977
null
null
137896
[ "machine-learning", "deep-learning", "neural-network" ]
Normally overfitted models will generalize poorly, because their parameters were estimated to follow the patterns found in your train set only. But why? The parameters/weights are estimated using the gradient; if you don't know what that is, [3Blue1Brown](https://www.youtube.com/watch?v=IHZwWFHWa-w) has a great video about it. Think of it as a compass that points in the direction where your loss function converges to 0. That direction is improved with the different patterns that your model finds in the data. However, you didn't shuffle your data, so some repeated patterns can show up in a sequence (e.g. the first 100 images of the training set are cats) and the gradient will follow only those patterns until it finds a completely different pattern and realizes: "Wait, I'm in the wrong direction!" - So now it needs to recalculate the weights to follow the new pattern, and that can happen near the end of the training loop, meaning that your model will not have time to learn the new pattern. Or it can't even find the new patterns, because those were only in the validation-set data when you truncated your inputs (x_train, x_test). So your model will be very good at classifying your training set data, but only that data. When you shuffle, you show patterns in a random way, so the model can update its weights (learning those patterns) in time and slowly converge to a minimum of your loss function. There are other cases where a model can overfit; a small dataset is one of those cases. If this was confusing, tell me and I'll try to explain it in a better way...
Very low accuracy of new data compared to validation data
You are experiencing Data Leakage. In a comment, you explained that you shuffle your data before splitting into train/validation. For each validation point, you likely are showing the model data that is nearby temporally both before and after the time of the validation point. This is information that the model can not possibly hope to have when running in real time. To alleviate this, I would keep the data in the correct time order, and instead take the validation data as a contiguous chunk of time. If you want to be the most careful, you could throw data away in a small window around the validation data, so that the training data won’t contain information about the edges of the validation data. This appears to be a good resource, but I have not read it too thoroughly: [https://www.kaggle.com/dansbecker/data-leakage](https://www.kaggle.com/dansbecker/data-leakage)
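A minimal sketch of such a time-ordered split with scikit-learn (the gap size is an arbitrary example value, and `X`, `y`, `model` are placeholders for your own data and estimator):

```
# Hypothetical sketch: time-ordered cross-validation with a gap between
# the training and validation chunks, so no information leaks across the boundary.
from sklearn.model_selection import TimeSeriesSplit

tscv = TimeSeriesSplit(n_splits=5, gap=24)   # e.g. drop 24 samples around each boundary

for train_idx, val_idx in tscv.split(X):     # X is assumed to be sorted by time
    model.fit(X[train_idx], y[train_idx])
    print(model.score(X[val_idx], y[val_idx]))
```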
112592
1
112593
null
0
64
In a regression problem that I'm currently working on, it seems that my model is doing well on higher values but significantly worse on lower values (e.g. values from 100,000,000 to 105,000,000 are being accurately predicted/ having lower error scores while values from 1,000,000 to 5,000,000 don't). One approach that I am planning to test out is using multiple regression models, with one trained on the lower values and one on the higher values. I've seen scikit-learn's VotingRegressor, but if I understand correctly it seems that in predicting the value it'll only average the result from the estimators. Other than using average values from the estimators, are there any other approaches to do the voting from multiple regression models? Since classification problems might use soft/hard voting, wondering if there are alternative approaches in regression problems as well.
Voting Regression models, other approaches than averaging the results from each estimators
CC BY-SA 4.0
null
2022-07-12T13:49:35.740
2022-07-12T14:46:14.030
null
null
137790
[ "python", "scikit-learn", "regression", "ensemble-modeling" ]
You may try a stacking or blending approach (such as a `StackingRegressor()` in the recent sklearn versions), featuring a simple meta-model taking your initial models' predictions as features.
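A minimal `StackingRegressor` sketch (the base models, their names and the data are illustrative placeholders): the meta-model learns how to weight each base model's prediction instead of plainly averaging them.

```
# Hypothetical sketch: stacking two regressors with a simple linear meta-model.
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge

stack = StackingRegressor(
    estimators=[
        ("low_range_model", Ridge()),
        ("high_range_model", RandomForestRegressor(n_estimators=200)),
    ],
    final_estimator=Ridge(),     # meta-model combining the base predictions
    cv=5,
)

stack.fit(X_train, y_train)      # X_train / y_train assumed to exist
print(stack.predict(X_test[:5]))
```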
python: Are there are some class like Voting classifier for three or four regression model
There are many ways in which you could create an ensemble from your base models. Some resources to take a look at are the following: Firstly, I would point your attention towards [this question](https://stats.stackexchange.com/questions/139042/ensemble-of-different-kinds-of-regressors-using-scikit-learn-or-any-other-pytho), that has a lot of answers and interesting ideas. Regarding implementation: I have used in the past the [brew](https://github.com/viisar/brew) module that has extensive implementations of different stacking-blending techniques etc, while using different combination rules and diversity metrics. It is also compatible with sklearn models. Another interesting project is [xcessiv](https://github.com/reiinakano/xcessiv), which I haven't used myself but provides a helpful gui for managing your ensemble's details. Finally, regarding theoretical details, I would suggest you take a look into [this survey](http://www.leg.ufpr.br/~eder/Artigos/Wather/Moreira_2007.pdf) that focuses on ensembles in regression tasks.
112611
1
112687
null
2
167
As mentioned in the question, it is easy to interpret the meaning of features in algorithms like simple decision trees. But in the case of ensemble methods that are known to average/modify features, are these results still sensibly interpretable/usable to argue about the feature(s)?
Are feature importances of ensemble methods sensible interpretable?
CC BY-SA 4.0
null
2022-07-13T13:37:55.597
2022-07-16T11:42:00.940
2022-07-15T14:58:35.010
86339
138035
[ "machine-learning", "random-forest", "decision-trees", "xgboost", "ensemble-modeling" ]
During recent years, the most successful feature attribution method from has been the [SHAP](https://github.com/slundberg/shap) values. > SHAP (SHapley Additive exPlanations) is a game theoretic approach to explain the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions One of the most famous implementations is [Tree Explainer](https://shap-lrjball.readthedocs.io/en/docs_update/generated/shap.TreeExplainer.html) > Tree SHAP is a fast and exact method to estimate SHAP values for tree models and ensembles of trees The paper originally appeared in [Nature](https://www.nature.com/articles/s42256-019-0138-9) ( here in Arxiv): Explainable AI for Trees: From Local Explanations to Global Understanding. In case you want to extend and see more related literature you can have a look at the appendix "Methods 5 Previous Global Explanation Methods for Tree". They provide the previous state of the art of feature-relevance methods. The exact algorithm that answers your question is in the Method 10, it explains how TreeShap is computed.
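A minimal SHAP sketch for a tree ensemble (the names `model` and `X` are placeholders for a fitted tree-based model and its feature matrix):

```
# Hypothetical sketch: per-feature attributions for a fitted tree ensemble.
import shap

explainer = shap.TreeExplainer(model)        # e.g. a fitted XGBoost / random forest model
shap_values = explainer.shap_values(X)       # one attribution per feature per sample

# Global importance view (mean |SHAP value| per feature)
shap.summary_plot(shap_values, X, plot_type="bar")
```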
Why are ensembles so unreasonably effective
For a specific model you feed it data, choose the features, choose hyperparameters, etcetera. Compared to the reality, it makes three types of mistakes:

- Bias (due to too low model complexity, a sampling bias in your data)
- Variance (due to noise in your data, overfitting of your data)
- Randomness of the reality you are trying to predict (or lack of predictive features in your dataset)

Ensembles average out a number of these models. The bias due to sampling bias will not be fixed for obvious reasons; it can fix some of the model complexity bias. However, the variance mistakes that are made are very different across your different models. Especially low-correlated models make very different mistakes in this area; certain models perform well in certain parts of your feature space. By averaging out these models you reduce this variance quite a bit. This is why ensembles shine.
112640
1
112647
null
2
85
Suppose I have a list of, say, 100 countries, as well as their respective historical sovereign credit ratings as such ``` 2020 2019 ... 2000 Country 1 AAA A- ... BBB Country 2 CCC B- ... BBB ................................... ``` I am interested in clustering these based on their historical credit ratings. For instance, I expect two countries that have consistently rated highly over the years (say ratings between A- and AAA) would cluster together, countries with varying degrees of ratings (from low to high) over the years 2000 and 2020 would also cluster together, and countries that have consistently rated poorly also. I have looked at a few suggestions online for clustering categorical data based on multiple variables, but usually they are not for ordered categorical data. For instance, the dissimilarity matrix generated by Kmodes, is predicated on the two categories being identical. However, in ordered categorical data, a rating of BBB+ and BBB are incredibly close to one another and thus must be clustered together. What would be a good solution to such clustering exercise for the countries given the example above?
Clustering ordered categorical data
CC BY-SA 4.0
null
2022-07-14T11:31:56.617
2022-07-14T14:15:43.167
null
null
138089
[ "python", "clustering", "unsupervised-learning", "categorical-data" ]
You can have categories that contain a logic that could be a numeric value, and it seems to be your case. That's why you should consider those ratings from a mathematical point of view and assign a numerical scale that would be comprehensible to your algorithm. For instance:

```
AAA+ => 1
AAA => 2
AAA- => 3
AA+ => 4
AA => 5
AA- => 6
```

etc. In this way, countries rated AAA+ in 2022 and AA- in 2021 should be close to countries rated AAA in 2022 and AA in 2021, because [1, 6] is similar to [2, 5] from a numeric point of view. However, if you consider those ratings as separate categories like this:

```
AAA+ => col_AAA+= True, col_AAA=False, col_AAA-=False, col_AA+=False,...
AAA => col_AAA+= False, col_AAA=True, col_AAA-=False, col_AA+=False,...
```

etc., you would have more data to deal with and the algorithm would not see any ranking between columns, and hence would not make good clustering. I recommend using numeric values for any feature that can have a scale and using categories just in case of independent ones (for instance, sea_access=Yes/No, or opec_member=Yes/No). In some cases, you can also implement an intermediate solution like this one:

```
AAA+ => col_A= 1, col_B=0, col_C-=0, ...
AAA => col_A= 2, col_B=0, col_C-=0, ...
...
BBB+ => col_A= 0, col_B=1, col_C-=0, ...
BBB => col_A= 0, col_B=2, col_C=0, ...
```

etc. It could be interesting if you want to make a clear difference between rating groups (ex: going from AAA to A+ is not as bad as going from A- to BBB+).

Note: clustering could be difficult if you consider too many years, even with algorithms like UMAP or t-SNE. That's why a good option is to consider a few years for a beginning or simplify with smoothing algorithms.
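A minimal sketch of this ordinal mapping followed by clustering (the rating scale, toy data and number of clusters are illustrative placeholders):

```
# Hypothetical sketch: map ordered ratings to integers, then cluster the histories.
import pandas as pd
from sklearn.cluster import KMeans

rating_scale = {"AAA": 1, "AA+": 2, "AA": 3, "AA-": 4, "A+": 5, "A": 6, "A-": 7,
                "BBB+": 8, "BBB": 9, "BB": 10, "B-": 11, "CCC": 12}

df = pd.DataFrame(
    {"2020": ["AAA", "CCC", "AA"], "2019": ["A-", "B-", "AA-"], "2000": ["BBB", "BBB", "A"]},
    index=["Country 1", "Country 2", "Country 3"],
)

numeric = df.replace(rating_scale)           # each year becomes an ordinal feature
labels = KMeans(n_clusters=2, random_state=0, n_init=10).fit_predict(numeric)
print(dict(zip(numeric.index, labels)))
```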
Clustering on categorical attributes
Here is a nice implementation for mixed-type data in R: [https://dpmartin42.github.io/posts/r/cluster-mixed-types](https://dpmartin42.github.io/posts/r/cluster-mixed-types)

This question right here: [K-Means clustering for mixed numeric and categorical data](https://datascience.stackexchange.com/questions/22/k-means-clustering-for-mixed-numeric-and-categorical-data/24#24), and a discussion thread on Kaggle: [https://www.kaggle.com/general/19741](https://www.kaggle.com/general/19741)

There are ways to either map your categorical data to a numeric type and then go about the business as usual, or choose similarity measures which work for categorical data types; in that case you have options to choose from, such as counting frequencies etc.