Dataset schema (column: type, observed lengths/values):
- Id: string, length 2-6
- PostTypeId: string, 1 distinct value
- AcceptedAnswerId: string, length 2-6
- ParentId: string, 0 values (always null)
- Score: string, length 1-3
- ViewCount: string, length 1-6
- Body: string, length 34-27.1k
- Title: string, length 15-150
- ContentLicense: string, 2 distinct values
- FavoriteCount: string, 1 distinct value
- CreationDate: string, length 23
- LastActivityDate: string, length 23
- LastEditDate: string, length 23
- LastEditorUserId: string, length 2-6
- OwnerUserId: string, length 2-6
- Tags: list, length 1-5
- Answer: string, length 32-27.2k
- SimilarQuestion: string, length 15-150
- SimilarQuestionAnswer: string, length 44-22.3k
117683
1
117698
null
0
54
I have constructed a natural language processing (NLP) model with the aim of identifying technology keywords within text. The model is trained on a large dataset that contains over 400,000 phrases and has been annotated with approximately 1000 technology keywords; only the keywords that I provided in the dataset can be identified. The annotations within the training dataset include the specific locations of the technology keywords in the phrases; for example, in the first entry below, the technology keyword "php" is located at positions 0-3 and 43-46. ``` TrainingData = [ ('php search upperlower case mix word string php , regex , search , pregmatch , strreplace', {'entities': [[0, 3, 'php'], [43, 46, 'php']]}), ('create access global variables groovy groovy', {'entities': [[31, 37, 'groovy'], [38, 44, 'groovy']]}), ('asp.net mvc 2.0 application fail error parameterless constructor define object asp.net , asp.netmvc , asp.netmvc2', {'entities': [[0, 7, 'asp.net'], [79, 86, 'asp.net'], [89, 99, 'asp.netmvc']]}), ('question regular servlets within gwt work dev mode work deployment tomcat java , gwt , servlets , fileupload', {'entities': [[74, 78, 'java']]}), ('display type ive create use create type postgresql database , postgresql , type , export', {'entities': [[40, 50, 'postgresql'], [62, 72, 'postgresql']]}), ('compare date specific one datetime string twig php , twig', {'entities': [[42, 46, 'twig'], [47, 50, 'php'], [53, 57, 'twig']]}), ('ie display simple js alert javascript , internetexplorer7 , parallel', {'entities': [[27, 37, 'javascript']]}), ('differences basehttpserver wsgiref.simple_server python , basehttpserver , wsgiref', {'entities': [[49, 55, 'python']]}) ] ``` An alternative, very simple approach that I considered is to manually search for the technology keywords within the text, using a set of predefined keywords and checking the text in a loop. However, I am wondering which approach would be more efficient and effective. Given the large dataset and the vast number of keywords, which of these two methods would yield the best results and be more appropriate?
NLP vs keyword search: which one is best?
CC BY-SA 4.0
null
2023-01-11T02:15:48.517
2023-01-11T14:58:21.087
null
null
144729
[ "nlp", "predictive-modeling", "text-mining", "text-classification", "named-entity-recognition" ]
To add to @noe's answer, you will face more issues when working with real data. Here are some examples: - You might find sticky words, e.g. python,django,fastapi. - You might find alternative forms of a word, e.g. Python3.7. - Sentences might be longer in real life than in your training data. It will also depend on how you prepare the sentences, the size of the data you're extracting skills from, and your resources (for instance, whether you have a GPU or only a CPU, how much RAM, etc.).
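To make the keyword-search baseline concrete, here is a minimal Python sketch (my own illustration, not from the original answer); the keyword set and the example sentence are hypothetical. Splitting on commas and slashes handles the "sticky word" case mentioned above, while variants like "Python3.7" would still need extra normalization:

```
import re

# Hypothetical keyword set; in practice this would be the ~1000 annotated technology keywords.
TECH_KEYWORDS = {"php", "python", "django", "fastapi", "java", "groovy", "postgresql"}

def find_tech_keywords(text: str) -> set:
    """Return the technology keywords found in `text` (case-insensitive).

    Splitting on commas, slashes and whitespace handles "sticky" lists
    such as "python,django,fastapi"; variants like "Python3.7" are not covered.
    """
    tokens = re.split(r"[,\s/]+", text.lower())
    return {tok for tok in tokens if tok in TECH_KEYWORDS}

print(find_tech_keywords("Deployed a FastAPI service with python,django,fastapi on PostgreSQL"))
# {'fastapi', 'python', 'django', 'postgresql'}
```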
What is the difference between NLP and text mining?
I agree with Sean's answer. [NLP](https://en.wikipedia.org/wiki/Natural_language_processing) and [text mining](https://en.wikipedia.org/wiki/Text_mining) are usually used for different goals. Also, there is indeed an overlap, and both definitions are vague. Other than the difference in goal, there is a difference in methods. Text mining techniques are usually shallow and do not consider the text structure. Usually, text mining will use bag-of-words, n-grams and possibly stemming on top of that. NLP methods usually involve the text structure: you can find there sentence splitting, part-of-speech tagging and parse tree construction. Also, NLP methods provide several techniques to capture context and meaning from text. A typical text mining method will consider the following sentences to indicate happiness, while typical NLP methods detect that they do not: - I am not happy - I will be happy when it will rain - If it will rain, I'll be happy. - She asked whether I am happy - Are you happy?
117690
1
117696
null
0
48
In DBSCAN: - A core point is a point which has at least "MinPts" points inside its Epsilon radius. - A border point is a point inside the Epsilon radius of a core point, but it has a number of points inside its own Epsilon radius inferior to "MinPts" so it isn't a core point. - A noise point is a point which is neither a core point, nor a border point. Given these definitions, I conclude that the distance between a Noise Point and Border Point can be less than Epsilon: a noise point could be a noise point because it is inside the Epsilon radius of a border point, but it doesn't have enough neighbors and at the same time it's not in the neighborhood of a core point. Is this reasoning correct? Thank you!
In DBSCAN, can the distance between a Noise Point and Border Point be less than Epsilon?
CC BY-SA 4.0
null
2023-01-11T12:31:08.940
2023-01-11T14:54:25.947
null
null
142568
[ "classification", "clustering", "algorithms", "distance", "dbscan" ]
Yes, you are correct. I just put together an example; check this image [](https://i.stack.imgur.com/nqCjl.png)
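As an illustration of the asker's reasoning (my own sketch, not from the original answer), here is a small hand-crafted scikit-learn example: the last point is within eps of a border point but is still labelled as noise. All coordinates and parameters are made up for the demonstration:

```
import numpy as np
from sklearn.cluster import DBSCAN

# - four points around the origin are core points of one cluster,
# - (1.2, 0) is a border point (within eps of a core point, but not core itself),
# - (2.0, 0) is a noise point that lies only 0.8 < eps away from the border point.
X = np.array([
    [0.0, 0.0], [0.3, 0.0], [-0.3, 0.0], [0.0, 0.3],  # core points
    [1.2, 0.0],                                        # border point
    [2.0, 0.0],                                        # noise point
])

db = DBSCAN(eps=1.0, min_samples=4).fit(X)
print(db.labels_)                    # expected: [0 0 0 0 0 -1] -> last point is noise
print(np.linalg.norm(X[4] - X[5]))   # 0.8, i.e. less than eps
```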
Distance between any two points after DBSCAN
I think that you are missing the second parameter: `min_samples=200`. DBSCAN not only detects outliers; it mainly detects so-called noise. When we do clustering via DBSCAN, we do not look only at the distance `eps=0.6`, but we also check whether the cluster candidate is populated with more than `min_samples=200` objects. You don't see "outliers"; you see all the objects that do not form a cluster. That is why an object can have a neighbor in a ball with radius 0.6 but still be assigned to the -1 "noise" cluster.
117695
1
117697
null
0
27
Two classifiers need to be trained simultaneously, and I have three losses, as shown in the figure. Classifiers 1 and 2 will be updated by losses 1 and 2. Furthermore, loss 3 should update the two classifiers concurrently. Here's what I did: ``` loss1.backward() loss2.backward() loss3.backward() ``` Is this correct? [](https://i.stack.imgur.com/PQWtl.png)
What is the best way to use three different losses on two classifiers?
CC BY-SA 4.0
null
2023-01-11T13:30:24.333
2023-01-11T14:54:28.077
2023-01-11T14:49:10.617
141608
141608
[ "machine-learning", "pytorch", "optimization" ]
You need to create another loss $loss_T$ that combines losses $loss_1$, $loss_2$ and $loss_3$, and optimize only $loss_T$. The typical approach is to define $loss_T = \lambda_1 \cdot loss_1 + \lambda_2 \cdot loss_2 + \lambda_3 \cdot loss_3$, where $\lambda_1$, $\lambda_2$ and $\lambda_3$ are new hyperparameters (note that you can remove $\lambda_3$ to simplify and still keep the same expressivity). To choose values for $\lambda_i$, you may simply assign $\lambda_i = 1$, or you can grid-search to obtain better values. Normally, you want $\lambda_i$ to compensate for the differences in the gradient norms of $loss_i$, to avoid one of them overshadowing the rest. For that, I suggest you monitor the norm of the gradients of $loss_i$ in a training rehearsal to understand the value ranges that are appropriate for each $\lambda_i$.
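A minimal PyTorch sketch of the combined-loss idea (my own illustration; the models, data shapes and the third loss are placeholders, since the actual losses are only shown in the asker's figure):

```
import torch
import torch.nn as nn

# Toy setup, just to show optimizing a single combined loss.
x = torch.randn(8, 10)
y1 = torch.randint(0, 3, (8,))
y2 = torch.randint(0, 3, (8,))

clf1 = nn.Linear(10, 3)
clf2 = nn.Linear(10, 3)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(list(clf1.parameters()) + list(clf2.parameters()), lr=1e-3)

lambda1, lambda2, lambda3 = 1.0, 1.0, 1.0  # hyperparameters to tune

out1, out2 = clf1(x), clf2(x)
loss1 = criterion(out1, y1)
loss2 = criterion(out2, y2)
loss3 = ((out1.softmax(dim=1) - out2.softmax(dim=1)) ** 2).mean()  # placeholder for the third loss

loss_T = lambda1 * loss1 + lambda2 * loss2 + lambda3 * loss3

optimizer.zero_grad()
loss_T.backward()   # a single backward pass updates both classifiers
optimizer.step()
```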
What loss function should I use if I have been working on a classification problem which involves both multi-label and multi-class labels?
The labels of the data are not mutually exclusive, so you can't say this is a one-vs-all problem, because more than one entry may be one in the output vector. Moreover, if the scene must contain either an apple or a pear, this can be considered an exhaustive problem, which means one of them should occur for each input. My opinion is that for this problem you don't have to make a new cost function. For the mutually exclusive part, as you have correctly stated, and for the second part of your vector, the well-known `cross-entropy` cost function will perform fine. I guess the problem is something else. For problems with mutually exclusive classes, we use a [softmax layer](https://datascience.stackexchange.com/q/25315/28175) as the last layer of neural nets, while for cases in which classes are not mutually exclusive, you can use `sigmoid` as the activation function. In your case, where you have a combination of them, I suggest an alternative approach: change your mutually exclusive part to a binary output, meaning that if the corresponding entry is less than half, you can conclude that it is e.g. an apple; otherwise, it is the other class. For the rest, just keep the output vector as it is. Finally, use the `sigmoid` activation function as the last layer if you are using neural nets.
117754
1
117758
null
0
14
I am calculating retention for 3 categories and then total, and I am trying to double check my total, but my check formula isn't working. I am comparing the last 14 days (let's call it Period 1) to the 14 days before that (let's call it Period 2). I am using the following formula: ![](https://latex.codecogs.com/svg.image?%5Cfrac%7B%5Cleft(Users%5C:in%5C:Period%5C:1%5C:-New%5C:Users%5C:Period%5C:1%5Cright)%7D%7BUsers%5C:in%5C:Period%5C:2%7D) I did it for each category and in total. - Category A: Users Period 1 = 9 New Users Period 1 = 4 Users Period 2 = 5 Category A Retention = 100% (Total users = 9) - Category B: Users Period 1 = 12 New Users Period 1 = 5 Users Period 2 = 10 Category B Retention = 70% (Total users = 15) - Category C: Users Period 1 = 2 New Users Period 1 = 2 Users Period 2 = 0 Category C Retention = NA because no users in Period 2 (Total users = 2) Now to calculate the total I can do it directly doing: - Users Period 1 = 23 - New Users Period 1 = 11 - Users Period 2 = 15 - Total Retention = 80% However, I would like to calculate that also using the categories to double check the total. What I am currently doing is: Note: I didn't include category C when calculating the weights because retention is not applicable because there were no users in period 2. So the weights are calculated as Category Users/(CatA+CatB Total Users) ![](https://latex.codecogs.com/svg.image?%5Cleft(Cat%5C:A%5C:Retention%5Ctimes&space;%5C:%5C:Cat%5C:A%5C:Weight%5Cright)%5C:+%5C:%5Cleft(Cat%5C:B%5C:Retention%5Ctimes&space;%5C:%5C:%5C:Cat%5C:B%5C:Weight%5Cright)=.784) Why is it not equal to .8? What am I doing wrong in this second method of calculating the total retention?
Total Retention Rate Calculated from Categories
CC BY-SA 4.0
null
2023-01-13T18:44:47.853
2023-01-13T20:12:24.740
null
null
144814
[ "dataset", "statistics", "data", "data-analysis", "performance" ]
You are using the incorrect weights when combining the two categories. Instead of using the total users you should instead use the number of users in the second period to weigh the categories. This then also automatically removes category C from the equation since this category contains zero users in the second period. $1 * (5 / 15) + 0.7 * (10 / 15) = 0.8$
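A quick numerical check of the weighted combination (my own sketch; the numbers are taken from the question and answer above):

```
# Period-2 user counts are the correct weights for combining category retentions.
retention_a, retention_b = 1.0, 0.7
users_p2_a, users_p2_b = 5, 10

total = retention_a * users_p2_a / 15 + retention_b * users_p2_b / 15
print(total)   # ~0.8, matching the direct calculation (23 - 11) / 15
```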
How to account for reduced student capacity when calculating program retention?
### Rationale Some of the terms are a little vague, particularly what you refer to as `eligible students` and `returned students`. I'll set some variables for clarity, but tell me if I defined them incorrectly. I assume them to mean: `eligible students` $ = A $ being the set of all students in the after-school program 2019-2020 `returning students` $ = A\cap S $ where $S$ is the set of all of summer camp 2020 students Now we can define `retention rate` to be $\frac{|A\cap S|}{|A|}$. I base this on the [US government definition of university student retention rate](https://fafsa.ed.gov/help/fotw91n.htm), which I think is pretty similar, but please correct me if it's not. --- The reasons as to why you might have $|A\cap S|<|A|$ are irrelevant. --- ### Example Let's say you have $|A|=100$ so that, in your situation, $|S|=90$. For simplicity, let's also say that there are no newcomers to the mix. The retention is, quite literally, how many students you retained. If you retained $90$ students, then the retention rate is $90/100$. Even though $|S|=|A|-10$, the number of students you retained $|A\cap S|$ as a percentage of the number of students in the original program $|A|$ is still $90\%$. It wouldn't make sense to normalize this to $100\%$.
117772
1
117777
null
1
196
I want to do cross validation. So should I split my data into train and test with sklearn train_test_split and use cross validation like this: ``` cross_validate(model, X_train, y_train, scoring='roc_auc', cv=5, n_jobs=-1) ``` Or should I make a function and split the data inside each fold differently like this: ``` def cross_val_evaluation(model): kf = KFold(n_splits=5) for train_index, test_index in kf.split(X, y): X_train, X_test = X.iloc[train_index, :], X.iloc[test_index, :] y_train, y_test = y.iloc[train_index], y.iloc[test_index] model.fit(X_train, y_train) y_pred_proba = model.predict_proba(X_test) cross_val_roc_auc_score = round(roc_auc_score(y_test, y_pred_proba[:,1]), 3) cross_roc_auc_score_lst_test.append(cross_val_roc_auc_score) print(f'AUC Score Cross_Val Proba Test: {round(np.mean(cross_roc_auc_score_lst_test),3)}') ```
Train test split before or train test split inside cross validation
CC BY-SA 4.0
null
2023-01-15T06:10:53.317
2023-01-15T15:39:29.200
null
null
141429
[ "machine-learning", "python", "cross-validation" ]
First, split the dataset into train and test sets. Then, cross-validate on the train set. You should use the scikit-learn functions; implementing the functionality yourself may introduce bugs.
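A minimal sketch of that workflow (my own illustration, not from the original answer; the data and model are placeholders standing in for the asker's X, y and model):

```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split, cross_validate

# Toy data standing in for the asker's X, y.
X, y = make_classification(n_samples=500, random_state=0)

# 1) Hold out a test set first ...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# 2) ... then cross-validate on the training portion only.
model = LogisticRegression(max_iter=1000)
cv_results = cross_validate(model, X_train, y_train, scoring="roc_auc", cv=5, n_jobs=-1)
print(cv_results["test_score"].mean())

# 3) Fit on the full training set and evaluate once on the held-out test set.
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```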
nested cross validation vs. train-test split
> You get to use the entire data you have as part of the training process (so the inner CV would essentially get to see all the data at some point). The model performance estimate you get could be more stable (in the sense that it is not based on a single run using the test data, but on multiple runs). You've covered the main benefits. However, it is important to point out that more stable specifically includes the benefit of not being dependent on how you split your data. With hold-out validation it may be that the distribution of your test set differs from your training set, thereby violating the key assumption of having training and test data coming from the same distribution in order to obtain an unbiased estimate of the model's performance. This is more likely to be a problem when the amount of data is limited. Therefore, when you have a very large dataset (and your model takes a long time to train) it is common to apply holdout validation (that is, k-fold CV for validation and holdout CV for testing). With models which are very costly to train (such as Neural Nets often are) it is common to apply holdout validation even to only medium-sized datasets (e.g. where medium-sized refers to not more than $200k$ datapoints as a ballpark figure).
117792
1
117801
null
0
91
Context: I am a pure mathematician trying to understand machine learning. I am studying it from various sources, now focusing on NLP and word embeddings. My question: What is the weight matrix for a neural network? How is it calculated, in layman's terms? (Or even in more complicated terms; I will appreciate any good resources for beginners.) Motivation: I was reading Aneesh Joshi's answer to [this question](https://datascience.stackexchange.com/questions/19869/what-is-the-feature-matrix-in-word2vec?newreg=dfaa8b2985bc48fdb801d8eca1380b36) and I don't understand it. I think my problem is that I don't sufficiently understand how a neural network is trained. I am new on Data Science Stack Exchange, so feel free to give me tips to improve my question. (I usually post on Math.StackExchange.) Thank you very much!
How is weight matrix calculated in a neural network?
CC BY-SA 4.0
null
2023-01-16T13:47:30.427
2023-01-16T19:07:51.730
null
null
144882
[ "neural-network", "word-embeddings", "word2vec", "one-hot-encoding", "linear-algebra" ]
So I think there are a few concepts being mixed up in your question; I will do my best to address them one by one. The "weight matrix" you refer to (if it's the same one as Aneesh Joshi's post) is related to "attention", a concept in Deep Learning that, in simple terms, changes the priority given to different parts of the input (quite literally like human attention) depending on how it is configured. This is not necessarily the weight of a single neuron in a neural network, so I will come back to this once I have cleared up the other concepts. To understand Transformers and NLP models, we should go back to just basic neural networks, and to understand basic neural networks we should understand what a neuron is, how it works, and how it is trained. History break: basically, for a long time (gaining particular popularity in the 1800s) a lot of modelling was done using linear models and linear regression (you see the effects of this in classical statistics/econometrics). However, as the name implies, this only captures linear relationships. There were two questions that led to resolving this: (1) how do we capture non-linear relationships? (2) how do we do so automatically in an algorithmic manner? For the first one, many approaches existed, such as Logistic regression. You might notice that in logistic regression, there is still a linear equation that is being estimated, just not as the final output. This disconnect between the input linear equation and output non-linear equation is now known as having an input Linear cell with an Activation function. In logistic regression, the Activation function is the logistic function, whereas in linear regression it is just the Identity function. In case it is still not clear, imagine this: Input -> Linear Function -> Activation Function -> Output. Putting that entire sequence together gives you the Perceptron (first introduced in the 1940s). Methods of optimizing this were done via gradient descent and various algorithms. Gradient descent is probably the most important to keep in mind and helps us answer the second question. Essentially, what you are trying to do in any form of automated model development is this: you have some linear equation (i.e. $y = \beta\times w + \beta_{0}$), which you feed an input, pass through an activation function (i.e. Identity, sigmoid, ReLU, tanh, etc), then get an output. And you want that output to match a value that you already know (in statistics, imagine $y$ and $\hat{y}$). In order to do that (at least in the case of a continuous target class) you need to know how far off your prediction is from the true value. You do this by taking the difference. However, the difference can be positive/negative, so usually we have some way of making this non-negative, by either squaring (Mean Squared Error) or taking the absolute value (Mean Absolute Error). This is known as our loss function. Essentially we would have developed a good estimator/predictor/model if our loss is 0 (which means that our prediction matches our target value). Linear regression solves this using Maximum Likelihood Estimation. Gradient descent effectively is an iterative algorithm that does the following. First we take the derivative of our loss function with respect to one of our model weights (i.e. w) and we apply something similar to the update that happens in the Newton-Raphson method, namely $w_{new} = w_{old} - step\times\frac{dL}{dw}$ where step is a step-size (either fixed or changing). 
A typical stopping condition is: if the change from the old weight to the new weight is sufficiently small, you stop updating, as you have reached a minimum. In a convex scenario this would also be your global minimum; however, classically you have no way of knowing this, which is why multiple algorithms use approaches like random starts (multiple starting points randomly generated) to try and avoid getting stuck in local minima (aka a loss value that is low but not the lowest it could be). Ok, so if you have read that, take a quick stretch and process it, since I am not entirely sure of the reader's background, so it may have been a lot to process or fairly straightforward. So far we covered how to capture non-linear relationships and how to do it in an automated way. So does a single neuron train like that? Yes. Does a collection of neurons train like that? No. Recall that a neuron is just a linear function with an activation function on top that performs some form of gradient descent. However a neural network is a chain of these, sometimes interconnecting, etc. So then how do we get a derivative of the loss, which is at the end of a network, to the very beginning? Through a technique that took off in the mid-1980s called backpropagation (as a rediscovery of techniques dating back to the 1960s). Essentially we take partial derivatives, and then through an application of the chain rule are easily able to propagate various portions of the loss backwards, updating our weights along the way. This is all done in a single pass of training, and thankfully, automatically. The classical approach is to feed in your entire dataset THEN take the loss gradient, known as Gradient Descent. Or you can feed only a single data point before updating, known as Stochastic Gradient Descent. There is, of course, a middle ground, thanks to GPUs: taking a batch of points, known as Batch learning. A question that might have come up is: is the derivative of our loss function just linear? No, because the activation can be non-linear, so essentially you are taking the derivatives of these non-linear functions, and that can be computationally expensive. The popular choice today is ReLU (Rectified Linear Unit) which, for some linear output $y$, is basically a max function saying $output = max(y,0)$, that's it. Things like weight initialization make more of an impact, as performance across different non-linear activation functions is pretty comparable. Ok, so we covered how a neuron works and how a neural network optimizes its weights in a single pass. A side note is you will often hear of "train/validation/test" sets, which are basically splits of your dataset. You split your dataset beforehand into a training subset, for, well, training, which is where you modify the weights through the process I described above for the entire training set or each data point (or batches of data points if you are using GPUs/batches in Deep Learning). Usually your data might need to be transformed depending on the functions you are using, and the mistake most practitioners make is pre-processing their data on the entire dataset, whereas the statistically correct way of doing so is on the training set only, and extrapolating to the validation/test set using that (since in the real world you may not have all the information). The validation set is there for you to test out different hyper-parameters (like step-size above, otherwise known as learning rate) or just compare different models. 
The test set is the final set that you use to truly see the quality of your model, after you have finished optimizing/training above. Now we can finally get to your question on attention. As I described a basic neural network, you may have noticed that it receives an input once, does a run through, and then optimizes. Well what if we want to get inputs that require some processing? Like images? Well this is where different architectures come up (and you can read this all over the web, I recommend D2L.ai or FastAI's free course), and for images the common one are Convolutional Neural Networks, which were useful in capturing reoccurring patterns and spatial locality. Sometimes we might want more than one input, aka a previous input influencing how we process this next input, this is where temporal based architectures come in like Recurrent Neural Networks, and this was what was initially used for languages, but chaining a bunch of neurons takes a while to process since we can't parallelize the operations (which is where the speed of neural network training comes from). Plus you would have to have some way of pre-processing your language input, such as converting everything to lowercase, removing non-useful words, tokenizing different parts of words, etc, depending on the task. There was quite a lot to deal with. Up until a few years ago when Attention based models came out called Transformers, and they have pretty much influenced all aspects of Deep Learning by allowing parallelization but also capturing interactions between inputs. A foundational approach is to pre-process inputs using an attention matrix (as you have mentioned). The attention matrix has a bit of complexity in terms of the math (Neuromatch's course covers this well), but to simplify it, it is effectively the shared context between two inputs (one denoted by the columns, the other the rows), like two sentences. The way this is trained (and it depends on the model) is generally by taking an input and converting it into a numerical representation (otherwise known as an embedding) and outer-multiplying these two vectors to produce a symmetric matrix, which obviously has the strongest parts on the main diagonal. The idea here is to then zero out the main diagonal, and then train the network to try and use the remaining cells to fill this main diagonal in (you can once again do this via a neural network and the process described above). Then you can apply this mechanism with two different inputs. In a translation task for example, you would have a sentence in language A and another in B, and their matrix would highlight the shared context, which may not be symmetric depending on the word order/structure, etc. Transformers are not like Recurrent Neural Networks in that they take an input at once and then give an output, but compromises exist and pre-training models is also a thing. There is a lot I haven't touched upon, but the resources I mentioned should be a useful starting point, and I wish you luck in your Data Science journey!
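To make the training loop described above concrete, here is a minimal NumPy sketch (my own illustration, not from the original answer) of a single neuron with an identity activation, a mean-squared-error loss, and the update rule $w_{new} = w_{old} - step\times\frac{dL}{dw}$; all numbers are made up:

```
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one input feature, a noisy linear target.
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 1.0 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0          # the "weights" to be learned
step = 0.1               # step size / learning rate

for epoch in range(200):
    y_hat = w * X[:, 0] + b                      # linear function (identity activation)
    loss = np.mean((y_hat - y) ** 2)             # mean squared error
    dL_dw = np.mean(2 * (y_hat - y) * X[:, 0])   # derivative of the loss w.r.t. w
    dL_db = np.mean(2 * (y_hat - y))             # derivative of the loss w.r.t. b
    w -= step * dL_dw                            # w_new = w_old - step * dL/dw
    b -= step * dL_db

print(w, b)   # should end up close to 3.0 and 1.0
```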
Weights in neural network
In parametric models such as linear regression, logistic regression and multi-layer perceptrons, weights are updated with regard to the "difference" between the output of your model and the real label. More precisely, weights are updated using the gradient descent / backpropagation procedure. It is composed of two parts: the forward pass and the backward pass. For a given observation (or a set of observations), the forward pass is about feeding the model with observations and outputting a result a. This output "a" is then compared with the real value, the label y. Using some cost function metric (such as Absolute Error or Square Error for regression purposes, cross-entropy for classification purposes...), we can then compute j(y,a), which is the error between the output and the real value. [](https://i.stack.imgur.com/3gxBy.png) We can now run the backward pass, which is about computing the derivative of the cost function with regard to any weight / bias coefficient in the logistic regression / neural network. We can then update the coefficients as follows: [](https://i.stack.imgur.com/g9H2F.png) [](https://i.stack.imgur.com/9wP3J.png) Where alpha is the learning rate. So to answer your question, weights are not updated until they reach some expected value. We are just trying to reach the minimum of the cost function surface by running the gradient descent procedure.
117796
1
117811
null
0
146
Let $\mathcal{X}$ be a finite domain and $k$ a number such that $k\leq|\mathcal{X}|$. Consider the hypothesis class $\mathcal{H}:=\big\{h:|\{\mathbf{x}\in\mathcal{X}:h(\mathbf{x})=1\}|=k\bigr\}$; that is, the class of hypotheses $h:\mathcal{X}\to\{0,1\}$ that assign label $1$ to exactly $k$ elements of $\mathcal{X}$. Question: What is the VC-dimension of $\mathcal{H}$? (Note that this is Exercise 6.2 (1) in the book "Understanding Machine Learning: From Theory to Algorithms" from Shalev-Shwartz & Ben-David.) The claim is that $\mathrm{VCdim}(\mathcal{H})=\min\{k,|\mathcal{X}|-k\}$. The solution that was provided to me proceeds as follows: - Show that $\mathrm{VCdim}(\mathcal{H})\leq\min\{k,|\mathcal{X}|-k\}$. - Show that $\mathrm{VCdim}(\mathcal{H})\geq\min\{k,|\mathcal{X}|-k\}$. I'm already struggling with the first part. Let $C\subseteq\mathcal{X}$ be a set of size $k+1$. Then, $C$ is not shattered by $\mathcal{H}$ as there is no $h\in\mathcal{H}$ satisfying $h(\mathbf{x})=1$ for all $\mathbf{x}\in C$ $\Rightarrow\;\mathrm{VCdim}(\mathcal{H})\leq k$. On the other hand, if $C\subseteq\mathcal{X}$ is of size $|\mathcal{X}|-k+1$, then $C$ is not shattered by $\mathcal{H}$ as there is no $h\in\mathcal{H}$ satisfying $h(\mathbf{x})=0$ for all $\mathbf{x}\in C$. Hence, $\mathrm{VCdim}(\mathcal{H})\leq\min\{k,|\mathcal{X}|-k\}$. I don't see why the second step in this line of reasoning is true. For example, consider some domain of cardinality $|\mathcal{X}|=4$ and let $k=3$. Then, $|\mathcal{X}|-k+1=2$. So when I pick any $2$ instances from $\mathcal{X}$ so that $C=\{\mathbf{x}_1,\mathbf{x}_2\}$, why is there no $h\in\mathcal{H}$ satisfying $h(\mathbf{x}_1)=h(\mathbf{x}_2)=0$? In my opinion, as $2<k=3$, this is actually the only possible labeling. I am convinced the general claim is correct but I believe the proof of the upper bound in the form above is incomplete. Or did I completely misunderstand the problem setting?
VC-dimension of the class of hypotheses that assign label $1$ to exactly $k$ points of some finite domain $\mathcal{X}$
CC BY-SA 4.0
null
2023-01-16T16:28:48.130
2023-01-17T20:26:52.330
2023-01-17T20:26:52.330
144893
144893
[ "machine-learning", "binary-classification", "vc-theory" ]
In your example, there is no such $h$. Every $h$ assigns 1 to three elements of the four, so can only assign 0 to one element.
VC dimension of hypothesis space of finite union of intervals
[VC dimension](https://en.wikipedia.org/wiki/Vapnik%E2%80%93Chervonenkis_dimension) is defined for a hypothesis space $H$, e.g. a set of binary classifiers $C \rightarrow \{0, 1\}$. For example, hypothesis space $$H=\{{\Bbb 1}_{x \le \theta}: \theta \in {\Bbb R}\}$$ has VC dimension $1$, because for any $C=\{a<b\}$, it does not contain a classifier that gives $\{a \rightarrow 0, b\rightarrow 1\}$. For example, a classifier from $H$ would be $f(x)={\Bbb 1}_{x \le a}$ that gives $\{a \rightarrow 1, b \rightarrow 0\}$. From C to H As you have illustrated in the comments, we can build a hypothesis space $H$ from $C$ as follows: $$H=\left\{{\Bbb 1}_{x \in C}: C = \left\{\bigcup_{i=1}^{k}(a_i, b_i): a_i, b_i \in {\Bbb R}, a_i < b_i, i=1,2,..,k\right\}\right\}$$ Meaning, each classifier in $H$ is a union of $k$ intervals that labels a point inside the union as $1$ and outside as $0$. VC dimension of this $H$ is $2k$: - For VC $\geq 2k$: Let $A$ be an arbitrary set , and $A \rightarrow \{0, 1\}$ be an arbitrary labeling. By going from minimum to maximum member of $A$, we can cover all adjacent $1$s with one interval, and only need to use another interval when there is a $0$ barrier. Therefore, we need $k$ intervals to cover $k$ isolated regions of $1$s. Furthermore, a set with $2k$ members has at most $k$ isolated $1$s (since to have $k+1$ isolated $1$s there should be $k$ $0$ barriers in-between), and thus, needs at most $k$ intervals. - For VC $< 2k+1$ by contradiction: for any ordered set $A_{2k+1}=\{a_1<...<a_{2k+1}\}$, there is labeling $a_k \rightarrow 1_{\text{k odd}}$, i.e. $\{a_1 \rightarrow 1, a_2 \rightarrow 0,...,a_{2k+1} \rightarrow 1\}$ with $k+1$ isolated $1$s which cannot be covered with $k$ intervals.
117828
1
117833
null
3
1158
I have a database of sentences which are about different topics. I want to automatically classify each sentence with one or more relevant tags based on the context of the sentence, as shown below: Sentence: The area of a circle is pi times the radius squared Expected tags: mathematics, geometry Is there any Python library or pre-trained model to generate such tags?
How to automatically classify a sentence or text based on its context?
CC BY-SA 4.0
null
2023-01-17T13:34:14.300
2023-05-14T16:25:41.850
null
null
42439
[ "machine-learning", "classification", "nlp", "text-mining", "text-classification" ]
To my knowledge, there is no such library or pre-trained model. Imho there is an important issue in the task as defined in the question, more exactly in the example: these tags seem natural for a human in the sense that they represent the general topics of the sentence. But technically one could find many other tags which are semantically relevant, for example ellipse, surface, calculation, formula, sciences, knowledge, classes, exercise... The correct granularity (the level of specificity/genericity) of the tags is intuitive for a human, not for a machine. So the task is possible: one can calculate all the semantically more general concepts, for instance with [WordNet](https://wordnet.princeton.edu/), but this would often return too many concepts, as in my example. A standard method in this case would be to take the top N according to some measure of semantic similarity. Notes: "classify" is not a good term for this, because classification is a supervised task where classes are known. And it's not really based on the "context" of the sentence ("context of X" usually means "information around X" in NLP), it's based on its content or meaning.
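A small sketch of the WordNet idea mentioned above (my own illustration, not from the original answer), assuming NLTK is installed and its WordNet data has been downloaded; it also shows the granularity problem, since the hypernyms quickly become very generic:

```
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)  # one-off download of the WordNet data

sentence = "The area of a circle is pi times the radius squared"

for word in ["circle", "radius", "pi"]:
    synsets = wn.synsets(word, pos=wn.NOUN)
    if not synsets:
        continue
    # Hypernyms = semantically more general concepts; note how quickly they
    # become too generic, which is the granularity problem described above.
    hypernyms = synsets[0].hypernyms()
    print(word, "->", [h.name() for h in hypernyms])
```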
Detect related sentences
[Word Mover’s Distance (WMD)](http://proceedings.mlr.press/v37/kusnerb15.pdf) is an algorithm for finding the distance between pairs of strings. It is based on word embeddings (e.g., word2vec) which encode the semantic meaning of words into dense vectors. > The WMD distance measures the dissimilarity between two text documents as the minimum amount of distance that the embedded words of one document need to "travel" to reach the embedded words of another document. For example: [](https://i.stack.imgur.com/DjJW1.png) Source: ["From Word Embeddings To Document Distances" Paper](http://proceedings.mlr.press/v37/kusnerb15.pdf) The [gensim package](https://github.com/RaRe-Technologies/gensim) has a [WMD implementation](https://radimrehurek.com/gensim/models/keyedvectors.html).
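A minimal sketch of using gensim's WMD implementation (my own illustration; it assumes gensim, its downloader data, and the optional earth-mover's-distance dependency are installed, and the example sentences are the classic ones from the WMD paper):

```
import gensim.downloader as api

# Downloads pretrained vectors on first use (any word2vec-style model works).
wv = api.load("glove-wiki-gigaword-50")

s1 = "obama speaks to the media in illinois".split()
s2 = "the president greets the press in chicago".split()
s3 = "the cat sat on the mat".split()

# Lower distance = more similar meaning.
print(wv.wmdistance(s1, s2))
print(wv.wmdistance(s1, s3))
```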
117870
1
117871
null
0
40
[](https://i.stack.imgur.com/fiNG3.png) I got `ValueError: Found array with dim 3. None expected <= 2.` I don't know which array has dim 3. Can DecisionTreeClassifier not take one-hot encoded classes? But according to this page it should be supported: [https://scikit-learn.org/stable/modules/multiclass.html](https://scikit-learn.org/stable/modules/multiclass.html) The label is constructed by ``` from sklearn.preprocessing import OneHotEncoder onehot_encoder = OneHotEncoder(sparse=False) label = onehot_encoder.fit_transform(y.values.reshape(-1,1)) ``` where y is like [](https://i.stack.imgur.com/oYhjO.png)
DecisionTreeClassifier cannot take one-hot encoded classes?
CC BY-SA 4.0
null
2023-01-18T21:17:41.307
2023-01-18T22:31:36.110
2023-01-18T21:25:14.497
130605
130605
[ "scikit-learn", "decision-trees", "multiclass-classification", "one-hot-encoding" ]
You need to integer encode your labels instead of one-hot encoding them. [1, 0, 0] -> 0 [0, 1, 0] -> 1 [0, 0, 1] -> 2 so that the labels for multiclass classification (with K classes) that you provide to sklearn are just integers in the set $\{0,1,...,K-1 \}$
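A short sketch of both routes (my own illustration; the class names and features are hypothetical, since the asker's y is only shown as a screenshot):

```
import numpy as np
from sklearn.preprocessing import LabelEncoder
from sklearn.tree import DecisionTreeClassifier

# Hypothetical string labels standing in for the asker's y column.
y = np.array(["cat", "dog", "bird", "cat", "dog"])

# Option 1: integer-encode the labels directly instead of one-hot encoding them.
le = LabelEncoder()
y_int = le.fit_transform(y)            # array([1, 2, 0, 1, 2]) with alphabetical classes

# Option 2: if a one-hot matrix already exists, collapse it back with argmax.
onehot = np.eye(3)[y_int]
y_from_onehot = onehot.argmax(axis=1)  # identical to y_int

X = np.random.rand(5, 4)               # dummy features, just for illustration
clf = DecisionTreeClassifier().fit(X, y_int)
print(le.inverse_transform(clf.predict(X)))
```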
label encoding or one-hot encoding or none when using decision tree?
> If we are using label encoder we would only need to convert gender however if that maps male = 0, female = 1 wouldn't the machine treat female > male? You are correct, using label encoder to encode categorical features is wrong in general, for the reason you mention. Note that [scikit documentation](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.LabelEncoder.html) advises against using it with features, it's supposed to be used only with a response variable. In the particular case of a binary variable like "gender" to be used in decision trees, it actually does not matter to use label encoder because the only thing the decision tree algorithm can do is to split the variable into two values: whether the condition is `gender > 0.5` or `gender == female` would give the exact same results. Also note that whether the variable is interpreted as ordinal or not is a matter of implementation. For example in Weka it's possible to specify that a feature is categorical ("nominal"). > and if it ignores ordinality it will ignore level1 < level2. Not necessarily, because in theory it's possible to have features with different types (e.g. some categorical and some numerical). However this may depend on the implementation as well.
117874
1
117888
null
0
35
I have a dataset with 608 inputs and I'm trying to output a single 1 or 0 result. My validation data has 69.12% 0's. When run, my model always returns 69.12% accuracy, presumably because it's "good enough" given the imbalance in the validation set. I've added an input normalization layer, but it has no effect. I've been told it's not a good idea to normalize the validation data. I feel like I'm missing something fundamental here. Any advice?
Normalization / Overfitting Issues
CC BY-SA 4.0
null
2023-01-18T22:47:23.807
2023-01-19T10:19:51.907
null
null
144979
[ "neural-network", "overfitting", "normalization" ]
I do not think that the normalization is causing the problem. The fact that you have 69.12% of samples in class zero and 69.12% accuracy is very suspicious. This means that if the model classifies every sample as class zero, that would give you an accuracy of 69.12%, as it would correctly classify all of class 0 and incorrectly classify all of class 1. This can happen if there is a really strong overfitting effect. You can try rebuilding the neural net with a different shape, or try using more classical binary classification models that do not involve a NN. Keep in mind that there is a possibility that your problem is unsolvable with the features you have; experimenting with different models will give you an idea of whether you are doing something wrong or the features do not determine the class.
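One way to check whether the model has collapsed to the majority class, and a common mitigation via class weighting (my own sketch with synthetic data, not from the original answer):

```
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix, balanced_accuracy_score

# Hypothetical stand-in for the asker's data: an imbalanced binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 0.7).astype(int)

# A majority-class dummy model reproduces the "majority" accuracy with zero skill.
dummy = DummyClassifier(strategy="most_frequent").fit(X, y)
print("dummy accuracy:", dummy.score(X, y))

# Check the confusion matrix instead of plain accuracy, and try class weighting.
clf = LogisticRegression(class_weight="balanced").fit(X, y)
print(confusion_matrix(y, clf.predict(X)))
print("balanced accuracy:", balanced_accuracy_score(y, clf.predict(X)))
```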
Why normalization kills my accuracy
There could be a skewed power envelope or non-stationary data. As a result, off-the-shelf feature scaling could attenuate the signal. There are feature scaling techniques that tend to work better for audio signals, examples include: RMS level (Root Mean Square Level), Cepstral Mean Subtraction (CMS), RelAtive SpecTrAl (RASTA), kernel filtering, short time gaussianization, stochastic matching, and feature warping. You should make sure you understand your raw data and the assumptions of each feature scaling technique before application. Accuracy-driven machine learning might lead to the wrong conclusions.
117877
1
117903
null
0
32
I am doing a research project as a 2nd author on a paper exploring the properties of a novel algorithm for Optimal Variable Selection where I am running the benchmark Variable Selection Methods. Each of these 3 Benchmarks has been run on the same set of 260,000 synthetic datasets with known properties. But, because this is intended for publication, the lead author has asked that I re-perform my analysis using a different statistical programming language or software package, however, I really only know R, so I told him I can just do it using a different function from a different package in R but with the same random seed beforehand. My code for loading in the data and running my Backward Elimination & Forward Selection Stepwise Regressions (miscellaneous lines of code used for sorting, reformatting, and preprocessing the data has been omitted) is included below with the context being explained via my comments in the code: ``` # Extract all of the individual spreadsheet containing workbooks # in the file folder called 'Data' which is filled # random synthetic observations to run FS, Stepwise, and eventually EER # on to compare the results. There are 260k spreadsheets in this folder. folderpath <- "C:/Users/Spencer/.../datasets folder" paths_list <- list.files(path = folderpath, full.names = T, recursive = T) ## This command reads all of the data in each of the N csv files and stores that ## data for each of them in their own data.table (all data.tables are data.frames) ## and stores all of N data.tables in a list object. CL <- makeCluster(detectCores() - 2L) clusterExport(CL, c('paths_list')) system.time(datasets <- parLapply(cl = CL, X = paths_list, fun = data.table::fread)) system.time(Structural_IVs <- lapply(datasets, function(j) {j[1, -1]})) system.time( True_Regressors <- lapply(Structural_IVs, function(i) { names(i)[i == 1] })) ### Run a Backward Elimination Stepwise Regression ### on each of the 260,000 datasets using the step function. set.seed(11) # for reproducibility system.time( BE.fits <- parLapply(cl = CL, X = datasets, \(X) { full_models <- lm(X$Y ~ ., X) back <- stats::step(full_models, scope = formula(full_models), direction = 'back', trace = FALSE) }) ) # extract the coefficients and their corresponding variable names BE_Coeffs <- lapply(seq_along(BE.fits), function(i) coef(BE.fits[[i]])) # extract the names of all IVs selected by them without their intercepts IVs_Selected_by_BE <- lapply(seq_along(BE.fits), \(i) names(coef(BE.fits[[i]])[-1])) ``` And from there, to make a long story short, I just compared what is returned by True_Regressors for each dataset to whatever is returned by IVs_Selected_by_BE for that same dataset, how I do this will presumably be the same or very similar. What I need is a good suggestion for another package and function to use to do all of this over again as a great big sanity check! For completeness, my complete code used to run all the Forward Selection Stepwise Regressions is included below as well: ``` ### Run a Forward Selection Stepwise Regression ### function on each of the 260,000 datasets. 
set.seed(11) # for reproducibility system.time( FS.fits <- parLapply(cl = CL, X = datasets, \(X) { nulls <- lm(X$Y ~ 1, X) full_models <- lm(X$Y ~ ., X) forward <- stats::step(object = nulls, direction = 'forward', scope = formula(full_models), trace = FALSE) }) ) # extract the coefficients and their corresponding variable names FS_Coeffs <- lapply(seq_along(FS.fits), function(i) coef(FS.fits[[i]])) # assign all regressors selected by Forward Stepwise Regression, # not including the Intercepts, to IVs_Selected_by_FS IVs_Selected_by_FS <- lapply(seq_along(FS.fits), \(i) names(coef(FS.fits[[i]])[-1])) ``` I have already attempted to do this using the ols_step_backward_aic function from the olsrr package, but I cannot get it to run for the life of me and it is continuously throwing up errors that don't make much sense. So, now I am looking for an alternative way of running Stepwise Regressions in R.
Best package and function in R to use to replicate my (Backward & Forward) Stepwise Regression results I got using step from the stats package
CC BY-SA 4.0
null
2023-01-19T06:25:41.793
2023-01-20T00:28:15.973
null
null
105709
[ "machine-learning", "regression", "r", "feature-selection", "replication" ]
Based on the comment from Oxbowerce, I looked into using the MASS library instead of olsrr, and fortunately, the inherent similarity in the syntax of the MASS::stepAIC() function and the stats::step() function is quite high, at least in this application! Here is the version of my backward elimination stepwise regression function estimated using this new function instead: ``` ### Step 2: Run a Backward Elimination Stepwise Regression ### function on each of the 260,000 datasets. set.seed(11) # for reproducibility system.time( BE.fits <- lapply(X = datasets, \(ds_i) { null_models <- lm(ds_i$Y ~ 1, ds_i) full_models <- lm(ds_i$Y ~ ., data = ds_i) back <- MASS::stepAIC(full_models, direction = 'backward', scope = list(upper = full_models, lower = null_models), trace = FALSE) }) ) ``` As you can see, the only substantive syntactical difference is in the scope argument (because using 'backward' also works for the step function, you don't have to use 'back' as I did here), where it changes from using formula to list, and from only requiring the upper argument to be specified to requiring both, as my original forward selection stepwise regression estimation code using the stats function does. The neatest part about using this scheme to reproduce my results is that I can reuse the following lines of code, which isolate and save the coefficient estimates and then the names of the variables selected, as is; no adjustments whatsoever are necessary! p.s. One important note for anyone trying to perform a similar reproduction of prior results when you originally used the stats::step() function is that you must use an alternative which also uses the AIC as its selection criterion, because by default the stats::step() function uses this criterion when selecting optimal candidate factors.
Quantifying the performance of Stepwise Regression ran on Monte Carlo generated datasets & comparing them to your method of interest
The first step is to quantify the total number of 'Positives', i.e., the total number of structural factors explaining each dataset, and this is straight-forward since you already have that number stored implicitly in your True_Regressors object. All you will need to do is use: ``` num_of_Positives <- lapply(True_Regressors, function(i) { length(i) }) ``` Then, you need write a function which determines and returns you the names of each of the 'True Positives', that is, the names of all the candidate regressors/independent variables your stepwise regression selected which are actually structural variables. One way to do that would be this: ``` True_Positives <- lapply(seq_along(All_sample_obs), \(k) sum(IVs_Selected_by_BE[[k]] %in% True_Regressors[[k]])) ``` And because the definition of the True Positive Rate is the number of True Positives over the total number of all Positives, you can calculate this using: ``` TPRs = lapply(seq_along(All_sample_obs), \(j) j <- (BE_TPs[[j]]/num_of_Positives[[j]])) ```
117900
1
117910
null
0
93
I have about 3 weeks of 15 minute building electricity power data and curious to know how can I predict an entire days worth of electricity into the future? 96 Future values that makes up 24 hours...my model only outputs/predicts 1 step into the future any tips greatly appreciated how to modify my approach. I can make this LSTM model where I am only using 2 `EPOCHS` just for testing purposes: ``` # https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/ import numpy as np import matplotlib.pyplot as plt import tensorflow as tf from pandas import read_csv from tensorflow.keras.models import Sequential from tensorflow.keras.layers import Dense from tensorflow.keras.layers import LSTM from sklearn.preprocessing import MinMaxScaler from sklearn.metrics import mean_squared_error power = read_csv('https://raw.githubusercontent.com/bbartling/Data/master/months_data.csv', index_col=[0], parse_dates=True) plt.figure(figsize=(15, 7)) plt.plot(power.kW) plt.title('kW 15 Minute Intervals') plt.grid(True) plt.show() def create_dataset(dataset, look_back=1): dataX, dataY = [], [] for i in range(len(dataset)-look_back-1): a = dataset[i:(i+look_back), 0] dataX.append(a) dataY.append(dataset[i + look_back, 0]) return np.array(dataX), np.array(dataY) # normalize the dataset scaler = MinMaxScaler(feature_range=(0, 1)) dataset = scaler.fit_transform(power.values) EPOCHS = 2 # just for testing purposes # split into train and test sets train_size = int(len(dataset) * 0.67) test_size = len(dataset) - train_size train, test = dataset[0:train_size,:], dataset[train_size:len(dataset),:] # reshape into X=t and Y=t+1 look_back = 96 # number of 15 min intervals in one day trainX, trainY = create_dataset(train, look_back) testX, testY = create_dataset(test, look_back) # reshape input to be [samples, time steps, features] trainX = np.reshape(trainX, (trainX.shape[0], 1, trainX.shape[1])) testX = np.reshape(testX, (testX.shape[0], 1, testX.shape[1])) # create and fit the LSTM network model = Sequential() model.add(LSTM(4, input_shape=(1, look_back))) model.add(Dense(1)) model.compile(loss='mean_squared_error', optimizer='adam') model.fit(trainX, trainY, epochs=EPOCHS, batch_size=1, verbose=2) # make predictions trainPredict = model.predict(trainX) testPredict = model.predict(testX) # invert predictions trainPredict = scaler.inverse_transform(trainPredict) trainY = scaler.inverse_transform([trainY]) testPredict = scaler.inverse_transform(testPredict) testY = scaler.inverse_transform([testY]) # calculate root mean squared error trainScore = np.sqrt(mean_squared_error(trainY[0], trainPredict[:,0])) print('Train Score: %.2f RMSE' % (trainScore)) testScore = np.sqrt(mean_squared_error(testY[0], testPredict[:,0])) print('Test Score: %.2f RMSE' % (testScore)) # shift train predictions for plotting trainPredictPlot = np.empty_like(dataset) trainPredictPlot[:, :] = np.nan trainPredictPlot[look_back:len(trainPredict)+look_back, :] = trainPredict # shift test predictions for plotting testPredictPlot = np.empty_like(dataset) testPredictPlot[:, :] = np.nan testPredictPlot[len(trainPredict)+(look_back*2)+1:len(dataset)-1, :] = testPredict # plot baseline and predictions plt.plot(scaler.inverse_transform(dataset)) plt.plot(trainPredictPlot) plt.plot(testPredictPlot) plt.show() ``` Where it seems to work: [](https://i.stack.imgur.com/Jfy77.png) [](https://i.stack.imgur.com/OHUKo.png) But how do I predict or forecast into the future an entire day? 
I have another day's worth of data that the model hasn't seen before: ``` testday = read_csv('https://raw.githubusercontent.com/bbartling/Data/master/test_day.csv', index_col=[0], parse_dates=True) testday_scaled = scaler.fit_transform(testday.values) testday_scaled = np.swapaxes(testday_scaled, 0, 1) testday_scaled = testday_scaled[None, ...] testday_predict = model.predict(testday_scaled) testday_predict = scaler.inverse_transform(testday_predict) print(testday_predict[0][0]) ``` For example, the printed value `39.37902` is only 1 step into the future. How do I predict 24 hours, a whole day, ahead into the future?
LSTMs how to forecast out N steps
CC BY-SA 4.0
null
2023-01-19T21:57:24.730
2023-01-21T16:16:15.523
null
null
66386
[ "python", "keras", "tensorflow", "time-series", "lstm" ]
You can use the prediction of the network as an actual value, and create a new input tensor using the previous input with a new column to the right (and removing the first column, since you imposed the network input to be 96 columns wide). Then, you can use such a tensor as input to the model, getting another value. If you repeat this process multiple times (autoregressively), you can predict as far into the future as you want.
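A minimal sketch of that autoregressive loop (my own illustration, not from the original answer), assuming the objects from the question (`model`, `scaler`, `look_back` = 96, and the scaled `dataset`) are in scope:

```
import numpy as np

def forecast(model, last_window, n_steps=96):
    """Feed each one-step prediction back in as input to roll the forecast forward."""
    window = last_window.copy()
    preds = []
    for _ in range(n_steps):
        x = window.reshape(1, 1, len(window))      # [samples, time steps, features]
        yhat = model.predict(x, verbose=0)[0, 0]   # one step ahead
        preds.append(yhat)
        window = np.append(window[1:], yhat)       # drop oldest value, append prediction
    return np.array(preds)

# Example usage (in scaled space, inverting at the end):
# last_window = dataset[-look_back:, 0]
# future_scaled = forecast(model, last_window, n_steps=96)
# future_kw = scaler.inverse_transform(future_scaled.reshape(-1, 1))
```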
Multivariate, Multi-step LSTM time series forecast
I know this tutorial; it's a good start for RNNs, but it contains a lot of passages and transformations that could have been kept shorter. First, he defines a function called `series_to_supervised()` to process data to be fed into an RNN. Paragraph 3, line 37: ``` reframed = series_to_supervised(scaled, 1, 1) ``` This `reframed` dataframe contains all the data, both the y column and all the X variables needed to make a prediction. In the following code block it is turned into a numpy array at line 2: ``` values = reframed.values ``` Ok, so now all our information is stored in `values`. Now it's time to separate it into train and test: ``` train = values[:n_train_hours, :] test = values[n_train_hours:, :] ``` And again, each of `train` and `test` is separated into x and y pieces: ``` # split into input and outputs train_X, train_y = train[:, :-1], train[:, -1] test_X, test_y = test[:, :-1], test[:, -1] ``` Lines 7-8. This is where the dependent variable is separated from the rest. Knowing it was the last column, it was extracted with index `-1` (i.e. the last element). As I said, this tutorial is a good start to learn time series prediction with RNNs. However, I find that sometimes he tried to simplify the steps so much... that he ended up with some messy parts. All those objects: `reframed`, `values`, `train`, `test`, ... there was no need to make so many of them. That apart, I'm a fan of the blog. It provided a lot of useful tips on RNNs.
117913
1
117914
null
3
249
I recently realized that keras callback for early stopping returns the last epoch's weights by default. If you want to do otherwise you can use the argument `restore_best_weights=True`, as stated for example in this [answer](https://datascience.stackexchange.com/a/37507/142936) or [documentation](https://www.tensorflow.org/api_docs/python/tf/keras/callbacks/EarlyStopping). I'm quite surprised by that, as I would assume one would only be interested in the best model at the end of the training. So why is the default set to `restore_best_weights=False`? Is there any practical reason that I am missing?
What is the purpose of EarlyStopping returning last epoch's weights by default?
CC BY-SA 4.0
null
2023-01-20T10:09:02.260
2023-01-20T11:25:25.843
null
null
142936
[ "machine-learning", "deep-learning", "keras", "early-stopping" ]
There may be two reasons: - Memory and speed. This is discussed in this Keras issue (note that there is a similar question in this very site that contains the answer in the comments). This is the key paragraph of the discussion: If you want to restore the weights that are giving the best performance, you have to keep tracks on them and thus have to store them. This can be costly as you have to keep another entire model in memory and can make fitting the model slower as well. - Backward compatibility. Initially, there was no such flag and the only implemented behavior was not to restore the best weights. When adding the flag, the sensible decision would be to give it a default value that makes the old code keep its behaviour instead of silently changing it. The commit where the flag was introduced is this one. This is only speculation, as I am not the author.
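For completeness, a minimal Keras sketch of opting in to the non-default behaviour (my own illustration; the model and data are placeholders):

```
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss",
    patience=5,
    restore_best_weights=True,   # the default is False, for the reasons above
)

model = tf.keras.Sequential([tf.keras.layers.Input(shape=(10,)), tf.keras.layers.Dense(1)])
model.compile(optimizer="adam", loss="mse")

x = tf.random.normal((256, 10))
y = tf.random.normal((256, 1))
model.fit(x, y, validation_split=0.2, epochs=100, callbacks=[early_stop], verbose=0)
```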
Keras EarlyStopping callback: Why would I ever set restore_best_weights=False?
The default value is `restore_best_weights=False`. There may be two reasons for such a default value: - Memory and speed. This is discussed in this Keras issue. This is the key paragraph of the discussion: If you want to restore the weights that are giving the best performance, you have to keep tracks on them and thus have to store them. This can be costly as you have to keep another entire model in memory and can make fitting the model slower as well. - Backward compatibility. Initially, there was no such flag and the only implemented behavior was not to restore the best weights. When adding the flag, the sensible decision would be to give it a default value that makes the old code keep its behaviour instead of silently changing it. The commit where the flag was introduced is this one. This is only speculation, as I am not the author. Despite the default value, I would say that, if you have no problem in keeping another copy of the model in memory (i.e. because it is a huge model), the most sensible value is `restore_best_weights=True`.
117942
1
117978
null
0
135
I've been doing machine learning for a few months now. I have a basic question that I couldn't answer by myself; it's possible I'm asking the wrong question: when training models like XGBoost, you can't predict on new data that doesn't have the same number of columns. Do the column names matter? How do the number of columns and the column names interact when predicting? Thanks in advance.
Machine learning | Column names vs number when training/predicting
CC BY-SA 4.0
null
2023-01-21T15:28:56.503
2023-01-23T15:15:44.707
2023-01-21T15:29:48.607
145077
145077
[ "machine-learning", "machine-learning-model", "xgboost" ]
It depends on the model. Some models work without column names, others require them. The models that do not require column names take into consideration the position of the column. If your columns are not named and are misplaced, you will have a really bad time. As a best practice, keep the columns named and in the same positions as in the training set. Some models even work with fewer columns than the training set; however, this will result in low accuracy and will throw a warning: "different number of features ...". If you have fewer columns in the new data, just train the model with those same columns. Do not train the model on more columns than you will have in the new (unseen) data.
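A small sketch of the best practice described above (my own illustration; it assumes pandas and xgboost are installed, and the column names and values are made up):

```
import pandas as pd
from xgboost import XGBRegressor

train = pd.DataFrame({"a": [1, 2, 3, 4], "b": [0.1, 0.2, 0.3, 0.4], "y": [1.0, 2.1, 2.9, 4.2]})
X_train, y_train = train[["a", "b"]], train["y"]

model = XGBRegressor(n_estimators=10).fit(X_train, y_train)

# New data arriving with the columns in a different order:
new_data = pd.DataFrame({"b": [0.25], "a": [2.5]})

# Safest practice: reorder (and subset) the new data to match the training columns.
new_data = new_data[X_train.columns]
print(model.predict(new_data))
```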
What does the number after a machine learning model name mean?
As @Icrmorin said the naming conventions may vary but for the examples you gave, ResNet and DenseNet, the numbers in the name correspond to the number of layers: --- DenseNet Table 1 in the [Densenet paper](https://arxiv.org/pdf/1608.06993.pdf) provides an overview: [](https://i.stack.imgur.com/fB5pM.jpg) As you can see, for example, in the DenseNet-121 column this network has $1+6*2+1+12*2+1+24*2+1+16*2 + 1 = 121$ layers and that is where the name is derived from. --- ResNet The [ResNet paper](https://arxiv.org/pdf/1512.03385.pdf) provides a similar overview: [](https://i.stack.imgur.com/FoH7Z.jpg) Again, you can see how the names are derived: for example ResNet-18 has $1+2*2+2*2+2*2+2*2+1=18$ layers. --- Note that in both papers only conv. and dense layers are counted but not the pooling layers.
117967
1
118110
null
-1
91
I am trying to manually compute the predictions of the Keras library for a convolutional neural network. However, I am struggling a lot to match my final result with the ones provided by Keras. I would appreciate it if you could help me with this question. I have an $r\times c$ tensor that includes categorical values. I apply the one-hot encoding method to convert this tensor to zeros and ones, which results in an $r\times c \times m$ tensor (a multi-channel tensor). I am trying to develop a regressor CNN to predict some quantitative values. The model summary is as follows: ``` Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 27, 3, 10) 550 conv2d_1 (Conv2D) (None, 27, 3, 20) 1820 max_pooling2d (MaxPooling2D (None, 9, 1, 20) 0 ) flatten (Flatten) (None, 180) 0 dense (Dense) (None, 20) 3620 dropout (Dropout) (None, 20) 0 dense_1 (Dense) (None, 10) 210 dense_2 (Dense) (None, 1) 11 ================================================================= Total params: 6,211 Trainable params: 6,211 Non-trainable params: 0 _________________________________________________________________ ``` - My understanding is that each channel/kernel has a bias that will be added to all of its elements after the convolution. Is that correct? - For the flattening layer, my understanding is that one should start by flattening the first channel, then the second channel, and so forth. For each channel, one should flatten by rows (i.e., start from the first row, then the second row, and so forth). Am I correct? To fully grasp how a CNN works, I am using the weights determined by the Keras model, computing the prediction manually, and comparing it with the prediction of Keras. However, my manual calculation is very different from the Keras prediction. Would you please look at my code and let me know where I am making a mistake? To reproduce the result, you may download the sample input file uploaded on [dropbox](https://www.dropbox.com/s/1cfpo538snnp352/NeuralNet_Inst0.pkl?dl=0). Please note that this program is written for this specific test and may not be generalizable.
``` import itertools import pickle import numpy as np import tensorflow as tf from tensorflow.keras.utils import to_categorical from keras.models import Sequential from keras.layers import Dense, Conv2D, MaxPooling2D, Flatten, Dropout from sklearn.model_selection import KFold from tensorflow import keras import keras.backend as K with open('NeuralNet_Inst0.pkl', 'rb') as file: result, sol = pickle.load(file) # sol is the original input tensor cZ1 = to_categorical(sol) # the converted input tensor weight = result['weight'] # weights obtained by Keras nNC = result['nNC'] # number of neurons in convolution layers nNF = result['nNF'] # number of neurons in fully connected layers layers layers = result['layers'] # dimension of layers E = result['kSize'][0] # size of kernal P = result['pSize'][0] # size of pool actC = result['actC'] # activation function of convolution layers actF = result['actF'] # activation function of fully connected layers cRow = layers[0][0] cCol = layers[0][1] ### Convolution Layers ### cZ2 = np.zeros((cRow, cCol, nNC[0]), dtype=np.float32) w2 = weight[0] b2 = weight[1] for n in range(nNC[0]): for t, m in itertools.product(range(cRow), range(cCol)): for npp, e, ep in itertools.product(range(cZ1.shape[2]), range(-1, E-1), range(-1, E-1)): if (t+e >= 0) and (m+ep >= 0) and (t+e <= cRow-1) and (m+ep <= cCol-1): cZ2[t, m, n] += w2[e+1, ep+1, npp, n] * cZ1[t+e, m+ep, npp] cZ2[:, :, n] += b2[n] cZ2 = np.maximum(0, cZ2) # ReLU activation function cZ3 = np.zeros((cRow, cCol, nNC[1]), dtype=np.float32) w3 = weight[2] b3 = weight[3] for n in range(nNC[1]): for t, m in itertools.product(range(cRow), range(cCol)): for npp, e, ep in itertools.product(range(cZ2.shape[2]), range(-1, E-1), range(-1, E-1)): if (t+e >= 0) and (m+ep >= 0) and (t+e <= cRow-1) and (m+ep <= cCol-1): cZ3[t, m, n] += w3[e+1, ep+1, npp, n] * cZ2[t+e, m+ep, npp] cZ3[:, :, n] += b3[n] cZ3 = np.maximum(cZ3, cZ3 * result['alpha']) # leaky ReLU activation function ### flattening layer ### fZ1 = np.zeros(layers[2][0] * layers[2][1] * layers[2][2], dtype=np.float32) cnt1 = 0 cnt2 = 0 for t in range(layers[2][0]): for n in range(layers[2][2]): fZ1[cnt1] = np.max(cZ3[cnt2:cnt2 + P, :, n]) cnt1 += 1 cnt2 += P ### fully connected layers fZ2 = np.zeros(nNF[0], dtype=np.float32) w2 = weight[4] b2 = weight[5] for n in range(nNF[0]): for npp in range(w2.shape[0]): fZ2[n] += w2[npp, n] * fZ1[npp] fZ2[n] += b2[n] fZ2 = np.maximum(fZ2, fZ2*result['alpha']) # leaky ReLU activation function fZ3 = np.zeros(nNF[1], dtype=np.float32) w3 = weight[6] b3 = weight[7] for n in range(nNF[1]): for npp in range(nNF[0]): fZ3[n] += w3[npp, n] * fZ2[npp] fZ3[n] += b3[n] fZ3 = np.maximum(fZ3, fZ3*result['alpha']) # leaky ReLU activation function fZ4 = 0 w4 = weight[8] b4 = weight[9] for npp in range(nNF[1]): fZ4 += w4[npp, 0] * fZ3[npp] fZ4 += b4[0] fZ4 = np.maximum(fZ4, fZ4*result['alpha']) # leaky ReLU activation function print('My manual prediction is: ', fZ4) print('The prediction of Keras is: ', 8.0) print('The actual label of this data is: ', 0.6197) ```
Manual computation of the predictions in a convolutional neural network
CC BY-SA 4.0
null
2023-01-22T20:40:23.570
2023-01-28T03:51:31.037
2023-01-26T01:14:37.690
144907
144907
[ "neural-network", "tensorflow", "cnn", "convolutional-neural-network", "categorical-data" ]
I was able to debug the above code by obtaining the [outputs of intermediate layers](https://keras.io/getting_started/faq/#how-can-i-obtain-the-output-of-an-intermediate-layer-feature-extraction) in Keras. I replaced `nNF[1]` with `nNF[-2]` in the last loop and resolved the problem.
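For reference, a minimal sketch of obtaining intermediate layer outputs in Keras, following the FAQ linked above (`model` and `sample` are placeholders for the trained model and one input batch):
```
import tensorflow as tf

# A helper model that shares the trained weights but returns every layer's activations.
intermediate_model = tf.keras.Model(
    inputs=model.input,
    outputs=[layer.output for layer in model.layers],
)

activations = intermediate_model(sample)  # `sample` must include the batch axis
for layer, act in zip(model.layers, activations):
    print(layer.name, act.shape)
```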
Simple prediction with Keras
What you are trying to do here is forecast the future values of a time series. This is a predictive problem and the future values will depend on a number of latent factors. I will assume all we have access to is historical data from the series as your question indicates. If you want to predict a future value for the time series, you should not only use the current value as an input, but rather you should use a chunk of the historical data. Since you have 18,000,000 instances, this is a lot, you can make your network quite complex in order to capture some latent trends hidden inside your data which can help predict the future value. To predict a value at time $t$ we will use the $k$ previous values. This hyper-parameter needs to be effectively tuned. # Restructure the data We will structure the data such that the features $X$ are the $k$ previous time measurements, and the output target $Y$ is the current time measurement. The one that is being estimated by the model. ``` k = 3 X, Y = [], [] for i in range(len(col1) - k): X.append(col2[i:i+k]) Y.append(col2[i+k]) X = np.asarray(X) Y = np.asarray(Y) ``` # Split your data ``` from sklearn.cross_validation import train_test_split x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.33) ``` # Using the data in a Keras model This is a simple Keras model which should work as a first iteration step. However, due to the small amount of data you provided us I cannot get any meaningful results after training. ``` import keras from keras.datasets import mnist from keras.models import Sequential from keras.layers import Dense, Dropout, Flatten from keras.layers import Conv1D, MaxPooling1D, Reshape from keras.callbacks import ModelCheckpoint from keras.models import model_from_json from keras import backend as K x_train = x_train.reshape(len(x_train), k, ) x_test = x_test.reshape(len(x_test), k, ) input_shape = (k,) model = Sequential() model.add(Dense(32, activation='tanh', input_shape=input_shape)) model.add(Dense(32, activation='tanh')) model.add(Dense(1, activation='linear')) model.compile(loss=keras.losses.mean_squared_error, optimizer=keras.optimizers.Adadelta(), metrics=['accuracy']) model.summary() epochs = 10 batch_size = 128 # Fit the model weights. history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, verbose=1, validation_data=(x_test, y_test)) ```
117996
1
118001
null
0
50
In my dataset, I have 500 abstracts. The goal is to cluster them into 2 topics. One topic must contain the abstracts that include a given list of words (or similar words), and the rest of the abstracts go in the other topic. Can anyone kindly offer me suggestions on how to do this?
NLP topic clustering
CC BY-SA 4.0
null
2023-01-24T05:57:17.373
2023-01-24T08:08:41.627
2023-01-24T07:41:40.740
98614
145174
[ "nlp", "clustering", "unsupervised-learning" ]
For clustering the abstracts, I would suggest the following steps: - In order to make the abstracts mathematically comparable, we need to convert them to a vector representation. This can be done, for example, by using a word2vec model to get vector representations for each meaningful word (excluding stop words such as 'a' and 'the') and then, for example, taking the average of the word vectors to represent a single abstract. - Now that we have a way to represent abstracts, we can compare them mathematically. To cluster them, one obvious way is to use a clustering algorithm, such as K-means clustering. In your case, we want to set k (the number of clusters) to 2. Note: such clustering algorithms are non-deterministic. This means that every time you run the clustering algorithm, you may get different results. If you want something more deterministic, then I would recommend something like hierarchical clustering, cutting the hierarchy when it reaches the desired number of clusters (2 in this case).
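A rough sketch of this pipeline, assuming you already have a mapping from words to pretrained vectors (e.g. from a word2vec model); `word_vectors` and `abstracts` are placeholder names:
```
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def abstract_vector(text, word_vectors, dim=300):
    # Average the vectors of the words we know; ignore everything else.
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

X = np.vstack([abstract_vector(a, word_vectors) for a in abstracts])

# Non-deterministic option: k-means with k=2 (fix random_state for repeatability).
km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# More deterministic option: hierarchical clustering cut at 2 clusters.
hc_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
```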
Cluster documents based on topic similarity
As @Emre suggested, if you already have the distribution of topics in each document, you can represent each document as a vector $x_d \in \mathbb{R}^N$, where $N$ is the number of the unique topics in your collection. For documents, not exhibiting specific topics just fill the specific cells in each feature vector with zeros. Then, you can use some clustering algorithm such as [nearest neigbors](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.NearestNeighbors.html#sklearn.neighbors.NearestNeighbors), using those feature vectors. Example usage code in python below: ``` import pandas as pd import numpy as np from sklearn.metrics import pairwise_distances # Initialize some documents doc1 = {'Science':0.7, 'History':0.05, 'Politics':0.15, 'Sports':0.1} doc2 = {'News':0.3, 'Art':0.5, 'Politics':0.1, 'Sports':0.1} doc3 = {'Science':0.8, 'History':0.1, 'Politics':0.05, 'News':0.1} doc4 = {'Science':0.2, 'Weather':0.2, 'Art':0.6, 'Sports':0.1} collection = [doc1, doc2, doc3, doc4] df = pd.DataFrame(collection) # Fill missing values with zeros df.fillna(0, inplace=True) # Get Feature Vectors feature_matrix = df.as_matrix() # Get cosine similarity (i.e. 1 - cosine_distance) between pairs sims = 1-pairwise_distances(feature_matrix, metric='cosine') # Get the ranking of the documents given document 0 # from most similar to least similar. Don't take into account # the first document, because it will be the same that the query was about ranking_1 = np.argsort(sims[0,:])[::-1][1:2] print ranking_1 ranking_2 = np.argsort(sims[1,:])[::-1][1:2] print ranking_2 ``` This takes 4 documents with $N=7$ unique topics, fills missing values with zeros and creates a similarity matrix between all documents. Then querying for documents 1(Science=0.7) and 2(Art:0.5) the most similar other document in the collection, we surely get documents 3(Science:0.8) and 4(Art:0.6) correspondingly. You can try more sophisticated approaches regarding clustering and other [distance metrics](http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.DistanceMetric.html).
117998
1
118009
null
0
38
My NumPy array looks like this ``` array([-5.65998629e-02, -1.21844731e-01, 2.44745120e-01, 1.73819885e-01, -1.99641913e-01, -9.42866057e-02, ..])] ['آؤ_بھگت' array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., ..]) ] ['آؤلی' array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0..]) ] ``` When I want to search some specific word I use the built in function ``` arr_index = np.where(x == 'شعلہ_مزاجی') print(arr_index) print(x[arr_index]) ``` When I print it gives the index, but not the second value How to get the second value in numpy array?
Get the value of second dimension in numpy array
CC BY-SA 4.0
null
2023-01-24T07:32:37.160
2023-02-03T02:43:00.147
2023-01-24T07:45:04.233
98614
144994
[ "nlp", "word-embeddings", "numpy" ]
Update: I was not getting the actual index value because `np.where` returns a tuple of index arrays, and nothing useful is printed if the word has no stored embedding. Extracting the row index first and then reading the second column works:
```
arr_index = np.where(x == 'یے')
rows = arr_index[0]            # row indices of the matching word
if len(rows) > 0:
    row = int(rows[0])
    value = x[row][1]          # the embedding stored in the second column
    print(value)
```
This will give the embeddings
Change the shape of numpy array
The trick I used is as below. First, delete that column with the following command
```
arr = np.delete(x, 0, axis=1)
```
Second, flatten the array and make it a list
```
flt = arr.flatten()
flt = list(flt)
```
Last, make a new numpy array to restore the dimensions
```
new_arr = np.array(flt)
```
It gives me the desired shape
118026
1
118027
null
0
41
I'm working on an audio classification experiment. In my original database, I have 1,412 records. To improve the performance of my models, I resorted to data augmentation, applying simple techniques such as noise addition and pitch reduction. After this step, my dataset had 7,060 records, including the original 1,412. My big question is: how should I proceed with the classification experiments now? The results are better, but I don't know whether having both original data and copies inflates the results. Does anyone know what the best strategy might be for this situation? Is there any way to prevent copies and originals from ending up in different sets (training and testing)? Sorry for the naive question.
How to perform a classification experiment with data augmentation?
CC BY-SA 4.0
null
2023-01-25T13:19:46.197
2023-01-25T13:47:01.707
null
null
143696
[ "machine-learning", "classification", "data-augmentation" ]
Data augmentation must be done after splitting your data into training and testing sets to avoid data leakage.
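A minimal sketch of that ordering; `augment` stands for whatever noise-addition or pitch-shift routine you use and is not a real library function:
```
from sklearn.model_selection import train_test_split

# 1) Split the original 1,412 records first.
X_train, X_test, y_train, y_test = train_test_split(
    X_original, y_original, test_size=0.25, random_state=42, stratify=y_original
)

# 2) Augment only the training portion; the test set stays untouched.
X_train_aug, y_train_aug = augment(X_train, y_train)  # hypothetical augmentation routine

# 3) Train on the augmented training set, evaluate on the clean test set.
model.fit(X_train_aug, y_train_aug)
print(model.score(X_test, y_test))
```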
Data augmentation parameters
This entirely depends on your data! Generally, the more augmentation, the more situations your model will be exposed to during training and therefore the more robust it will be when being tested on unseen data. However, what if, for example, we were working on a model for self-driving cars? Using `vertical_flip` just doesn't make sense, because the car will (hopefully!) never be driving along on its roof. I would suggest starting with no augmentation and slowly adding one possibility at a time. For example, you record an accuracy of 80% with no augmentation. Then adding `featurewise_center` and `featurewise_std_normalization` gives you an accuracy of 85%. Then adding `horizontal_flip` gets you to 90%. Finally, you try adding `zca_whitening` and that sends you back down to 86%. The reverse approach may also work well for you, starting with all augmentation parameters turned on and removing them one by one. In any case, it is completely dependent on your specific problem and your available data. [Keras' ImageDataGenerator has a long list of parameters,](https://keras.io/preprocessing/image/#imagedatagenerator-class) so having a think about what makes sense will save you a lot of time.
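As a hedged sketch of that incremental workflow with Keras' `ImageDataGenerator` (`x_train`, `y_train`, `x_val`, `y_val`, and `model` are placeholders):
```
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Baseline: no augmentation.
datagen = ImageDataGenerator()

# Next iteration: add feature-wise normalization.
datagen = ImageDataGenerator(featurewise_center=True,
                             featurewise_std_normalization=True)
datagen.fit(x_train)  # needed so the generator can compute the dataset statistics

# Next iteration: also allow horizontal flips.
datagen = ImageDataGenerator(featurewise_center=True,
                             featurewise_std_normalization=True,
                             horizontal_flip=True)
datagen.fit(x_train)

model.fit(datagen.flow(x_train, y_train, batch_size=32),
          validation_data=(x_val, y_val),
          epochs=10)
```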
118035
1
118288
null
1
129
I have a large text corpus (i.e. 30 million sentences, all in lowercase, in the format of the Penn Treebank) that I want to use to train a neural network for natural language generation. What preprocessing steps would you recommend here? The sentences originate from formal text (i.e. books). I plan to use named entity recognition to replace named entities such as people, locations, and organisations during training and generation, and to add them back in for the final output. Any other suggestions?
Preprocessing advice for large text corpus in natural language generation (NLG)
CC BY-SA 4.0
null
2023-01-25T19:14:51.740
2023-02-05T03:13:24.157
null
null
141192
[ "nlp", "preprocessing", "text-generation" ]
Some comments: - With Transformers and subword vocabularies (e.g. byte-pair encoding (BPE)), usually there is no need to remove named entities because the model learns to handle them just fine. For instance, in machine translation, models learn to copy them verbatim or to translate them without much problem. My advice would be not to overcomplicate things unless proven necessary. - Again, with Transformers and BPE there is usually no need for much preprocessing. If anything, I would ensure there is no garbage in your data. What has worked for me in the past is to sort the sentences and eyeball the first and last ones, where you can usually find the garbage, and remove them manually.
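A small sketch of that kind of sanity check (the file name is a placeholder; for a 30-million-sentence corpus you may prefer command-line tools, but the idea is the same):
```
# Sort the corpus and eyeball both ends, where malformed lines tend to show up.
with open("corpus.txt", encoding="utf-8") as f:
    sentences = [line.strip() for line in f if line.strip()]

sentences = sorted(set(sentences))  # also drops exact duplicates

print("\n".join(sentences[:20]))   # first lines
print("\n".join(sentences[-20:]))  # last lines
```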
How to preprocess with NLP a big dataset for text classification
Let me first clarify the general principle of classification with text data. Note that I'm assuming that you're using a "traditional" method (like decision trees), as opposed to Deep Learning (DL) method. As you correctly understand, each individual text document (instance) has to be represented as a vector of features, each feature representing a word. But there is a crucial constraint: every feature/word must be at the same position in the vector for all the documents. This is because that's how the learning algorithm can find patterns across instances. For example the decision tree algorithm might create a condition corresponding to "does the document contains the word 'cat'?", and the only way for the model to correctly detect if this condition is satisfied is if the word 'cat' is consistently represented at index $i$ in the vector for every instance. For the record this is very similar to one-hot-encoding: the variable "word" has many possible values, each of them must be represented as a different feature. This means that you cannot use a different index representation for every instance, as you currently do. > Vectors generated from those texts needs to have the same dimension Does padding them with zeroes make any sense? As you probably understood now, no it doesn't. > Vectors for prediction needs also to have the same dimension as those from the training Yes, they must not only have the same dimension but also have the same exact features/words in the same order. > At prediction phase, those words that hasn't been added to the corpus are ignored Absolutely, any out of vocabulary word (word which doesn't appear in the training data) has to be ignored. It would be unusable anyway since the model has no idea which class it is related to. > Also, the vectorization doesn't make much sense since they are like [0, 1, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] and this is different to [1, 0, 2, 3, 4, 1, 2, 3, 5, 1, 2, 3] even though they both contain the same information Indeed, you had the right intuition that there was a problem there, it's the same issue as above. Now of course you go back to solving the problem of fitting these very long vectors in memory. So in theory the vector length is the full vocabulary size, but in practice there are several good reasons not to keep all the words, more precisely to remove the least frequent words: - The least frequent words are difficult to use by the model. A word which appears only once (btw it's called a hapax legomenon, in case you want to impress people with fancy terms ;) ) doesn't help at all, because it might appear by chance with a particular class. Worse, it can cause overfitting: if the model creates a rule that classifies any document containing this word as class C (because in the training 100% of the documents with this word are class C, even though there's only one) and it turns out that the word has nothing specific to class C, the model will make errors. Statistically it's very risky to draw conclusions from a small sample, so the least frequent words are often "bad features". - You're going to like this one: texts in natural language follow a Zipf distribution. This means that in any text there's a small number of distinct words which appear frequently and a high number of distinct words which appear rarely. 
As a result removing the least frequent words reduces the size of the vocabulary very quickly (because there are many rare words) but it doesn't remove a large proportion of the text (because the most frequent occurrences are frequent words). For example removing the words which appear only once might reduce the vocabulary size by half, while reducing the text size by only 3%. So practically what you need to do is this: - Calculate the word frequency for every distinct word across all the documents in the training data (only in the training data). Note that you need to store only one dict in memory so it's doable. Sort it by frequency and store it somewhere in a file. - Decide a minimum frequency $N$ in order to obtain your reduced vocabulary by removing all the words which have frequency lower than $N$. - Represent every document as a vector using only this predefined vocabulary (and fixed indexes, of course). Now you can train a model and evaluate it on a test set. Note that you could try different values of $N$ (2,3,4,...) and observe which one gives the best performance (it's not necessarily the lowest one, for the reasons mentioned above). If you do that you should normally use a validation set distinct from the final test set, because evaluating several times on the test set is like "cheating" (this is called [data leakage](https://en.wikipedia.org/wiki/Leakage_(machine_learning))).
118081
1
118087
null
0
30
My numpy array has the shape (99, 2). Basically it has 2 columns: one is the word and the other is a one-hot encoding vector of size 300. I want to delete the column of words and keep only the encoding vectors. To delete it I have to use this simple line of code
```
arr = np.delete(x, 0, axis=1)
```
It deletes all the words but gives me the shape (99, 1). How can I get the shape (99, 300)?
Change the shape of numpy array
CC BY-SA 4.0
null
2023-01-27T10:29:19.190
2023-01-27T12:57:51.990
null
null
144994
[ "word-embeddings", "numpy", "python-3.x", "one-hot-encoding" ]
The trick I used is as below. First, delete that column with the following command
```
arr = np.delete(x, 0, axis=1)
```
Second, flatten the array and make it a list
```
flt = arr.flatten()
flt = list(flt)
```
Last, make a new numpy array to restore the dimensions
```
new_arr = np.array(flt)
```
It gives me the desired shape
How to replace values in a numpy array?
``` import numpy as np a = np.array(['PAIDOFF', 'COLLECTION', 'COLLECTION', 'PAIDOFF']) f = lambda x: 1 if x == "COLLECTION" else 0 np.fromiter(map(f,a),dtype=np.int) ``` Alternative: ``` np.where(a == "COLLECTION",1,0) ```
118085
1
118093
null
0
26
I have a pandas data frame
```
data=df.loc[[0]]
print(data)
0 3.5257,3.5257,3.5257000000000005,3.5257,3.5257...
Name: testdata, dtype: object
```
I need to convert it to a numpy array and plot it as a figure
problem on converting data to numpy array
CC BY-SA 4.0
null
2023-01-27T12:06:20.113
2023-01-27T15:39:46.667
null
null
102693
[ "pandas", "numpy", "matplotlib" ]
You can use the `.values` attribute of the dataframe to convert it to a numpy array, and then use the numpy array to plot the figure. Here's an example:
```
import matplotlib.pyplot as plt
data = df.loc[[0]].values[0]
plt.plot(data)
plt.show()
```
Note that in the above code, `df.loc[[0]].values[0]` is used to extract the numpy array from the dataframe, as the output of `df.loc[[0]]` is still a dataframe. You can also use the `.to_numpy()` method to convert the dataframe to a numpy array.
```
data = df.to_numpy()
plt.plot(data)
plt.show()
```
Also make sure that you have the matplotlib library imported and that the data contains numerical values; otherwise you will get an error while plotting.
Error in numpy array assignment
The index of your `test_ip` array goes from 0 to 16. So 17 is indeed too big (out of bounds). You could add a try/except: ``` for counter, img in enumerate(image_generator1): try: test_ip[counter] = img except IndexError as err: print(f"Filled the array with {counter + 1} images") # f-string requires >= python 3.6 ``` Or if you know the length of that generator, just use this: ``` test_ip = np.array([im for im in image_generator1]) ```
118105
1
118108
null
2
84
In an experiment involving the comparison of classification algorithms, how can I assess whether there are statistically significant differences between the analyzed models? For example, the following models were tested and achieved the respective accuracies: CatBoost 79.32%; XGBoost 77.05%; SVM 76.77%; LightGBM 76.48%; RF 73.93%; KNN 70.25%. What method can I use to validate the choice of CatBoost over XGBoost, for example? Or to show that KNN can be just as useful as an SVM? Details: the same dataset for all models; training/test split (75/25); default values for all models; multiclass classification. Is there any way to justify the choice of one of the methods above? Something that goes beyond the trivial "choose the one with the highest accuracy". Is it possible to employ some hypothesis test? (e.g. Chi-square; Wilcoxon; ANOVA)
How to present a statistical justification for the choice of models with approximate accuracies?
CC BY-SA 4.0
null
2023-01-27T20:11:08.803
2023-01-27T23:50:57.843
2023-01-27T20:44:30.470
143696
143696
[ "machine-learning", "python", "classification", "hypothesis-testing", "model-evaluations" ]
A widely accepted approach for your problem is using the Critical Difference Diagram. At this time, the original [paper](https://www.jmlr.org/papers/volume7/demsar06a/demsar06a.pdf) has 12274 citations (and counting). Implementations: - Python; - Julia; - R.
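A critical difference diagram is built on a Friedman test followed by a post-hoc (e.g. Nemenyi) analysis, so you need several scores per model (for example from repeated cross-validation folds) rather than a single accuracy from one 75/25 split. A minimal, hedged sketch of the first step with SciPy (the numbers are purely illustrative):
```
import numpy as np
from scipy.stats import friedmanchisquare

# Rows = evaluation folds/datasets, columns = models (illustrative values).
scores = np.array([
    # CatBoost, XGBoost, SVM, LightGBM, RF, KNN
    [0.79, 0.77, 0.76, 0.76, 0.74, 0.70],
    [0.80, 0.78, 0.77, 0.75, 0.73, 0.71],
    [0.78, 0.76, 0.77, 0.77, 0.74, 0.69],
    [0.81, 0.77, 0.75, 0.76, 0.75, 0.70],
])

stat, p_value = friedmanchisquare(*[scores[:, i] for i in range(scores.shape[1])])
print(f"Friedman statistic = {stat:.3f}, p-value = {p_value:.4f}")
# If p < 0.05, follow up with a post-hoc test (e.g. Nemenyi) and plot the CD diagram.
```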
What metrics determine the quality of the model?
> Unscaled and scaled r2's are not highly correlated (0.31 AAMOF). Which one would best describe the accuracy of the model on unseen data? I don't think this is a matter of which will describe the generalization error better, because both of them are describing the same thing, just on different scales. So, the advice would be to use the accuracy metric consistent with the metric that will be used for predictions on unseen data. > Why isn't the unscaled r2 the same as the scaled r2? This is because [MSE is scale dependent.](https://stats.stackexchange.com/questions/11636/the-difference-between-mse-and-mape) > The model r2 is not the same as any of the validation r2's during training (val_r2_keras). Shouldn't the trained model r2 be the same as the one reported during the training? Why do you think so? They are different because the datasets for training and for validation are different.
118114
1
118120
null
0
59
In the TensorFlow example ([https://www.tensorflow.org/tutorials/generative/dcgan#the_discriminator](https://www.tensorflow.org/tutorials/generative/dcgan#the_discriminator)) the discriminator has a single output neuron (assume batch_size=1). Then, in the training loop, the generator's `BinaryCrossentropy` loss is calculated using the discriminator's output, which has the shape [1]. The loss gradients are then calculated by plugging the prediction and label into the `BinaryCrossentropy` derivative, whose resultant shape is also [1]. How is this [1]-shaped gradient fed backward into the generator's layers when its shape doesn't match, given that the `Conv2DTranspose` layer expects gradients whose shape matches its output? $\frac{dL}{dZ} = \frac{dL}{dA}*\frac{dA}{dZ}$ <--- the first term's shape is [1] but the second term's shape is the same as the `Conv2DTranspose` output shape, so the Hadamard product can't be computed. How does backpropagation still work?
GAN Generator Backpropagation Gradient Shape Doesn't Match
CC BY-SA 4.0
null
2023-01-28T10:14:18.910
2023-01-28T15:03:27.207
2023-01-28T10:15:12.987
145297
145297
[ "machine-learning", "neural-network", "gradient-descent", "backpropagation", "gan" ]
What needs to match is the input of the discriminator with the output of the generator. The gradient is backpropagated from the `[1]`-shaped loss back up through the whole discriminator and then through the generator, from its output back to its input.
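A minimal sketch in the style of the linked tutorial: because the generator loss is computed through the discriminator, the tape backpropagates the scalar loss through both networks and produces gradients whose shapes match each generator variable (`generator`, `discriminator`, and `generator_optimizer` are assumed to be defined as in the tutorial):
```
import tensorflow as tf

cross_entropy = tf.keras.losses.BinaryCrossentropy(from_logits=True)

@tf.function
def generator_train_step(batch_size, noise_dim):
    noise = tf.random.normal([batch_size, noise_dim])
    with tf.GradientTape() as tape:
        fake_images = generator(noise, training=True)             # e.g. (batch, 28, 28, 1)
        fake_logits = discriminator(fake_images, training=True)   # shape (batch, 1)
        gen_loss = cross_entropy(tf.ones_like(fake_logits), fake_logits)  # scalar
    # The tape differentiates through the discriminator *and* the generator, so each
    # gradient below has the same shape as the corresponding generator variable.
    grads = tape.gradient(gen_loss, generator.trainable_variables)
    generator_optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return gen_loss
```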
GAN to generate a custom image does not work
Generating text as an image is extremely difficult and I have never seen a GAN applied in the image space to generate pages of text. The reason this is so hard is because of the way in which text is perceived by humans and the way a GAN works. Humans read arbitrary symbols which are sequenced from left to right along the same line and combined into rows. Moreover, these symbols are combined into groups which represent words. This is extremely complex. The symbols must be intelligible, the words must be real ones as invented by humans. Lastly, the combination of words into sentences need to be logical and follow guidelines of human language. And even FURTHER the sequence of sentences must be coherent to transmit a message. A GAN operating in image space will try to learn the distribution of the training set in a pixel-wise manner as that is your inputs. The distribution of the pixels will not effectively be able to group characters together in a logical manner, and the words will not be real, and the sentences will all be nonsense. You will most likely end up with blurry lines of random looking symbols, kind of like a zebra print. Another problem is the amount of data you have. Even if this problem was possible with a GAN you would need tens of thousands of instances to train a GAN effectively. --- # What I suggest I suggest to read the texts that you are training with and use this data to train a LSTM which has been shown to be very effective for generating language. But, be warned even with the best LSTMs you rarely get text that can fool humans into thinking its real. The LSTM will provide you with your text. Then you can train a GAN to generate the characters of the alphabet and you can use this generator to print out the text that the LSTM generates.
118118
1
118119
null
0
172
I built a tensorflow model for text classification into four categories. After testing and evaluating it, I need to apply it to actual data to predict their classes, so I created a predict function that returns the probability of each class for a given text. I read my data and apply the prediction function using pandas.
```
df.apply(lambda x: predict(x['text']), axis=1)
```
What I need is to append the prediction values to my original data frame, such as:
```
text class1_prob. class2_prob class3_prob class4_prob
------ ------------ ----------- ------------ -----------
1st string 0.1 0.2 0.4 0.3
```
How can I achieve that if my prediction function returns the probabilities for one string as:
```
[[0.1 0.2 0.4 0.3]]
```
Append prediction of tensorflow to a pandas dataframe
CC BY-SA 4.0
null
2023-01-28T14:26:35.450
2023-01-28T14:45:40.300
null
null
138468
[ "tensorflow", "pandas", "prediction", "dataframe" ]
After searching, I found a way: use pd.concat to concatenate the predictions dataframe with the original data frame
```
predictions = df.apply(lambda x: predict(x['text']), axis=1)
predictions_df = pd.DataFrame(
    predictions.tolist(),
    columns=['class1_prob', 'class2_prob', 'class3_prob', 'class4_prob'])
df = pd.concat([df, predictions_df], axis=1)
```
Prediction on timeseries data using tensorflow
The error is caused by this line: ``` print('epoch',epoch,'MSE=',mse.eval()) ``` This happens because the tensor `mse` also depends on the placeholders `X` and `y`. One way to fix this would be to change the training loop to be: ``` for X_batch,y_batch in generate_batch(input,output,batch_size): mse_val, _ = session.run([mse, training_op],feed_dict={X:X_batch,y:y_batch}) if epoch % 10 == 0: print('epoch',epoch,'MSE=',mse_val) ``` Also you will need to switch `X` back to `tf.float32` since `tf.matmul` is not compatible with int and float. The data will automatically be casted once you feed it in. To add a bias variable, you can do it similarly to how you define `theta`. ``` b = tf.Variable(0.0, dtype=tf.float32, name='b') ... y_predictions += b ```
118121
1
118300
null
1
181
I am having issues with audio embedding using the wav2vec library while trying to classify emotions using audio signals from the EMODB dataset (an emotion dataset in German). I am using the following code to extract embeddings:
```
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('facebook/wav2vec2-large-xlsr-53') #XLSR is for SR, not specifically Emotion Rec.
input_audio, sample_rate = librosa.load(emodb + file, sr=16000)
extraction = feature_extractor(input_audio, sampling_rate=16000, return_tensors="np", padding="max_length", max_length=max_len).input_values
```
The embeddings and shapes of the vectors are: - (1, 143652) for wav2vec features - (3, 162) for mfcc features Please note I have padded them to the highest value. The length of the audio files is around 1 to 2 seconds. [](https://i.stack.imgur.com/bL11B.png) My intended task is emotion detection. I plan to use these embeddings from the audio files, along with the text, for a downstream emotion classification model, and for this I plan to use a multimodal approach using audio and text embeddings. So, I trained an LSTM model on these embeddings, but it was constantly overfitting on the training data (~100% accuracy and ~20% on testing). Then I decided to use wav2vec embeddings and MFCC embeddings for a simple classification task using SVM. When I use the resulting embeddings in a simple SVM classifier, I am getting random results (15-30% accuracy) for wav2vec embeddings. As a comparison, when I extract features using MFCC and use them in the same classifier, I am getting an accuracy of around 70%. [](https://i.stack.imgur.com/u5dbK.png) Naturally, I visualized the embeddings using TSNE to check the quality of the input, and I found that I was getting strange results. Specifically, when I map 7 emotions, the resulting plot forms a spiral shape. When I only map 2 emotions, the resulting plots are different and also strange. The mappings are circular again when I add more features (3+). [](https://i.stack.imgur.com/HOno6.png) I am unable to understand why I am getting these results and why the embeddings are so poor. I am wondering if this is because I am using a general XLSR model without fine-tuning it for emotion recognition. I would appreciate any suggestions on how to extract features using wav2vec in a better way, or any papers or implementations that may be helpful.
Issues with audio embedding using wav2vec
CC BY-SA 4.0
null
2023-01-28T15:21:58.817
2023-02-05T13:05:06.830
2023-01-30T07:22:39.777
130360
130360
[ "nlp", "feature-extraction", "audio-recognition", "tsne" ]
> I am wondering if this is because I am using a general XLSR model without fine-tuning it for emotion recognition. That might be still true but your approach contains a fundamental error you should eliminate first. You are using the class [Wav2Vec2FeatureExtractor](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor) to extract the features from an audio file, but this class is not a neural network. It is a preprocessor that pads and normalizes the floating point time series from librosa. As stated by the [documentation](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L85) the normalization makes sure that the array has: > zero mean and unit variance These features, will therefore only makes sense for the model that was trained with it. When you trained an SVM with it, you actually compared if an SVM can beat wav2vec2 and not if wav2vec2 is better than MFCC! To get the actual embeddings from the wav2vec2 model, you can use the following code: ``` import librosa import torch from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model input_audio, sample_rate = librosa.load("/content/bla.wav", sr=16000) model_name = "facebook/wav2vec2-large-xlsr-53" feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name) model = Wav2Vec2Model.from_pretrained(model_name) i= feature_extractor(input_audio, return_tensors="pt", sampling_rate=sample_rate) with torch.no_grad(): o= model(i.input_values) print(o.keys()) print(o.last_hidden_state.shape) print(o.extract_features.shape) ``` Output: ``` odict_keys(['last_hidden_state', 'extract_features']) torch.Size([1, 1676, 1024]) torch.Size([1, 1676, 512]) ``` Please refer to this [StackOverflow post](https://stackoverflow.com/a/69275576) for the difference between `last_hidden_state` and `extract_features`. As you can see, the features are multi-dimensional for my file ([bacth_size, seq_len, hidden_size]), which means you probably want to apply some pooling (e.g. mean). P.S.: Another point that comes to my mind when I look at your question, is if those embeddings are actually meaningful by themselves. For the pure BERT, [we know](https://github.com/google-research/bert/issues/164#issuecomment-441324222) that the sentence embeddings: > I'm not sure what these vectors are, since BERT does not generate meaningful sentence vectors. I assume you will probably need to fine-tune wav2vec2 a bit. You can use huggingfaces [Wav2Vec2ForSequenceClassification](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification) class for that.
How to extract embeddings from an audio file using wav2vec along with context
As answer and explained in detail by @cronoik on this [post](https://datascience.stackexchange.com/questions/118121/issues-with-audio-embedding-using-wav2vec), following is the code to get wav2vec embeddings. ``` import librosa import torch from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model input_audio, sample_rate = librosa.load("/content/bla.wav", sr=16000) model_name = "facebook/wav2vec2-large-xlsr-53" feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name) model = Wav2Vec2Model.from_pretrained(model_name) i= feature_extractor(input_audio, return_tensors="pt", sampling_rate=sample_rate) with torch.no_grad(): o= model(i.input_values) print(o.keys()) print(o.last_hidden_state.shape) print(o.extract_features.shape) ``` OUTPUT: ``` odict_keys(['last_hidden_state', 'extract_features']) torch.Size([1, 1676, 1024]) torch.Size([1, 1676, 512]) ``` The features are multi-dimensional for sample file ([bacth_size, seq_len, hidden_size]), and probably will need some pooling (e.g. mean) to be applied. EXPLANATION: - feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name) loads the Wav2Vec2FeatureExtractor component of the Wav2Vec2 architecture using the from_pretrained method from the transformers library. This component is used for normalizing the audio signals. - model = Wav2Vec2Model.from_pretrained(model_name) loads the Wav2Vec2Model component of the Wav2Vec2 architecture, which is used for generating representations of the audio signals. - i= feature_extractor(input_audio, return_tensors="pt", sampling_rate=sample_rate) normalizes the input audio signal input_audio by subtracting the mean and dividing by the standard deviation, and returns the normalized signal as a PyTorch tensor. The sampling_rate and return_tensors arguments are also passed. - with torch.no_grad(): is a context manager that disables gradient computation during the forward pass of the model, reducing memory usage and speeding up computation. - o= model(i.input_values) generates a representation of the normalized audio signal i.input_values using the Wav2Vec2Model component. The representation is returned as a dictionary o containing multiple outputs.
118124
1
118305
null
0
337
I am trying to use wav2vec embeddings from the XLSR model for emotion recognition on the EMODB dataset. How can I extract embeddings using wav2vec? I want to use the XLSR model pre-trained with wav2vec, but I am not sure how to extract embeddings from audio files to use for emotion recognition. I have made attempts like the following, but they are not correct; this results in random mappings.
```
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained('facebook/wav2vec2-large-xlsr-53') #XLSR is for SR, not specifically Emotion Rec.
input_audio, sample_rate = librosa.load(emodb + file, sr=16000)
extraction = feature_extractor(input_audio, sampling_rate=16000, return_tensors="np", padding="max_length", max_length=max_len).input_values
```
Is there a series of steps to follow, or any libraries or methods I can use to extract the embeddings? Are there any examples or tutorials that I can follow to get started?
How to extract embeddings from an audio file using wav2vec along with context
CC BY-SA 4.0
null
2023-01-28T18:31:45.083
2023-02-05T18:50:30.037
null
null
130360
[ "nlp", "feature-extraction", "transformer", "audio-recognition" ]
As answer and explained in detail by @cronoik on this [post](https://datascience.stackexchange.com/questions/118121/issues-with-audio-embedding-using-wav2vec), following is the code to get wav2vec embeddings. ``` import librosa import torch from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model input_audio, sample_rate = librosa.load("/content/bla.wav", sr=16000) model_name = "facebook/wav2vec2-large-xlsr-53" feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name) model = Wav2Vec2Model.from_pretrained(model_name) i= feature_extractor(input_audio, return_tensors="pt", sampling_rate=sample_rate) with torch.no_grad(): o= model(i.input_values) print(o.keys()) print(o.last_hidden_state.shape) print(o.extract_features.shape) ``` OUTPUT: ``` odict_keys(['last_hidden_state', 'extract_features']) torch.Size([1, 1676, 1024]) torch.Size([1, 1676, 512]) ``` The features are multi-dimensional for sample file ([bacth_size, seq_len, hidden_size]), and probably will need some pooling (e.g. mean) to be applied. EXPLANATION: - feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name) loads the Wav2Vec2FeatureExtractor component of the Wav2Vec2 architecture using the from_pretrained method from the transformers library. This component is used for normalizing the audio signals. - model = Wav2Vec2Model.from_pretrained(model_name) loads the Wav2Vec2Model component of the Wav2Vec2 architecture, which is used for generating representations of the audio signals. - i= feature_extractor(input_audio, return_tensors="pt", sampling_rate=sample_rate) normalizes the input audio signal input_audio by subtracting the mean and dividing by the standard deviation, and returns the normalized signal as a PyTorch tensor. The sampling_rate and return_tensors arguments are also passed. - with torch.no_grad(): is a context manager that disables gradient computation during the forward pass of the model, reducing memory usage and speeding up computation. - o= model(i.input_values) generates a representation of the normalized audio signal i.input_values using the Wav2Vec2Model component. The representation is returned as a dictionary o containing multiple outputs.
Issues with audio embedding using wav2vec
> I am wondering if this is because I am using a general XLSR model without fine-tuning it for emotion recognition. That might be still true but your approach contains a fundamental error you should eliminate first. You are using the class [Wav2Vec2FeatureExtractor](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2FeatureExtractor) to extract the features from an audio file, but this class is not a neural network. It is a preprocessor that pads and normalizes the floating point time series from librosa. As stated by the [documentation](https://github.com/huggingface/transformers/blob/820c46a707ddd033975bc3b0549eea200e64c7da/src/transformers/models/wav2vec2/feature_extraction_wav2vec2.py#L85) the normalization makes sure that the array has: > zero mean and unit variance These features, will therefore only makes sense for the model that was trained with it. When you trained an SVM with it, you actually compared if an SVM can beat wav2vec2 and not if wav2vec2 is better than MFCC! To get the actual embeddings from the wav2vec2 model, you can use the following code: ``` import librosa import torch from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model input_audio, sample_rate = librosa.load("/content/bla.wav", sr=16000) model_name = "facebook/wav2vec2-large-xlsr-53" feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained(model_name) model = Wav2Vec2Model.from_pretrained(model_name) i= feature_extractor(input_audio, return_tensors="pt", sampling_rate=sample_rate) with torch.no_grad(): o= model(i.input_values) print(o.keys()) print(o.last_hidden_state.shape) print(o.extract_features.shape) ``` Output: ``` odict_keys(['last_hidden_state', 'extract_features']) torch.Size([1, 1676, 1024]) torch.Size([1, 1676, 512]) ``` Please refer to this [StackOverflow post](https://stackoverflow.com/a/69275576) for the difference between `last_hidden_state` and `extract_features`. As you can see, the features are multi-dimensional for my file ([bacth_size, seq_len, hidden_size]), which means you probably want to apply some pooling (e.g. mean). P.S.: Another point that comes to my mind when I look at your question, is if those embeddings are actually meaningful by themselves. For the pure BERT, [we know](https://github.com/google-research/bert/issues/164#issuecomment-441324222) that the sentence embeddings: > I'm not sure what these vectors are, since BERT does not generate meaningful sentence vectors. I assume you will probably need to fine-tune wav2vec2 a bit. You can use huggingfaces [Wav2Vec2ForSequenceClassification](https://huggingface.co/docs/transformers/model_doc/wav2vec2#transformers.Wav2Vec2ForSequenceClassification) class for that.
118135
1
118415
null
0
101
I'm using TensorFlow decision forest to predict the suitable crop based on few parameters. How do i get the predict() method to return the label ? Im using [this](https://www.kaggle.com/datasets/atharvaingle/crop-recommendation-dataset) dataset for training My code ``` import tensorflow_decision_forests as tfdf import tensorflow as tf import pandas as pd import numpy as np df = pd.read_csv("Crop_recommendation.csv") #TensorFlow dataset train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df,label="label") # Train the model model = tfdf.keras.RandomForestModel() model.fit(train_ds) print(model.summary()) pd_serving_dataset = pd.DataFrame({ "N": [83], "P": [45], "K" : [30], "temperature" : [25], "humidity" : [80.3], "ph" : [6], "rainfall" : [200.91], }) tf_serving_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(pd_serving_dataset) prediction = model.predict(tf_serving_dataset) print(prediction) ``` My Output ``` 1/1 [==============================] - 0s 38ms/step [[0. 0. 0. 0. 0.02333334 0.07666666 0.04666667 0. 0.08333332 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.7699994 0. ]] ``` Expected Output `rice`
TensorFlow random forest get label as prediction output
CC BY-SA 4.0
null
2023-01-29T14:27:34.820
2023-02-09T13:11:05.260
null
null
145360
[ "tensorflow", "random-forest" ]
The same question was asked on [Stack Overflow](https://stackoverflow.com/questions/75287621/tensorflow-random-forest-get-label-as-prediction-output/75398674); copying my answer from there. For a classification problem, Tensorflow Decision Forests returns the probabilities for each class as a numpy array. If you need the class names, you have to find the class with the highest probability and map it back to its name. Since Keras expects class names to be integers, TF-DF converts them silently during `tfdf.keras.pd_dataframe_to_tf_dataset` by sorting and mapping to integers. To get the names, you therefore have to revert the mapping. Overall, you would get
```
classification_names = sorted(df["label"].unique().tolist())
prediction = model.predict(tf_serving_dataset)
class_predictions = list(map(lambda x: classification_names[x], list(np.argmax(prediction, axis=1))))
# Since you only predicted a single example, class_predictions = ['rice']
```
Warning: TF-DF might change the way `tfdf.keras.pd_dataframe_to_tf_dataset` maps classes to integers at some point in the future. It would be more prudent to perform the mapping yourself by preprocessing the pandas dataframe with
```
classes = df[label].unique().tolist()
print(f"Label classes: {classes}")
df[label] = df[label].map(classes.index)
```
For more information on how to make (fast) predictions with TF-DF, you can also check out the [TF-DF predictions colab](https://www.tensorflow.org/decision_forests/tutorials/predict_colab).
how to get prediction from trained random forest model?
Use the CountVectorizer you have fitted to preprocess your custom input then feed it to your model for prediction. ``` custom_input = ['insert text here'] custom_input = count_vectorizer.transform(custom_input) custom_prediction = random_forest.predict(custom_input) ```
118166
1
118177
null
0
29
I have a data set with 96 rows. It contains date, source, spend, and number of customers. I have 4 different sources that generate customers, and you can see in the dataset how much I spent and how many customers I received by month for the last two years. I have a budget for 2023 for each source by month, and I would like to know how many customers I will get in 2023. EXAMPLE DATA SET [Click to see data](https://docs.google.com/spreadsheets/d/1WJAt8fvlF26QlbNxqPwaMtzLeI1qg927i-zfEIJZSZk/edit#gid=0) I have tried a few models, such as multiple linear regression, but they didn't perform well. Any ideas? Thanks
ML Predicted Model for 2 values
CC BY-SA 4.0
null
2023-01-30T22:20:22.347
2023-01-31T10:46:24.140
2023-01-30T22:33:12.893
145398
145398
[ "machine-learning", "python", "data-science-model", "unsupervised-learning" ]
This is a nice example of (rather small-scale) time series analysis, and given how small the dataset is, you will have a hard time getting good results in the first place. I would default to AR(I)MA analysis, given that you have only two years of data, which does not allow for reliable detection of seasonality. Build a separate, as-good-as-possible ARIMA model for every source and make the prediction the sum over the sources (the expected value of the sum is the same as the sum of the expected predictions). I have not seen a clearer example of time series analysis; just pick the modelling method properly (forget about trees on such a small dataset, especially since you are predicting a continuous target).
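A hedged sketch of that per-source approach with statsmodels (the column names follow the question's description; the ARIMA order is only a starting point to tune per source):
```
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

df = pd.read_csv("customers.csv", parse_dates=["date"])  # date, source, spend, customers

horizon = 12  # months of 2023 to forecast
forecasts = {}

for source, grp in df.groupby("source"):
    series = grp.sort_values("date").set_index("date")["customers"]
    model = ARIMA(series, order=(1, 1, 1))  # tune (p, d, q) per source;
    fitted = model.fit()                    # planned spend could be added via the exog argument
    forecasts[source] = fitted.forecast(steps=horizon)

# Expected total customers per month = sum of the per-source forecasts.
total_forecast = sum(forecasts.values())
print(total_forecast)
```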
Predict Two Variables vs Predict One Variable with Two Models
I think OP meant a multi-class model that predicts an outcome variable with multiple classes versus building multiple separate binary classification models for each class. Indeed these two modeling techniques are different, and should be used differently according to the problem. ## Multi-class Classification Problems These are problems where you have to assign cases to a dependent variable with multiple categories/classes/outcome. Classes are mutually exclusive, meaning that observations can be assigned to only one category at a time (estimated probability for every category of $Y$ adds up to one). Many algorithms exists for this type of problem. For example Random Forest, Multinomial Logistic Regression, Boosted Trees, Linear Discriminant Analysis, etc. Each with their own set of assumptions. In the muli-class case, multinomial logisitc regression actually picks a "pivot" class and runs a binary logistic regression for each class regressing on the pivot class. ## Multi-label Classification Problems These are problems where each case can be assigned to more than one category. The dependent variable is still multi-class, but we cannot use a multi-class classification model because the categories are not mutually exclusive. There exist several ways to deal with multi-label problems, one of which is to transform a multi-label problem to multiple distinct binary classification problems. This is known as a binary relevance transformation. This can be done by simply treating each class of the dependent variable as a binary outcome (in that specific class or not), and running a binary classification method (like binary logistic regression) on each of them. This is similar to the multinomial logsitic regression case because multiple binary logistic regressions are run, but it is also different because each model in this case is independent and do not depend on a chosen pivot class. ## Multi-class or Multi-label? To answer your question: "Multi-class to predict a multi-category dependent or several binary models to predict each class?". It really depends on your problem. Do you want to assign multiple classes to each observation? If so, you have a multi-label problem and the binary relevance transformation is a good way to model. Are categories in your dependent variable instead mutually exclusive? In this case, use multi-class classification models. Note: The downside of binary relevance is that it treats each category of the dependent variable to be independent, thus ignoring any dependencies across different classes. A label powerset transformation is a good alternative. This transforms the dependent variable into a multi-class variable with each class representing the occurrence of a combination of the original classes. A multi-class model is then run on these new combination classes. This takes into account the co-occurrence of classes not just single occurrences.
118170
1
118252
null
0
48
As part of a statistical learning research paper I am collaborating on, I am running/fitting two hundred sixty thousand different LASSO regressions on the same number of different randomly generated synthetic data sets and calculating some standard classification performance measurements (True Positive Rate, True Negative Rate, and False Positive Rate), so that I can use these measurements as benchmark performance metrics against which to compare, on the same 260K data sets, the novel supervised statistical learning algorithm for variable selection being evaluated in the paper/study. I am going to use the statistical programming language R for this purpose because it is the language suitable for this task that I am by far most comfortable with. What would be the best function and corresponding package to use for this task? I will accept any suggestion BESIDES the enet function from the elasticnet package because I have had issues working with this function in the past. p.s. I understand that in order to get it to work on the 260K data sets sequentially, whatever function turns out to be best suited will need to be implemented within an lapply or a parLapply function.
The ideal function in R for fit fitting n LASSO Regressions on n data sets
CC BY-SA 4.0
null
2023-01-31T05:32:18.903
2023-02-03T00:25:44.997
null
null
105709
[ "machine-learning", "r", "linear-regression", "research", "lasso" ]
Brian Spiering's answer was correct, Marlen; however, I clicked on the link you posted in a comment below his answer and spotted the problem. You have alpha = 0 instead of alpha = 1. Not sure whether this was a typo or a mix-up over which value means what, but for LASSO, alpha must be set equal to 1 in your glmnet call (alpha = 0 corresponds to ridge regression). So, what you want is:
```
# fitting the n LASSO Regressions using glmnet
set.seed(11) # to ensure replicability
system.time(LASSO.fits <- lapply(datasets, function(i)
 glmnet(x = as.matrix(select(i, starts_with("X"))),
 y = i$Y, alpha = 1)))
```
Hope it works for you!
Ridge and Lasso Regularization
The penalty of both Lasso and Ridge is proportional to the magnitude of the weights. That is, the penalization added to the cost function is $\lambda ||\omega||_2^2$ (Ridge) or $\lambda ||\omega||_1$ (Lasso). As for whether it is more convenient to apply regularization or feature selection, Lasso already does some feature selection for you, as the estimated weights for Lasso are sparse (there will be many coefficients equal to 0). About multicollinearity, Ridge tends to eliminate variables that are collinear, while Lasso doesn't. All that I have said is taken from the book An Introduction to Statistical Learning, by James, Witten, Hastie and Tibshirani.
118217
1
118240
null
0
60
I am getting confused about the testing dataset of a VAE. After training the VAE, what should the testing dataset of the VAE be? I understand that during testing the VAE only has the decoder part. Hence, we need to give it inputs from the latent space. But what should that input be? It can't be any random set of numbers, right? Thanks for the help
What is the dataset during testing a Variational auto-encoder?
CC BY-SA 4.0
null
2023-02-01T15:11:04.187
2023-02-02T11:10:09.407
null
null
145110
[ "deep-learning", "autoencoder", "vae" ]
When building your VAE model, you should use the usual [training/testing datasets](https://en.wikipedia.org/wiki/Training,_validation,_and_test_data_sets) to train/evaluate your VAE model's performance. That means you validate your model using the testing dataset on both the encoder and the decoder parts.

> I understand that during testing the VAE only has the decoder part.

You're probably referring to one common use of a VAE, which is to generate new samples using only the decoder part, but that comes after building the model. I hope it helps.
Transform an Autoencoder to a Variational Autoencoder?
Yes. Two changes are required to convert an AE to VAE, which shed light on their differences too. Note that if an already-trained AE is converted to VAE, it requires re-training, because of the following changes in the structure and loss function. Network of AE can be represented as $$x \overbrace{\rightarrow .. \rightarrow y \overset{f}{\rightarrow}}^{\mbox{encoder}} z \overbrace{\rightarrow .. \rightarrow}^{\mbox{decoder}}\hat{x},$$ where - $x$ denotes the input (vector, matrix, etc.) to the network, $\hat{x}$ denotes the output (reconstruction of $x$), - $z$ denotes the latent output that is calculated from its previous layer $y$ as $z=f(y)$. - And $f$, $g$, and $h$ denote non-linear functions such as $f(y) = \mbox{sigmoid}(Wy+B)$, $\mbox{ReLU}$, $\mbox{tanh}$, etc. These two changes are: - Structure: we need to add a layer between $y$ and $z$. This new layer represents mean $\mu=g(y)$ and standard deviation $\sigma=h(y)$ of Gaussian distributions. Both $\mu$ and $\sigma$ must have the same dimension as $z$. Every dimension $d$ of these vectors corresponds to a Gaussian distribution $N(\mu_d, \sigma_d^2)$, from which $z_d$ is sampled. That is, for each input $x$ to the network, we take the corresponding $\mu$ and $\sigma$, then pick a random $\epsilon_d$ from $N(0, 1)$ for every dimension $d$, and finally compute $z=\mu+\sigma \odot \epsilon$, where $\odot$ is element-wise product. As a comparison, $z$ in AE was computed deterministically as $z=f(y)$, now it is computed probabilistically as $z=g(y)+h(y)\odot \epsilon$, i.e. $z$ would be different if $x$ is tried again. The rest of network remains unchanged. Network of VAE can be represented as $$x \overbrace{\rightarrow .. \rightarrow y \overset{g,h}{\rightarrow}(\mu, \sigma) \overset{\mu+\sigma\odot \epsilon}{\rightarrow} }^{\mbox{encoder}} z \overbrace{\rightarrow .. \rightarrow}^{\mbox{decoder}}\hat{x},$$ - Objective function: we want to enforce our assumption (prior) that the distribution of factor $z_d$ is centered around $0$ and has a constant variance (this assumption is equivalent to parameter regularization). To this end, we add a penalty per dimension $d$ that punishes any deviation of latent distribution $q(z_d|x) = N(\mu_d, \sigma_d^2)$$= N(g_d(y), h_d(y)^2)$ from unit Gaussian $p(z_d)=N(0, 1)$. In practice, KL-divergence is used for this penalty. At the end, the loss function of VAE becomes: $$L_{VAE}(x,\hat{x},\mu,\sigma) = L_{AE}(x, \hat{x}) + \overbrace{\frac{1}{2} \sum_{d=1}^{D}(\mu_d^2 + \sigma_d^2 - 2\mbox{log}\sigma_d - 1)}^{KL(q \parallel p)}$$ where $D$ is the dimension of $z$. Side notes - In practice, since $\sigma_d$ can get very close to $0$, $\mbox{log}\sigma_d$ in objective function can explode to large values, so we let the network generate $\sigma'_d = \mbox{log}\sigma_d = h_d(y)$ instead, and then use $\sigma_d = exp(h_d(y))$. This way, both $\sigma_d=exp(h_d(y))$ and $\mbox{log}\sigma_d=h_d(y)$ would be numerically stable. - The name "variational" comes from the fact that we assumed (1) each latent factor $z_d$ is independent of other factors, i.e. we ignore other $(\mu_{d'}, \sigma_{d'})_{d' \neq d}$ when we sample $z_d$, and (2) $z_d$ follows a Gaussian distribution. In other words, $q(z|x)$ is a simplified variation to the true (and probably a more complex) distribution $p(z|x)$.
118230
1
118346
null
0
41
[ModelCheckpoint](https://keras.io/api/callbacks/model_checkpoint/)

> save_best_only: if save_best_only=True, it only saves when the model is considered the "best" and the latest best model according to the quantity monitored will not be overwritten. If filepath doesn't contain formatting options like {epoch} then filepath will be overwritten by each new better model.

[EarlyStopping](https://keras.io/api/callbacks/early_stopping/)

> restore_best_weights: Whether to restore model weights from the epoch with the best value of the monitored quantity. If False, the model weights obtained at the last step of training are used. An epoch will be restored regardless of the performance relative to the baseline. If no epoch improves on baseline, training will run for patience epochs and restore weights from the best epoch in that set.

---

If I train my model, save the best model, and restore the weights of the best epoch...

- am I not doing the same thing twice? Would it not just produce two model files, one for the best epoch and one for the final model, both actually being the same?

Then, if this is correct, which would be the preferred method to use? (As I understand it, models are sometimes held in memory for EarlyStopping, but I am not sure about ModelCheckpoint.)
Tensorflow / Keras - Using both ModelCheckpoint: save_best_only and EarlyStopping: restore_best_weights
CC BY-SA 4.0
null
2023-02-02T00:38:47.597
2023-02-07T10:10:50.770
null
null
122323
[ "machine-learning", "neural-network", "keras", "tensorflow", "early-stopping" ]
The former saves the weights of the model at the epoch where it performed the best on the validation set, while the latter restores those saved weights into the model and use it for predictions. When you save the weights of a model using the ModelCheckpoint callback during training, the weights are saved to disk (e.g., to a .h5 file) at specified checkpoints (e.g., after every epoch). The purpose of saving the weights is to be able to restore them later for predictions, in case you need to stop the training for some reason, or if you want to use the weights for inference on a different dataset. Once the training is complete, you can restore the weights of the best performing model by loading them back into the model architecture, and then use the model for predictions. The difference between early stopping and saving the weights using ModelCheckpoint is that early stopping saves the weights automatically based on a criterion (the performance on the validation set), while ModelCheckpoint saves the weights at specified intervals (e.g., after every epoch). So, in the case of early stopping, you don't have to specify when to save the weights, because the algorithm stops training automatically and saves the weights when the performance on the validation set stops improving. On the other hand, with ModelCheckpoint, you have more control over when to save the weights, but you have to manually stop the training when the performance is no longer improving. In summary, saving the weights during training allows you to persist the state of the model, so that you can continue training or use the model for predictions later. In terms of preferred method, it depends on your use case. If you have limited memory, you may only keep the best model's weights in memory, and use the ModelCheckpoint to periodically save the best weights to disk. If memory is not a concern, you could keep all intermediate models in memory and use the EarlyStopping to stop training once the performance on the validation set stops improving.
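For completeness, here is a minimal runnable Keras sketch of the two callbacks used together; the tiny model, random data, file name, and patience value are placeholders for illustration only.

```
import numpy as np
import tensorflow as tf

# toy data and model purely for illustration
x_train, y_train = np.random.rand(256, 10), np.random.randint(0, 2, 256)
x_val, y_val = np.random.rand(64, 10), np.random.randint(0, 2, 64)
model = tf.keras.Sequential([tf.keras.layers.Dense(16, activation="relu"),
                             tf.keras.layers.Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")

callbacks = [
    # writes the best weights seen so far to disk whenever val_loss improves
    tf.keras.callbacks.ModelCheckpoint("best_model.h5", monitor="val_loss",
                                       save_best_only=True),
    # stops training after 5 stagnant epochs and restores the best weights in memory
    tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                     restore_best_weights=True),
]

model.fit(x_train, y_train, validation_data=(x_val, y_val),
          epochs=100, callbacks=callbacks)
```

With this combination, the model object left in memory after fit already holds the best weights, and best_model.h5 holds the same weights on disk, which matches the "doing the same thing twice, once in memory and once on disk" intuition from the question.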
Keras ModelCheckpoint Callback returning weights only even though both save_best_only & save_weights_only are set to False
EDIT: I don't believe I really answered your question. Setting save_best_only to False is supposed to let the model save after every specified epoch; this does not currently work. save_weights_only means it only saves the weights and not the full model. You would have to first define the model and then load the weights if you do this. If False, you could load the model without having to redefine it.

Yes, I believe this is a bug with Tensorflow. There was an open issue about this on the GitHub repo for Tensorflow, but I don't remember the link to the page. Effectively, if you do as you said, Tensorflow will still only save the model if there is an improvement in the neural network's performance. Furthermore, passing "epochs" selects how many times the neural network's performance needs to improve before saving the weights.

One theoretical workaround is to create your own Callback to save after every n epochs. It is not difficult to do. [Here](https://www.tensorflow.org/guide/keras/custom_callback#examples_of_keras_callback_applications) is the documentation. Some really rough pseudo code:

```
class SaveAtEpoch(keras.callbacks.Callback):
    def __init__(self, save_frequency, filepath):
        super(SaveAtEpoch, self).__init__()
        self.save_frequency = save_frequency
        self.filepath = filepath

    def on_epoch_end(self, epoch, logs=None):
        if epoch % self.save_frequency == 0:
            self.model.save(self.filepath)
```
118243
1
118247
null
1
51
I was just informed this community was a better fit for my [SO question](https://stackoverflow.com/questions/75316894/binary-classification-text-based-on-embedding-distance). I am wondering if I can use a Milvus or Faiss (L2 or IP or...) to classify documents as similar or not based on distance. I have [vectorized](https://towardsdatascience.com/hugging-face-transformers-fine-tuning-distilbert-for-binary-classification-tasks-490f1d192379) text from news articles and stored into Milvus and Faiss to try both out. What I don't want to do is retrain a model every time I add new article embeddings and have to worry about data set balance, do I have to change my LR, etc. I would like to store embeddings and return the Top1 result for each new article that I'm reading and if the distance is "close" save that new article to Milvus/Faiss else discard. Is that an acceptable approach to binary classification of text from your point of view? If so with DistilBert embeddings, is magnitude (L2) a better measurement or Orientation (IP)? When I say "close" this isn't a production idea for work just and idea that I can't think through or find other people explaining online, I would expect accuracy of "close" to be some ballpark threshold... [](https://i.stack.imgur.com/1fhoA.jpg) As a Cosine Similarity example (Figure1) if OA and OB exist in Milvus/Faiss DB and I search with a new embedding OC I would get OB closest to OC at 0.86 and if the threshold for keeping is say > 0.51 I would keep the 0C. As an L2 example (Figure1) if A' and B' exist in my Milvus/Faiss DB and I search for C' with a threshold of say < 10.5 I would reject C' as B' is closest to C' at 20.62. [Figure 1 - medium article](https://medium.com/@sasi24/cosine-similarity-vs-euclidean-distance-e5d9a9375fc8)
Binary Classification [Text] based on Embedding Distance?
CC BY-SA 4.0
null
2023-02-02T14:53:40.270
2023-02-02T16:58:07.790
2023-02-02T15:05:14.773
145516
145516
[ "machine-learning", "deep-learning", "classification", "word-embeddings" ]
There are two levels to your question:

- Conceptual - Yes, you can perform an approximate nearest neighbor search on text documents that have been embedded. What you call binary classification is more commonly called anomaly detection when the data is not labeled. Oftentimes in anomaly detection there is a threshold for similar or not.
- Implementation - Milvus is a database. Faiss is a vector library. The specific implementation will depend on how the system is architected.
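To illustrate the conceptual level with the threshold idea from the question, here is a small Faiss sketch using cosine similarity (inner product on L2-normalized vectors); the dimensionality, the 0.51 threshold, and the random vectors are placeholders standing in for real DistilBERT embeddings.

```
import numpy as np
import faiss

d = 768                                            # e.g. DistilBERT hidden size
db = np.random.rand(1000, d).astype("float32")     # stored article embeddings
faiss.normalize_L2(db)                             # after normalizing, IP == cosine similarity

index = faiss.IndexFlatIP(d)                       # exact inner-product index
index.add(db)

query = np.random.rand(1, d).astype("float32")     # embedding of a new article
faiss.normalize_L2(query)
scores, ids = index.search(query, 1)               # top-1 neighbour

THRESHOLD = 0.51                                   # placeholder "close enough" cut-off
if scores[0, 0] > THRESHOLD:
    index.add(query)                               # keep the article
# otherwise discard it
```

Whether raw L2 distance or inner product works better for DistilBERT vectors is ultimately empirical, but note that on normalized vectors the two produce the same ranking of neighbours.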
Use embeddings to find similarity between documents
There are several ways you can obtain document embeddings. If you want to obtain a vector for a document that is not part of the trained doc2vec model, gensim provides a method called `infer_vector` which allows you to map new documents to embeddings. You can also use [bert-as-service](https://github.com/hanxiao/bert-as-service) to generate sentence-level embeddings. I would recommend using Google's [Universal Sentence Encoder](https://tfhub.dev/google/universal-sentence-encoder/4) (USE) model to generate sentence embeddings if your goal is to find some sort of similarity between sentences or documents. There are ways you can combine sentence-level embeddings into a document-level one; the first thing to try would be to take the mean, or you could generate sentence embeddings for a sliding window over the document and take the mean of that. The reason I recommend USE over BERT is that USE was trained specifically for sentence similarity tasks, whereas BERT, even though it can be applied to any NLP task, was originally trained to predict words in a sentence or complete a sentence. You might find this [link](https://blog.floydhub.com/when-the-best-nlp-model-is-not-the-best-choice/) helpful; it draws a great comparison between USE and BERT and explains why it is important to choose a model based on the task.
118271
1
118282
null
0
24
First, excuse any naive statements you may find below; I'm a newcomer to the field.

How do web applications that integrate fine-tuning of large machine learning/deep learning models handle the storage and retrieval of these models for inference? I'm trying to implement a web app that allows users to fine-tune a Stable Diffusion model using their own images with DreamBooth, and the fine-tuned model is quite large, reaching several gigabytes. After the model is trained and saved, the app should retrieve and use the model for inference each time a user visits the site and requests one.

The current approach I am considering is to store the fine-tuned model in a compressed format in an S3 or R2 bucket. Each time a user visits the web app and requests an inference, I would retrieve the model from the bucket, decompress it, and run the inference. That being said, adding the overhead of fetching + decompression to inference is obviously not a good idea. I'm fairly sure that there's a standard approach that the machine learning community follows for handling such scenarios. What are those approaches, if they exist? How are these scenarios typically handled?
Best practices for serving user-specific large models in a web application?
CC BY-SA 4.0
null
2023-02-03T15:47:42.580
2023-02-04T08:42:49.183
null
null
145559
[ "python", "training", "generative-models", "inference", "consumerweb" ]
No idea about standard approaches, but one option you have is: instead of fine-tuning the whole model, fine-tune only a part of it. For instance, you may fine-tune only the last few layers. This way, you can keep the common part of the model loaded, load just the small fine-tuned part, and combine them to perform inference. This would reduce both storage space and decompression time, at the cost of more complex code logic. Of course, you should first determine the minimum fine-tuned parts of the model that let you get the desired output quality.
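To illustrate the "fine-tune only a part" idea, here is a generic PyTorch sketch; the model, layer sizes, and file name are hypothetical placeholders, not the actual Stable Diffusion/DreamBooth setup. The pattern is: freeze the shared backbone, train only a small head, and persist just that head per user.

```
import torch
import torch.nn as nn

# hypothetical model: a large shared backbone plus a small per-user head
backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 256))
head = nn.Linear(256, 10)

# freeze the backbone so only the head's weights change during fine-tuning
for p in backbone.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(head.parameters(), lr=1e-4)
# ... fine-tune `head` on the user's data here ...

# persist only the small per-user part (a few MB instead of several GB)
torch.save(head.state_dict(), "user_123_head.pt")

# at inference time: keep `backbone` resident in memory, load only the tiny head
head.load_state_dict(torch.load("user_123_head.pt"))
with torch.no_grad():
    out = head(backbone(torch.randn(1, 512)))
```

For diffusion models specifically, parameter-efficient approaches such as LoRA follow the same pattern: only a small set of adapter weights is stored per user, while the base model stays loaded.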
Large Scale Personalization - Per User vs Global Models
The answer to this question is going to vary pretty wildly depending on the size and nature of your data. At a high level, you could think of it as a special case of multilevel models; you have the option of estimating a model with complete pooling (i.e., a universal model that doesn't distinguish between users), models with no pooling (a separate model for each user), and partially pooled models (a mixture of the two). You should really read Andrew Gelman on this topic if you're interested. You can also think of this as a learning-to-rank problem that either tries to produce point-wise estimates using a single function or instead tries to optimize on some list-wise loss function (e.g., NDCG). As with most machine learning problems, it all depends on what kind of data you have, the quality of it, the sparseness of it, and what kinds of features you are able to extract from it. If you have reason to believe that each and every user is going to be pretty unique in their behavior, you might want to build a per-user model, but that's going to be unwieldy fast -- and what do you do when you are faced with a new user?
118342
1
118694
null
0
67
I want to build an image generator model for interior room design. This model should be able to generate an interior image of a living room/bedroom/hall/kitchen/bathroom. I have searched around and found the following websites.

[https://interiorai.com/](https://interiorai.com/)

[https://image.computer/](https://image.computer/)

And I made this picture when visiting [https://image.computer](https://image.computer).

[](https://i.stack.imgur.com/hrHuP.png)

The above result is exactly what I want, but the free account was restricted to 10 credit images. Also, the input doesn't have to be a sentence; options are enough for me (e.g. style: modern, type: living, furniture: [TV: wide, Curtain: white, Window: 3]). So I decided to google for a pre-trained interior design generator model, and finally gave up.

I would like to build a tensorflow (or keras) model that acts just like `image.computer`. Please help me find a model or build one. Any support or help would be appreciated.
How to build an image generation model for interior room design?
CC BY-SA 4.0
null
2023-02-06T23:49:42.333
2023-02-23T15:54:56.347
2023-02-23T15:54:56.347
144975
144975
[ "machine-learning", "neural-network", "tensorflow", "cnn", "gan" ]
As I commented above, I found out the relevant github repository and tested on my laptop. The repository was [https://github.com/pixray/pixray.git](https://github.com/pixray/pixray.git) I tested with the following text. "a white large bedroom with a wide sofa and gray curtain" And the result was as follows: [](https://i.stack.imgur.com/pblpK.png)
Using Generative Adversarial Networks for a generation of image layer
In terms of generating an image "layer", that is just the same as generating an output image that can be overlaid on the input using standard graphics software. If you want pixel-level accuracy in the output then the output will need to be the same size as the input, otherwise it could be smaller, provided it is the same aspect ratio, and in which case it would need to be scaled up in order to be used as an overlay. In any case, as the output of a GAN can be an image (and often is), then this part is easy. The "G" in GAN stands for Generative. The purpose of a generative network is to create samples from a population, where there are typically many possibilities. Those samples can be conditioned on some additional data, and that additional data could be an image, although many examples will simpler conditioning, such as a category that the training output is representative of. One possibility for using a GAN, is where your population contains a range of traits, and you can calculate vectors that control that trait. So you can take an input image, reconstruct it in the GAN then modify it by adding/subtracting the trait-related vector. An interesting example of this is [Face Aging With Conditional Generative Adversarial Networks](https://arxiv.org/pdf/1702.01983.pdf), and similar examples are around of adding/removing glasses etc. For this to work for you, you would literally need images that had your points-of interest in them and ones without them, and then you would be able to control addition/removal of points-of-interest. The network would not detect these points in the input, instead it would add them into the output. From reading your question this does not seem to be what you want. A [similar paper uses a GAN to remove rain from photos, based on training many images with and without rain](https://arxiv.org/abs/1701.05957) then learning the "rain vector", encoding new images with rain in them into the GAN's internal representation and subtracting this "rain vector". GANs conditioned on input images (as opposed to categories or internal embeddings) are also possible - this [example of image completion](http://cs231n.stanford.edu/reports2016/209_Report.pdf) might be closer to your goal. If your points of interest are variable with many options feasible, then it could work for you. However, if your points of interest are supposed to always be the same pixels in each image, then your goal might be better defined by strict ground truth and become more like semantic segmentation, which can be attempted with variations on CNNs, such as [described in this paper by Microsoft](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Dai_Convolutional_Feature_Masking_2015_CVPR_paper.pdf). These are much easier to set up and train than GANs, so if you can reasonably frame your problem as pixel classification from the original image, this is probably the way to go.
118380
1
118382
null
1
27
In this [article](https://frankzliu.com/blog/understanding-neural-network-embeddings), the author creates a graph (at the end of the post) from the embeddings of different words found by a transformer model. I would like to do a similar thing for a convolutional neural network in order to be able to evaluate clusters. The final objective is to be able to identify images in the training set that are similar to a given image. I thought about extracting the hidden representation created by one of the hidden layers and reducing the dimensions to 2 using something like PCA. I have some doubts:

- Is this strategy sound?
- Which layer should I use? Should I use the last one, as when creating a global Class Activation Map?

[](https://i.stack.imgur.com/vR9Hj.png)
Visualizing convolutional neural networks embedding
CC BY-SA 4.0
null
2023-02-08T13:57:36.187
2023-02-08T14:50:03.627
null
null
145558
[ "neural-network", "convolutional-neural-network", "visualization", "embeddings" ]
Yes, the approach you propose is sound and applied widely. Instead of PCA, I suggest using [U-MAP](https://umap-learn.readthedocs.io/en/latest/), which will probably yield better results (also better than t-SNE). The representation you may use as input to U-MAP is the output of the last layer before the projection to the label-space dimensionality (e.g. with a 5-class classifier, you would take the vector representation before the projection to the 5-dimensional space).
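As a rough sketch of that pipeline, assuming a trained Keras CNN called `model` with images `x_train` and labels `y_train` (these are assumptions, not objects from the question) and the umap-learn package:

```
import umap
import tensorflow as tf
import matplotlib.pyplot as plt

# take the activations of the penultimate layer as the embedding
feature_extractor = tf.keras.Model(inputs=model.input,
                                   outputs=model.layers[-2].output)
features = feature_extractor.predict(x_train)        # shape: (n_samples, n_features)

embedding_2d = umap.UMAP(n_components=2).fit_transform(features)

plt.scatter(embedding_2d[:, 0], embedding_2d[:, 1], c=y_train, s=3, cmap="tab10")
plt.show()
```

For the "find similar images" goal itself, nearest-neighbour search on the raw (un-reduced) features usually works better than searching in the 2-D projection, which is mainly useful for visually inspecting clusters.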
Visualizing deep neural network training
The closest thing I know is [ConvNetJS](http://cs.stanford.edu/people/karpathy/convnetjs/):

> ConvNetJS is a Javascript library for training Deep Learning models (mainly Neural Networks) entirely in your browser. Open a tab and you're training. No software requirements, no compilers, no installations, no GPUs, no sweat.

Demos on this site plot weights and how they change with time (bear in mind that there are many parameters, as practical networks do have a lot of neurons). Moreover, if you are not satisfied with their plotting, there is access to the network's parameters and you can plot them as you wish (since it is JavaScript).
118416
1
118418
null
1
43
I would like to classify text inputs into predefined categories. From what I understand, unsupervised approaches are unfeasible if my target labels are something very rare in pretrained models (I have labels about specific industrial processes). Is this true? Otherwise, I could try an approach in which I label, for example, 1000 input texts using all the different labels and use a supervised approach with very little labeled data. Would this help the learning process in some way? And what methods could I use in this case?
Topic classification on text data with no/few labels
CC BY-SA 4.0
null
2023-02-09T14:06:14.093
2023-02-09T14:28:15.957
null
null
145759
[ "nlp", "unsupervised-learning", "supervised-learning", "text-classification", "semi-supervised-learning" ]
A feasible approach would be to take a pre-trained model, like BERT, and fine-tune it on a small labeled dataset. For that, you may use Huggingface's Transformers, which makes all the steps in the process relatively easy (see their tutorial on doing exactly that: [https://huggingface.co/docs/transformers/training](https://huggingface.co/docs/transformers/training))
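As a rough sketch of that fine-tuning workflow (following the linked tutorial; the checkpoint name, label count, and the `train_ds`/`eval_ds` dataset objects are placeholders for your own small labeled set):

```
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

checkpoint = "bert-base-uncased"          # any suitable pretrained checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=10)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length")

# train_ds / eval_ds are assumed to be datasets.Dataset objects with
# "text" and "label" columns built from the ~1000 labeled examples
train_ds = train_ds.map(tokenize, batched=True)
eval_ds = eval_ds.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="clf", num_train_epochs=3),
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```

The exact training arguments will need tuning; with roughly 1000 labeled examples, a few epochs and a small learning rate are typical starting points.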
Classifying text documents using linear/incremental topics
If you want these output dimensions to be continuous, simply convert your size and relevance metrics to real-valued targets. Then you can perform [regression](https://en.wikipedia.org/wiki/Regression_analysis) instead of classification, using any of a variety of models. You could even attempt to train a multi target neural net to predict all of these outputs at once. Additionally, you might consider first using a [topic model](https://en.wikipedia.org/wiki/Topic_model) such as [LDA](https://en.wikipedia.org/wiki/Latent_Dirichlet_allocation) as your feature space. Based on the values, it sounds like the "relevance" might be a variable best captured by techniques from [sentiment analysis](https://en.wikipedia.org/wiki/Sentiment_analysis).
118417
1
118443
null
0
213
How can I train a question-answering ML model with a custom dataset? I have gathered nearly 110GB of text data, containing documentation manuals for software products and I am looking into different ML algorithms for question-answering. The problem is that I don't know how to process these files to make a dataset that will be later used to train the model. There are no labels or anything in the data.
Train question answering model with custom dataset
CC BY-SA 4.0
null
2023-02-09T14:25:06.160
2023-02-10T18:07:16.357
null
null
116159
[ "python", "transformer", "question-answering", "chatbot" ]
Question-answering models require training data where you explicitly have a question and the answer to it. If you don't have such data, then you should be looking into different types of models, probably retrieval-based chatbot techniques. One option to create a QA dataset would be to use large language models, either as a service like GPT-3 or on-premise using BLOOM or something alike, to create questions and answers from your data, and then train a QA model on that. Also, depending on what you want exactly as a result, you may use a large language model to generate text to use as the base for document retrieval, as they do in the article [Generate rather than Retrieve: Large Language Models are Strong Context Generators](https://arxiv.org/abs/2209.10063)(published at ICLR'23)
Models that are good for long answer generation given context and question and what datasets would be the best for training?
After the comment from Nicolas Martin, I found [gpt2 for qa pair generation](https://discuss.huggingface.co/t/gpt2-for-qa-pair-generation/759/4) which gave reasonable steps on how to utilize Question and Answering for GPT models and then I can specify min_length and max_length to create a long answer.
118440
1
118482
null
2
57
My problem here is that I want to predict failures in advance with respect to their occurrence. I have sensors mounted on my machine and with a certain frequency, they send data to my database. Sometimes the machine fails and I want to find some anomaly patterns in the data before the actual failure. The idea is that if I notice in data some weird behavior I can stop the machine and do some maintenance to avoid its failure. The only thing I have as a label for a timestamp is when the system is down because of the failure. So for each timestamp, I only know if my machine either is working or not because of the failure. I don't know if before the failure I have a sequence of timestamps in which sensors are normal or not. What type of algorithm would you use in this case? I know the problem is not pretty standard. Below I leave my current idea. Currently, I am thinking about using different LSTMs Autoencoders. I would like to represent in a compressed feature vector the representation of my "normal" input, and with the reconstruction error of the autoencoder, I can understand whether the input fed behaves normally or not. The problem is that I am not sure if my input is normal or not, so here one doubt is that using subsequences that happened very far from the failure could simulate my normal behavior. Then another doubt is that I would use different autoencoders because I want to model different time periods of the sequence (minutes, hours, days), and then ensemble these different time-window encoders. What do you think? Could this be feasible?
Early anomaly detection / Failure prediction on time series
CC BY-SA 4.0
null
2023-02-10T17:19:11.350
2023-02-13T23:33:25.370
null
null
145759
[ "time-series", "lstm", "predictive-modeling", "anomaly-detection", "autoencoder" ]
This is a standard problem in the field of Predictive Maintenance, and there are several ways to model it. The key question is whether there is predictive information present in the data stream at all. Not all failures have leading indicators, and the data is not always of the appropriate type or quality to capture such leading indicators. If you have a labeled dataset, then one can attempt to judge. Using domain knowledge to build a Failure Mode and Effects Analysis (FMEA), where one also adds "leading indicators" to each failure mode, can also be very useful. It can also help answer "which data would be appropriate".

Here are some of the approaches one could take (a rough sketch of the second framing follows below):

- Time-to-Failure prediction. Supervised learning, regression. X: time window of data preceding a failure, possibly long before the failure. Y: time until failure.
- Imminent failure prediction. Supervised learning, classification. X: time window of data some short time before the failure. Target: whether a failure occurred in the next time frame or not.
- Anomaly Detection. Unsupervised learning. Model the "normal" behavior, and treat all deviations from normal as abnormal - a potential failure. Use model design and hyperparameter tuning to try to tune the behavior towards the leading indicators.
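Here is a minimal, hypothetical pandas sketch of how labeled data for the imminent-failure framing could be constructed from a sensor table plus a list of failure timestamps; the column names, window length, and horizon are made-up placeholders, not values from the question.

```
import numpy as np
import pandas as pd

# hypothetical inputs: a regularly sampled sensor table and known failure times
rng = pd.date_range("2023-01-01", periods=1000, freq="min")
sensors = pd.DataFrame({"temp": np.random.randn(1000),
                        "vibration": np.random.randn(1000)}, index=rng)
failures = pd.to_datetime(["2023-01-01 08:00", "2023-01-01 14:30"])

WINDOW = 30    # minutes of history per sample
HORIZON = 60   # label = 1 if a failure occurs within the next 60 minutes

X, y = [], []
for end in range(WINDOW, len(sensors)):
    window = sensors.iloc[end - WINDOW:end]
    t_end = sensors.index[end]
    fails_soon = ((failures > t_end) &
                  (failures <= t_end + pd.Timedelta(minutes=HORIZON))).any()
    X.append(window.values.ravel())   # flatten the window into one feature vector
    y.append(int(fails_soon))

X, y = np.array(X), np.array(y)       # ready for any standard classifier
```

The same windowing logic, without the labels, also produces the "normal" subsequences needed to train the autoencoder-based anomaly detection variant.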
Anomaly detection on time series
For understanding the seasonality of time series data I would start with Holt-Winters Method or ARIMA. Understanding these algorithms will help with understand how time series forecasting works. [Time series forecasting](https://www.analyticsvidhya.com/blog/2018/02/time-series-forecasting-methods/) For unsupervised classification, I would start with something like k-means clustering for anomaly detection. [Anomaly Detection with K-Means Clustering](http://amid.fish/anomaly-detection-with-k-means-clustering) These links should be a good starting point, I hope this helps.
118447
1
118448
null
1
54
I want to start off by acknowledging that this may be a dumb-sounding question to someone with more machine learning experience than me, so please go easy. Here is the background. I am currently an undergraduate assisting with research/development on an ML-driven robotic chemistry system for synthesizing silver nanocrystals. Currently we have a huge, very old, and undocumented Python codebase that manages the robot and implements various ML techniques for predicting properties of nanocrystals given the concentrations of the reagents used for their synthesis.

Right now we are using multiple polynomial regression to predict the maximum absorbance wavelength of a nanocrystal using a set of varying reagent concentrations as input attributes. We want to develop new functionality to make this prediction in the reverse direction, i.e., given a target maximum absorbance wavelength, we want to predict the reagent concentrations which should be used in the reaction. Because of the messy and delicate state of the code and some time constraints, we can't take the easy route of making a new multi-target model. Here is a sample of the code we're using to implement the regression:

```
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

poly_regressor = PolynomialFeatures(degree=6)
X_poly = poly_regressor.fit_transform(X)
poly_reg_model = LinearRegression()
poly_reg_model.fit(X_poly, y)
```

Here X is usually 2-3 features depending on the experiment. The degree is always fixed at 6. Is there any way I could use the fitted `poly_reg_model`, or any parameter data stored within it, to make a sort of "reverse model" that would predict values of `X` given `y`? I know that this is a messy and possibly bad approach, but my hand is being forced. Thanks!
Can I use a fitted polynomial regression to make reverse predictions?
CC BY-SA 4.0
null
2023-02-10T23:57:53.423
2023-02-11T00:51:54.587
null
null
145822
[ "python", "scikit-learn", "regression" ]
If I'm understanding you correctly, you take an input x which, given a 6th-degree polynomial function f, gives you an output y, where your function f is the best fit to some training data? Polynomials are in general not [injective functions](https://en.wikipedia.org/wiki/Injective_function), which means that they are not reversible. Think of y=x^2, where y=4 could come from both x=+2 and x=-2. However, if you look at the [Fundamental theorem of algebra](https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra), you will see that your polynomial will have at least one, and more likely six (details in the article), roots. For your specific problem, for each value of y you will likely be able to find multiple candidates of x that solve for that specific value. Finding the actual values of x that are roots of your polynomial can be done numerically.
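To make the "done numerically" part concrete, here is a minimal single-feature sketch using numpy's polynomial tools; it is an illustration on synthetic data only (with two or three reagent concentrations, as in the question, the inversion becomes a multi-variable root-finding/optimization problem instead).

```
import numpy as np

# fit a toy degree-6 polynomial y = f(x) on synthetic data
x = np.linspace(0, 5, 50)
y = 0.3 * x**3 - 2 * x**2 + x + 5 + np.random.normal(0, 0.1, x.size)
coeffs = np.polyfit(x, y, deg=6)          # highest-degree coefficient first

# "reverse prediction": find x such that f(x) == y_target
y_target = 4.0
shifted = coeffs.copy()
shifted[-1] -= y_target                   # roots of f(x) - y_target
candidates = np.roots(shifted)

# keep only real roots inside the range the model was fitted on
real = candidates[np.isclose(candidates.imag, 0)].real
feasible = real[(real >= x.min()) & (real <= x.max())]
print(feasible)                           # possibly several candidate x values
```

As the answer notes, there can be zero, one, or several feasible candidates, so the downstream code has to decide which (if any) concentration to use.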
Using Linear Regression to Learn Polynomial Regression
It is quite simple to understand (and to implement using matrices). Consider a specific example (to generalise later). You have a polynomial function of a single feature $x$:

$$ f(x) = \omega_0 x^0 + \omega_1 x^1 + \ldots + \omega_n x^n $$

You can organise coefficients and features in vectors and get $f$ by a scalar product:

$$ \mathbf{\omega} = \begin{pmatrix} \omega_0 \\ \vdots \\ \omega_n \end{pmatrix}, \qquad \mathbf{x} = \begin{pmatrix} 1 \\ x \\ x^2 \\ \vdots \\ x^n \end{pmatrix}$$

Hence $$ f(x) = \omega^T\mathbf{x}.$$

This is nothing other than a multi-feature linear regression where the $i$-th feature is now the $i$-th power of $x$. In numpy, imagine you have an array of data `x`. To create the feature matrix corresponding to the vector $\mathbf{x}$ above, you can do (for $n=3$, for instance)

```
X = np.ones((len(x), 4))
X[:,1] = x
X[:,2] = np.power(x,2)
X[:,3] = np.power(x,3)
```

And then, using sklearn `LinearRegression`,

```
model = LinearRegression()
model.fit(X, y)
```

UPDATE: `sklearn` has recently introduced [PolynomialFeatures](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.PolynomialFeatures.html), which performs precisely the transformation I described in numpy (you asked in numpy, but this might be useful as well).
118481
1
118491
null
0
56
I am working on sequence-to-sequence tasks where the input is an `n`-length sequence of discrete values from a finite set S (say `{x | x is a non-negative integer less than 10}`). An example input sequence of length 5 is: `1 8 3 5 2`. The output is supposed to be some length preserving transformation of the input sequence (say reverse, shift, etc.). To be explicit, the tokens of the output sequence also come from the same set as the input sequence. For example, for the input sequence above, the reverse transformation produces the output sequence: `2 5 3 8 1`. I want the model to predict the output tokens exactly, so the task is closer to classification than regression. However, since the output is a sequence, we need to predict multiple classes (as many as the input length) for each input sequence. I searched for references but could not find a similar setting. Please link some suitable references you are aware of that may be helpful. I have the following questions for my use case: - What changes are needed such that the model works on discrete sequences as defined above? - What loss function would be appropriate for my use case? For 1), one might change the input sequence such that each token is replaced by an embedding vector (learned or fixed) and input that to the model. For the prediction, I was thinking of ensuring that the model produces a `n x k` length output (`n` = sequence length; `k` = `|S|` or the vocab size) and then using each of these `n` vectors to make a class prediction (from `k` classes). For 2), the loss could be a sum of `n` cross-entropy losses corresponding to the `n` classifications. Please help me with better answers to these two questions. Thank you. Edit: My setup is encoder-only (non-autoregressive prediction). Please account for this while answering the questions by suggesting approaches that are in line with the setup, if possible.
What loss function to use for predicting discrete output sequence given a discrete input sequence?
CC BY-SA 4.0
null
2023-02-12T20:56:14.993
2023-02-13T14:11:54.427
2023-02-13T14:11:54.427
145893
145893
[ "nlp", "regression", "multiclass-classification", "transformer", "sequence-to-sequence" ]
This is exactly the setting of neural machine translation. The typical architecture used nowadays for that is the [Transformer model](https://proceedings.neurips.cc/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf). It receives a discrete series of tokens, converts them to continuous vectors with an embedding layer, uses an encoder-decoder structure, and generates probabilities over the token space. The loss used for discrete outputs is categorical cross-entropy. You may also look into different decoding (i.e. token generation) strategies, like greedy decoding and beam search. You can find implementations, tutorials, and a vibrant community in the [HuggingFace Transformers library](https://huggingface.co/docs/transformers/index).
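Regarding the loss in the encoder-only, non-autoregressive setup described in the question, here is a small PyTorch sketch of the "n independent k-way classifications with cross-entropy" idea; the batch size, sequence length, and vocabulary size are arbitrary placeholders, and the random logits stand in for whatever the encoder produces.

```
import torch
import torch.nn as nn

batch, n, k = 4, 5, 10                     # batch size, sequence length, vocab size
logits = torch.randn(batch, n, k)          # what an encoder-only model would output
targets = torch.randint(0, k, (batch, n))  # the desired output sequence per example

# CrossEntropyLoss expects the class dimension second: (batch, k, n)
loss_fn = nn.CrossEntropyLoss()
loss = loss_fn(logits.transpose(1, 2), targets)
print(loss.item())
```

By default the loss averages over positions; passing reduction="sum" gives the literal sum-of-n-losses formulation from the question.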
Predict output sequence one at a time with feedback
After some more research, I believe this can be solved with a straightforward application of encoder-decoder networks. In [this](https://blog.keras.io/a-ten-minute-introduction-to-sequence-to-sequence-learning-in-keras.html) tutorial, we can simply replace `sampled_token_index` and `sampled_char` with `actual_token_index` and `actual_char`, computed accordingly. And of course, in our case it's `actual_word`. To summarize, we divide our training set into input/output pairs, where output examples begin with a `<START>` token and end with `<STOP>`, and train a sequence-to-sequence model on these pairs, as described in the tutorial. Then at inference time we feed the `<START>` token to the model to predict the next word. After we receive the actual next word, we feed the actual (observed) output so far into the model, and so on.

[](https://i.stack.imgur.com/YFIa8.png)

The other answers had some interesting ideas, but unfortunately they didn't really address how to deal with variable-length inputs and outputs. I believe seq2seq models based on RNNs are the best way to address this.
118492
1
118493
null
0
48
As the toronto book corpus is no longer available (or rather, only in lowercase), I am looking for an alternative dataset of comparable language variety, quality, and size. Any suggestions? The Gutenberg Standardized Corpus is too big and still requires lots of preprocessing.
Alternatives to Toronto Book Corpus
CC BY-SA 4.0
null
2023-02-13T11:46:08.543
2023-02-13T15:04:09.533
null
null
141192
[ "nlp", "dataset" ]
First, for context, I suggest that others who come across this question check [the writeup of a researcher who also tried to find the Toronto Book Corpus](https://gist.github.com/alvations/4d2278e5a5fbcf2e07f49315c4ec1110). There is a potential copy of the corpus shared by Igor Brigadir [on Twitter](https://twitter.com/IgorBrigadir/status/1095075607178870786), although it is not certain that it is the exact same corpus (see the [discussion](https://github.com/soskek/bookcorpus/issues/24#issuecomment-556024973)). HuggingFace datasets hosts [a copy of this corpus](https://huggingface.co/datasets/bookcorpus). As you noted, this version is in lowercase. There are other people who have replicated the corpus to some degree, like Shawn Presser, who shared it [on Twitter](https://twitter.com/theshawwn/status/1301852133319294976) ([download link](https://battle.shawwn.com/sdb/books1/books1.tar.gz)). [Here](https://github.com/soskek/bookcorpus) is some context for this replication and more info on the matter. This replication is NOT in lowercase. Also, [here](https://github.com/sgraaf/Replicate-Toronto-BookCorpus/blob/main/README.md) you can find the instructions and code to replicate it yourself. Finally, there is [this paper](https://openreview.net/forum?id=Qd_eU1wvJeu) studying the problems of the Toronto Book Corpus and its replications.
Are there any popular English corpus?
Finding corpora for NLP research can be hit and miss, my advice would be to study the availability of adequate data when deciding about the research direction, not afterwards. Of course this completely depends on the type of requirement for the data. In case you have to create your own corpus, design the corpus collection and annotation very carefully because papers with weaknesses in the data collection can be rejected (at least in selective venues). There's no particular problem about collecting text data from the web, as long as this can be justified (for instance social media is not a good source for grammatically correct sentences ;) ). Honestly I'm not aware of any simple way to find corpora. Here are some sources: - The Linguistic Data Consortium has a catalog of corpora, some free and some not. - ELDA also has a catalog, it's also a semi-commercial provider. - The LRE Map is a repository (also by ELDA) for people to register their research data and software. A major source of quality data are the various shared tasks which are often organized jointly with major conferences. It's very task specific though. For the rest it's often about following specific parts of the domain, for example if you find papers related to your task of interest check where the authors found their data, whether they make some data available on their webpage, etc. For phrasal verbs the [PARSEME Shared Task corpora](https://parsemefr.lis-lab.fr/parseme-st-guidelines/1.2/) might suit your needs.
118532
1
118613
null
0
140
I checked many posts to figure out how random forest (RF) learning algorithm (an ensemble of many decision trees (DT) constructed by [Rain forest algorithm](https://datascience.stackexchange.com/a/31959/69263)) within bagging select split points at each leaf. There are some close questions which are have not been answered in this matter: [ref1](https://stackoverflow.com/questions/75044832/how-do-the-splits-in-a-decision-tree-classifier-work), [ref2](https://stats.stackexchange.com/q/490185/240550). I know that some python package use [Splitter](https://stats.stackexchange.com/a/397482/240550) implementation, probably based on [Gini impurity](https://datascience.stackexchange.com/questions/89952/the-notation-of-splitslabel-under-random-forest) as they asked [here](https://stats.stackexchange.com/q/539999/240550) for how it works as well as its [reason](https://datascience.stackexchange.com/q/89455/69263) to use in DT. For example, [sklearn](/questions/tagged/sklearn) package uses CART algorithm to build RF [Ref1](https://datascience.stackexchange.com/a/68164/69263) [Ref2](https://datascience.stackexchange.com/a/109238/69263). Also, I found below posts without further explanation about how split points are selected as criteria over dataset exactly: > "... Distance is not a factor with trees - what matters is whether the value is greater than or less than the split point, not the distance from the split point." ref > ... unlike linear/logistic regression, RF doesn't work by distance (they work with finding good split for your features), so NO NEED for One-Hot Encoding. ref Considering an example for [continuous target variables](https://datascience.stackexchange.com/a/52912/69263) through dataframe, once I watched carefully [StatQuest: Random Forests Part 1 - Building, Using and Evaluating](https://youtu.be/J4Wdy0Wc_xQ), and the best I could find with a true example is this [video](https://youtu.be/v6VJ2RO66Ag): Knowing that the decision trees in the random forest are trained on different subsets of the training data ([using random features ("feature bagging")](https://datascience.stackexchange.com/q/20304/69263)), I used and developed the [bootstrapped tables or sub-dataframe (resampled but with replacement)](https://datascience.stackexchange.com/a/61908/69263) in the explanation for 1st DT for better understanding and find relationships in the video example: ![img](https://i.imgur.com/knwuUUb.png) Here I see 1st DT just both randomly selected variables from data: ``` x1 =< 4.9 #selected from `id=0` only x0 =< 4.3 #selected from `id=0` only ``` So how they selected from `id=0` only? I follow other bootstrapped tables: ![img](https://i.imgur.com/Ky6rGVJ.png) Then I see 2nd DT just both randomly selected variables from data: ``` x3 =< 4.6 #selected from `id=4` only # No x2 ?? ``` So how they selected just from `id=4` only? Is it a random `id`? I follow other bootstrapped tables: ![img](https://i.imgur.com/ur0Mbl4.jpg) Then I see 3rd DT, just both randomly selected variables from the data: ``` x2 =< 4.1 #selected from `id=0` or `id=2` not clear? x2 =< 4.1 #selected from `id=4` ``` Here is not clear to me why twice `x2` was involved in the 3rd tree without involving variable `x4`? 
Finally, I see the fourth DT in the video, from both randomly selected variables from the data: ![img](https://i.imgur.com/NHZVN9t.png) Then I see 3rd DT, just both randomly selected variables from the data: ``` x1 =< 4.4 #selected from `id=3` x2 =< 6.1 #selected from `id=1` ``` So Qs: - Based on which criteria split points are taken/selected for randomly selected pair variables? - Again, here twice, x1 was involved in the 4th tree without involving variable x1? Does it make sense? - I'm unsure what I marked based on my finding; explain the "criteria" of x > y, and if so, how it works. Sometimes both randomly selected pair variables are involved in DT; sometimes not. - I didn't get based on the pick split points y via id=#? Is there any rule, or is it just randomly picked? Can we say it is relative? - Final decision is being made when the end leaf is 0 or 1? --- My pythonic implementation: ``` #Generate data in the video example import pandas as pd d = {'x0' : pd.Series([4.3, 3.9, 2.7, 6.6, 6.5, 2.7], index=[0, 1, 2, 3, 4, 5]), 'x1' : pd.Series([4.9, 6.1, 4.8, 4.4, 2.9, 6.7], index=[0, 1, 2, 3, 4, 5]), 'x2' : pd.Series([4.1, 5.9, 4.1, 4.5, 4.7, 4.2], index=[0, 1, 2, 3, 4, 5]), 'x3' : pd.Series([4.7, 5.5, 5.0, 3.9, 4.6, 5.3], index=[0, 1, 2, 3, 4, 5]), 'x4' : pd.Series([5.5, 5.9, 5.6, 5.9, 6.1, 4.8], index=[0, 1, 2, 3, 4, 5]), 'y' : pd.Series([0.0, 0.0, 0.0, 1.0, 1.0, 1.0], index=[0, 1, 2, 3, 4, 5]), } df = pd.DataFrame(d) print(df) # x0 x1 x2 x3 x4 y #0 4.3 4.9 4.1 4.7 5.5 0.0 #1 3.9 6.1 5.9 5.5 5.9 0.0 #2 2.7 4.8 4.1 5.0 5.6 0.0 #3 6.6 4.4 4.5 3.9 5.9 1.0 #4 6.5 2.9 4.7 4.6 6.1 1.0 #5 2.7 6.7 4.2 5.3 4.8 1.0 #Create RF model import numpy as np import pandas as pd import matplotlib.pyplot as plt plt.style.use('ggplot') %matplotlib inline import random from pprint import pprint import pdb from sklearn import tree from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier #Create and fit RF model with 4 trees rf = RandomForestClassifier(n_estimators=4, random_state=0) rf.fit(df.iloc[:,0:5], df.iloc[:,-1]) #len(rf.estimators_) #4 ``` ``` #Plot the trees for monitoring split points in 1/4 of trees (1st tree) tree.plot_tree(rf.estimators_[0]) ``` ![img](https://i.imgur.com/TqBlfOF.jpg) ``` #Plot the trees for monitoring split points in 2/4 of trees (2nd tree) tree.plot_tree(rf.estimators_[1]) ``` ![img](https://i.imgur.com/ttiJL5Z.jpg) ``` #Plot the trees for monitoring split points in 3/4 of trees (3rd tree) tree.plot_tree(rf.estimators_[2]) ``` ![img](https://i.imgur.com/jyHXOx0.jpg) ``` #Plot the trees for monitoring split points in 3/4 of trees (4th tree) tree.plot_tree(rf.estimators_[3]) ``` ![img](https://i.imgur.com/tnXKcHb.jpg)
How do the splits points in a decision tree within Random Forest are taken/selected? (Base on which criteria?)
CC BY-SA 4.0
null
2023-02-14T23:34:25.253
2023-02-19T10:11:50.500
2023-02-19T08:32:01.460
69263
69263
[ "machine-learning", "random-forest", "decision-trees", "cart" ]
You're asking multiple questions, so I will try to answer them all, and then give a piece of code that I used for exploration. First, here is a summary of how a [RandomForestClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html) works: - it creates subdataset from the original, using random subsampling of samples and features. - it fits DecisionTreeClassifier based on each subdataset. to select the best threshold from the subfeatures that leads to the best split, multiple thresholds are evaluated using a criteria. See the first answer for details. - perform a majority vote or an average prediction based on all decision trees prediction Based on which criteria split points are taken/selected for randomly selected pair variables? For every feature available for a node1, separation criteria are computed for multiple thresholds. They are multiple possible implementations for them: - use all data values as thresholds (which seem to be used in the video) - use the middle points of all successive data values (used by sklearn here) So using your data and [sklearn](/questions/tagged/sklearn) convention, if `x1` is tested with all samples, the following thresholds are evaluated: `[3.65, 4.6, 4.85, 5.5, 6.4]`. When it is evaluated, separation criteria are computed as explained in this [mathematical formulation](https://scikit-learn.org/stable/modules/tree.html#mathematical-formulation). Then the threshold with the best separation criteria is selected. See this [very nice video about decision trees](https://www.youtube.com/watch?v=_L39rN6gz7Y) and how ther are built. (9:57 for continuous data + gini impurity computation) 1 So it doesn't compare multiple features. It is not relative. If you need such behaviour, you could add `x-y` as an input feature `plot_tree` interpretation (EDIT) `value=[2,4]` in `plot_tree` means that you have currently 6 samples1. 2 from class 0 and 4 from class 1. In your `rf.estimators_[3]`, this dataset has been splitted with feature 0 because it has been evaluated as the best split (minimum impurity with `gini=0.444`). 3 samples from class 1 went to the right leaf, and the others went to the left node before being splitted again by feature 3. 1 [The number of values doesn't sum to the number of samples](https://stackoverflow.com/questions/56103507/why-does-this-decision-trees-values-at-each-step-not-sum-to-the-number-of-sampl) because of bootstraping. Again, here twice, x1 was involved in the 4th tree without involving variable x1? Does it make sense? > I didn't get based on the pick split points y via id=#? Is there any rule, or is it just randomly picked? Can we say it is relative? I think, if I understand correctly, that you're confusing the build from the video and the one from [sklearn](/questions/tagged/sklearn). They are different because of the random subsampling, both on samples and features. To generate new samples to be used for DecisionTreeClassifier, see this [sklearn _generate_sample_indices() function](https://github.com/scikit-learn/scikit-learn/blob/8c9c1f27b7e21201cfffb118934999025fd50cca/sklearn/ensemble/_forest.py#L123). In the video, samples id are not the same. So, `x1` is used in the 4th tree of the video, but not in the 4th tree in your code. I'm unsure what I marked based on my finding; explain the "criteria" of x > y, and if so, how it works. Sometimes both randomly selected pair variables are involved in DT; sometimes not. Multiple criteria can be used. 
In [sklearn](/questions/tagged/sklearn), 3 are possible: - gini (default) - entropy - log_loss Here are their [mathematical formulation](https://scikit-learn.org/stable/modules/tree.html#classification-criteria). At each node, a lot of thresholds are evaluated and criteria computed. For example, in the video, for the 1st tree in the 1st node, the following subdataset is used: [](https://i.stack.imgur.com/r3t05.png) For `x0`, they are 3 different values, so 3 thresholds are evaluated. For `x1`, 4. That makes 7 entropy (or gini) criteria values. The best is then selected. Final decision is being made when the end leaf is 0 or 1? Once again, they are multiple implementations. The [original paper](https://www.stat.berkeley.edu/%7Ebreiman/randomforest2001.pdf) used a majority vote, but as mentioned in the [sklearn doc](https://scikit-learn.org/stable/modules/ensemble.html#random-forests): > In contrast to the original publication [B2001], the scikit-learn implementation combines classifiers by averaging their probabilistic prediction, instead of letting each classifier vote for a single class. I hope it helps. SAMPLE CODE Here is a sample code that you can execute with your code, and that shows the 4 `DecisionTreeClassifier()`s build from the subdatasets. Note that I did not find an explicit function to extract the sub-feature indexes. Note that if you plot the trees, they are all similar to the ones built from `RandomForestClassifier()` except for the 3rd, but its first node has the same best gini separation. So the difference comes from a different order somewhere; I did not figure out where... ``` from sklearn.ensemble._forest import _generate_sample_indices sub_features_indexes = [ [0, 1], # pair features (x0 , x1) based on video tutorial [2, 3], # pair features (x2 , x3) based on video tutorial [2, 4], # pair features (x2 , x4) based on video tutorial [1, 3] # pair features (x1 , x3) based on video tutorial ] for i in range(4): seed = rf.estimators_[i].random_state boostraped_samples = df.loc[_generate_sample_indices(seed, 6, 6)].sort_index() #sliced frame based on video tutorial boostraped_samples_dfslice = boostraped_samples[boostraped_samples.columns[[sub_features_indexes[i][0],sub_features_indexes[i][1], -1 ]]] display(boostraped_samples_dfslice) dtc = DecisionTreeClassifier(random_state=seed) dtc.fit(boostraped_samples.iloc[:,sub_features_indexes[i]], boostraped_samples.iloc[:,-1]) tree.plot_tree(dtc) plt.show() ``` EDIT > So how they selected just from id=4 only? Is it a random id? When evaluating the value of id=4, so 4.6, as a threshold, the labels are perfectly separated, so there is no need to go further. But, that doesn't mean `x2` was not evaluated. It was, but it doesn't separate the labels better than choosing `4.6` as the threshold on feature `x3`
How is a splitting point chosen for continuous variables in decision trees?
In order to come up with a split point, the values are sorted, and the mid-points between adjacent values are evaluated in terms of some metric, usually information gain or gini impurity. For your example, lets say we have four examples and the values of the age variable are $(20, 29, 40, 50)$. The midpoints between the values $(24.5, 34.5, 45)$ are evaluated, and whichever split gives the best information gain (or whatever metric you're using) on the training data is used. You can save some computation time by only checking split points that lie between examples of different classes, because only these splits can be optimal for information gain.
118561
1
118566
null
1
22
I have trained a deep NN model based on some existing data. In the meantime, I have collected more data and labeled it so that I can feed it to the model to improve its performance. The question is, should I feed:

- Option 1: the new data to the already trained model?
- Option 2: the new data to a new model with freshly initialized weights?
- Option 3: the entire data (old + new) to a model with freshly initialized weights?

Which method should I choose? Do I have to use only the new data, or do I have to combine the new data with the old dataset? I am asking this question so that I can choose the option which improves the overall accuracy and accounts for the data drift in the new data.
How train a pre-trained model based on new dataset?
CC BY-SA 4.0
null
2023-02-16T09:59:27.517
2023-02-16T13:08:30.593
2023-02-16T13:08:30.593
146021
108053
[ "deep-learning", "gradient-descent", "data-drift" ]
You need to consider a few things while trying to train a pre-trained model on a new set of data.

- New data may or may not represent the data the model was initially trained on. There might be some data drift, so it is important to include the previously used data along with the new data when retraining, in order to make proper predictions. Using only the new data might result in overfitting or bias. If the new data is large enough and well balanced, with little data drift, then it is OK to use only the new data.
- About the pre-trained weights: your weights will eventually change once you start re-training the model. So re-training your already trained model, or transferring the weights from your previous model onto a new one, is basically the same thing, but it is much more efficient than building a new model from scratch with no pre-trained weights.
- Data drift: if the data drift is too significant and it displaces the actual purpose of your previous model, then train the model from scratch. If you are trying to achieve something with the entirely new data which cannot be done with your old data, you can train from scratch; otherwise you can reuse the weights / re-train the model.

Also, don't forget that the quality of the new data must be on par with the old data, and there should be no structural changes to it. If you want to know more about data drift, check out [this post](https://datascience.stackexchange.com/questions/114415/how-to-combat-data-drift). A minimal sketch of continuing training on the combined data is shown below.
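For illustration, here is a hypothetical Keras sketch of "continue training the existing model on old + new data"; the file name, learning rate, loss, and the `x_old`/`y_old`/`x_new`/`y_new` arrays are placeholders standing in for your own data and setup.

```
import numpy as np
import tensorflow as tf

# x_old / y_old and x_new / y_new are assumed to be the existing and newly labeled data
x_all = np.concatenate([x_old, x_new])
y_all = np.concatenate([y_old, y_new])

model = tf.keras.models.load_model("existing_model.h5")   # the already trained model

# recompile with a smaller learning rate, a common choice when continuing training;
# use whatever loss the model was originally trained with
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_all, y_all, validation_split=0.2, epochs=10)
```

Comparing this run against a model retrained from scratch on the combined data (Option 3) is a cheap way to check how much the drift actually matters in practice.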
Train new data to pre-trained model
I cannot comment yet. If you just load the model and use the fit method, it will update the weights, not re-initialize them. It will just perform a number of weight updates that you can choose, using the new data.
118579
1
118588
null
0
490
I am trying to handle data coming from software that has as a terrible format for a time duration: `[days-]hours:minutes:seconds[.microseconds]`. In case someone else has already traveled this path, I'm fighting with the `Elapsed` and field from [SLURM's sacct output](https://slurm.schedmd.com/sacct.html). (I'm also fighting with `ReqMem` but that's a problem for another day) For example, one row might read `02:42:05` meaning 2 hours, 42 minutes, 5 seconds. Another row might read `6-02:42:05` which means the same, plus 6 days. Finally, on occasion, the seconds value has a microseconds value following it delimited by a decimal point, for example `6-02:42:05.745` meaning the same as the prior, plus 745 microseconds. Note that both the day and microsecond fields (and their delimiters) are optional and thus inconsistently present, and I can't just treat seconds as a float. I need to replace this value with an integer number of seconds, or something suitably equivalent. I have managed to muddle my way to a solution utilizing `apply()` and a python function, but I'm aware that this essentially breaks most of the benefit of using Polars? It's faster than the original pandas implementation it seems, but I would love to get as much as I can out of this work. The DataFrame geometry on this dataset is something like 109 columns and over 1 million rows, before filtering. Here's my working but terrible code: ``` # this is going to be called hundreds of thousands of times, so yea, compiling it is probably helpful elapsed_time_re = re.compile(r'([0-9]+-)?([0-9]{2}):([0-9]{2}):([0-9.]{2})(\.[0-9]+)?') def get_elapsed_seconds(data): match = elapsed_time_re.match(data) if match is None: return data groups = match.groups('0') days = int(groups[0].strip('-')) hours = int(groups[1]) minutes = int(groups[2]) seconds = int(groups[3]) microseconds = int(groups[4].strip('.')) if microseconds > 0: seconds = seconds + round(microseconds / 1e6) return seconds + (minutes * 60) + (hours * 3600) + (days * 86400) # df is a polars.LazyFrame df = df.with_columns(pl.col('Elapsed').apply(get_elapsed_seconds)) ``` I have a thought on how to proceed, but I can't just find my way there: - using expression conditionals, concatenate the string literal '0-' to the front of the existing value if it doesn't contain a '-' already. Problem: I can only find how to concatenate dataframes or series data, no matter how I phrase the search, and not how to concatenate a literal string to a column (of dtype str) existing value - parse this new string with strptime(). Problem 1: chrono::format::strftime has no format specifier for microseconds (only nanoseconds), but this part of the timestamp is not useful to me and could be dropped - but how? Problem 2: that'll give me a Datetime, but I don't know how to go from that to a Duration. I think if I create a second Datetime object from 0000-00-00 00:00:00 or similar and perform an addition between the two, I'd get a Duration object of the correct time? --- For some context: I'm just getting started with Polars. I have almost no prior experience with writing Pandas, and can read it only with constantly looking things up. As such, examples/explanations using Pandas (or comparisons to) won't save me. I am aware that you can perform some amount of logic with Polars expressions, but it remains opaque to me. One roadblock is that the lambda syntax most examples seem to include is very difficult for me to parse, and even once past that I'm not understanding how one would branch within such expressions.
polars: parsing a funky datetime format with optional fields
CC BY-SA 4.0
null
2023-02-16T23:38:46.250
2023-02-17T18:18:13.050
2023-02-17T18:18:13.050
146055
146055
[ "python", "python-polars" ]
I am not too familiar with `polars` myself, but have you tried using the [str.extract method](https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.Expr.str.extract.html)? You can use this to extract the days/hours/etc. from the input on the whole column without using `apply`. I tested it using the three examples you give and it gives the expected result, and based on 100.000 rows it is roughly four times faster than using `apply`, with the relative speed likely improving more as you are increasing the number of rows. ``` import polars as pl df = pl.DataFrame({ "elapsed": ["02:42:05", "6-02:42:05", "6-02:42:05.745"] }) ( df # extract days/hours/minutes/seconds/microseconds and cast to ints .with_columns([ pl.col("elapsed").str.extract(r"(\d+)-", 1).cast(pl.UInt32).alias("days"), pl.col("elapsed").str.extract(r"(\d{2}):\d{2}:\d{2}", 1).cast(pl.UInt32).alias("hours"), pl.col("elapsed").str.extract(r"\d{2}:(\d{2}):\d{2}", 1).cast(pl.UInt32).alias("minutes"), pl.col("elapsed").str.extract(r"\d{2}:\d{2}:(\d{2})", 1).cast(pl.UInt32).alias("seconds"), pl.col("elapsed").str.extract(r"\.(\d+)", 1).cast(pl.UInt32).alias("microseconds"), ]) # calculate the number of seconds elapsed .with_columns([ ( pl.col("seconds").fill_null(0) + (pl.col("microseconds").fill_null(0) / 1e6) + (pl.col("minutes").fill_null(0) * 60) + (pl.col("hours").fill_null(0) * 3600) + (pl.col("days").fill_null(0) * 86400) ).alias("result") ]) ) ``` Which gives the following dataframe: |elapsed |days |hours |minutes |seconds |microseconds |result | |-------|----|-----|-------|-------|------------|------| |02:42:05 |0 |2 |42 |5 |0 |9725 | |6-02:42:05 |6 |2 |42 |5 |0 |528125 | |6-02:42:05.745 |6 |2 |42 |5 |745 |528125 |
pl.datetime plots as days since epoch or 1970, if formatted - polars and matplotlib
I managed to solve this myself today. I don't understand why, but explicitly passing the datetime series and value series to matplotlib, instead of the dataframe (or numpy array) directly, does the job. ``` ## relevant imports from earlier in the project # #import polars as pl #import matplotlib #import matplotlib.pyplot as plt #%matplotlib inline fig = plt.figure(figsize=(12,8), facecolor='white') ax_cpu = fig.add_subplot(111) ax_ram = ax_cpu.twinx() ax_cpu.set_ylim(ymax=cores_max) ax_ram.set_ylim(ymax=memory_max) ax_cpu.set_ylabel('Cores', fontsize=14, color='purple') ax_ram.set_ylabel('RAM (GB)', fontsize=14, color='orange') cpu_axes_time = binned_df_cpu_all.get_column("Start") cpu_axes_value = binned_df_cpu_all.get_column("AllocCPUS") ram_axes_time = binned_df_ram_all.get_column("Start") ram_axes_value = binned_df_ram_all.get_column("ReqMem") ax_cpu.plot(cpu_axes_time, cpu_axes_value, color='purple', alpha=0.8) ax_ram.plot(ram_axes_time, ram_axes_value, color='orange', alpha=0.8) plt.title('Total Cores/RAM Allocation\n{start} through {end}'.format( start=datestamp_min.date(), end=datestamp_max.date())) # optional, rotates them a bit nicely fig.autofmt_xdate() ``` ```
118582
1
118594
null
1
35
I have an optimization problem that I solved with grid search using `hyperopt` in python. In this problem, I have some parameters and a score. I want to find the best parameters that maximize this score. Until now, I didn't see any machine learning algorithms used for solving the optimization task. For example, in classification, we define the problem and use an optimizer (like SGD) to find the best weights. Are there any ML algorithms that can learn how to solve an optimization problem?
Solve optimization problem with machine learning algorithm
CC BY-SA 4.0
null
2023-02-17T10:22:42.467
2023-02-17T14:40:38.177
null
null
146067
[ "machine-learning", "optimization" ]
Optimization is a very broad field. There are many examples of applying machine learning to optimization subfields. One example is "[Learning to Optimize](https://arxiv.org/abs/1606.01885)" which uses reinforcement learning (RL). A particular optimization algorithm is defined as a policy and RL can be used to find the best relative policy.
How to approach a machine learning problem?
Plenty of questions there. I will answer the accuracy one: 75% is better than random chance and might be useful. But you need to consider what is relevant for your application. For example, suppose you are dealing with a security issue. Denying access to someone who is entitled to it is less damaging than allowing access to someone who is not. If you want to reduce the number of phone calls needed to sell a product, you want a model that will tell you to call the maximum number of potential clients, and even a 10% reduction in useless calls is a good model with good profit, as long as you don't make it absurdly expensive to keep operating.
118590
1
118591
null
0
32
I have this random forest model setup as shown below in python. It's performing unexpectedly well with a ~70% classification success rate (to the extent where I really doubt it is genuine) and I am therefore skeptical that I haven't accidentally fed it some training data - but I can't find any evidence of this. So, I have two questions: - Have I made an error somewhere in this model? - How can I be more certain that I have not accidentally made the model predict on some training data? Code: ``` ################################## ###### Set up forest model ####### ################################## X_train, X_test, y_train, y_test = sklearn.model_selection.train_test_split(X_concatenated, y, test_size=0.2) print(f'X_train len: {X_train}') print(f'X_test len: {X_test}') print(f'y_train len: {y_train}') print(f'y_test len: {y_test}') # Define the model rfmodel = RandomForestClassifier(n_estimators=100) # Define the hyperparameters to optimize param_distributions = { 'n_estimators': randint(10, 1000), 'max_depth': randint(2, 50), 'min_samples_split': randint(2, 10), 'min_samples_leaf': randint(1, 10), 'max_features': ['sqrt', 'log2'], 'criterion': ['gini', 'entropy'], 'bootstrap': [True, False], 'class_weight': [None, 'balanced', 'balanced_subsample'] } # Define the search strategy search = RandomizedSearchCV( rfmodel, param_distributions=param_distributions, n_iter=optruncount, cv=5, random_state=42, n_jobs=-1 ) # Train the model with hyperparameter optimization search.fit(X_train, y_train) # Get the best hyperparameters best_params = search.best_params_ # Train the final model with the best hyperparameters rfmodel = RandomForestClassifier(**best_params) rfmodel.fit(X_concatenated, y) ################################ ###### Test forest model ####### ################################ predicted = rfmodel.predict(X_test) cm = confusion_matrix(y_test, predicted) sns.heatmap(cm, annot=True, cmap='Blues', fmt='g') plt.xlabel('Predicted') plt.ylabel('Actual') plt.show() ``` ```
How can I be more certain that I have not accidentally made my ML model predict on training data?
CC BY-SA 4.0
null
2023-02-17T13:52:39.077
2023-02-17T14:12:50.160
null
null
146075
[ "machine-learning", "scikit-learn", "random-forest" ]
You are training a random forest with the best hyperparameters you've found on the `(X_concatenated, y)` dataset, then testing its performance using the `(X_test, y_test)` dataset. However, the second dataset is a subsample of the first one (see your call to `train_test_split`), and as a result, observations from your 'test' set will be present in your 'training' set. This is data leakage, and the performance metrics on your 'test' set will overestimate the actual performance of your model.
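A minimal sketch of the fix, reusing the variable names from the question (so this is a fragment to drop into your existing script rather than a standalone example): fit the final model on the training split only, so the held-out test split stays unseen.

```
from sklearn.ensemble import RandomForestClassifier

# Train the final model with the best hyperparameters on the TRAINING split only
rfmodel = RandomForestClassifier(**best_params)
rfmodel.fit(X_train, y_train)

# X_test / y_test were never seen during training, so this is a fair evaluation
predicted = rfmodel.predict(X_test)
```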
how to adjusting already built ML predictive model
This depends on your model type: Classical using ensemble/stacked models: If you are using classical machine learning, you could use your old model built on the previous 1 million records, and create a new model on the most recent 500k records and then combine the predictions in an ensemble or stacked approach. References for Ensemble and Stacking: [https://machinelearningmastery.com/stacking-ensemble-machine-learning-with-python/](https://machinelearningmastery.com/stacking-ensemble-machine-learning-with-python/) Video Reference: [https://www.youtube.com/watch?v=Un9zObFjBH0](https://www.youtube.com/watch?v=Un9zObFjBH0) AI/NN using transfer learning: If your are using a NN (neural network) model, you can use the idea of transfer learning. Save your model built on the first 1 million records, then add it as an initial layer to a new NN for analyzing the new data. You can then save the new NN and use it in the next round. Reference: [https://machinelearningmastery.com/transfer-learning-for-deep-learning/](https://machinelearningmastery.com/transfer-learning-for-deep-learning/) Video Reference: [https://www.youtube.com/watch?v=yofjFQddwHE](https://www.youtube.com/watch?v=yofjFQddwHE) General guidelines: If you need to do this updating process many times, you can create a new model on n number of records, drop the oldest data/model off once your new dataset reaches a minimum, and predict only on the last x number of models. n and x are adjusted based on your data, flexibility and need for real-time predictions. If the data is changing over time, then it would be better to only use the latest data, or weight the older data lower and the newer data higher. Here is a good definition of transfer learning: "Transfer learning is a machine learning method where a model developed for a task is reused as the starting point for a model on a second task."
118610
1
118703
null
0
78
I have a simple Sequential keras model with 150 Inputs. Some of these are simply OneHotEncoded values. Now I would like to add more options to the OneHotEncoder. As an example: I previously had Blue, Green and Red encoded as binary values for the input and now I want to add Yellow, Orange and Purple as well. The thing is, I would like to preserve the weights in my existing trained model and simply add new inputs with new random weights and continue training on the already established base. How can I do that and ensure the knowledge that my model has already obtained is preserved? If relevant: My model is saved in the .h5 format.
Can I change the number of inputs to a keras model while preserving the trained existing weights
CC BY-SA 4.0
null
2023-02-18T14:03:08.403
2023-02-21T22:09:42.717
null
null
146112
[ "neural-network", "keras", "regression" ]
One way to do this is to create a second model with your new inputs and the same number/size of hidden layers and output layer, then copy the weights from the layers of your first model to the second model using the get_weights() and set_weights() methods of each layer. This is straightforward for all but the first hidden layer, as these layers will have the same number of weights, but a little more complex for the first layer as you need to take the new inputs into account. Here's some sample code using a couple of (untrained) toy models: ``` import tensorflow.keras as keras # model1 is the pre-existing model input_ = keras.Input((150), name='input') x = keras.layers.Dense(8, name='dense1', activation='relu')(input_) x = keras.layers.Dense(8, name='dense2', activation='relu')(x) x = keras.layers.Dense(1, name='final', activation='sigmoid')(x) model1 = keras.Model(inputs=input_, outputs=x, name='model1') # Create model2 as new model with additional inputs input_ = keras.Input((160), name='input') x = keras.layers.Dense(8, name='dense1', activation='relu')(input_) x = keras.layers.Dense(8, name='dense2', activation='relu')(x) x = keras.layers.Dense(1, name='final', activation='sigmoid')(x) model2 = keras.Model(inputs=input_, outputs=x, name='model2') ``` For model2, I've assumed the new inputs are the last ten. I've used the default weight initialization method. You could initialize the model using the method you want to use for the new inputs, so you don't need to worry about these later. ``` # Check number of weights in each layer print('Model 1 layer sizes') for l in model1.layers: print(l.name, [ll.shape for ll in l.get_weights()]) print('\nModel 2 layer sizes') for l in model2.layers: print(l.name, [ll.shape for ll in l.get_weights()]) ``` This shows the models have the following numbers of weights in each layer ``` Model 1 layer sizes input [] dense1 [(150, 8), (8,)] dense2 [(8, 8), (8,)] final [(8, 1), (1,)] Model 2 layer sizes input [] dense1 [(160, 8), (8,)] dense2 [(8, 8), (8,)] final [(8, 1), (1,)] ``` So each layer (apart from the input layer) has two sets of weights, the first set are the weights for the layer inputs and the second set are the biases. Get_weights returns these as numpy arrays, so we can update these as required using numpy, then update the model weights. Now the code to update the weights in model 2. ``` # Copy weights for all layers apart from the input and first hidden layer for l in range(len(model2.layers)): if l >= 2: model2.layers[l].set_weights(model1.layers[l].get_weights()) # Get the weights for the first hidden layer l1 = model1.layers[1].get_weights() l2 = model2.layers[1].get_weights() # Copy biases l2[1] = l1[1] # Copy weights for existing inputs, assume new inputs are last. 
l2[0][:len(l1[0])] = l1[0] # Set the weights for the first hidden layer model2.layers[1].set_weights(l2) ``` Print the layer weights to check ``` # Check results for l in model1.layers: print(l.name, l.get_weights()) ``` ``` input [] dense1 [array([[-0.01548935, -0.02027901, 0.1186433 , ..., 0.01116119, -0.01081911, -0.02764229], ..., [-0.02861831, -0.09723745, -0.0620534 , ..., -0.12824798, 0.02253188, -0.10220653]], dtype=float32), array([0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)] dense2 [array([[-0.01134968, 0.39225703, 0.11167014, -0.13503206, 0.09921449, 0.27120864, -0.41560578, 0.5887881 ], ..., [ 0.23259199, 0.45360595, -0.5073748 , -0.05056351, 0.26967663, 0.02501452, -0.03674203, 0.07765925]], dtype=float32), array([0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)] final [array([[-0.4500377 ], [ 0.7408364 ], [-0.73072064], [ 0.17347234], [ 0.80670106], [ 0.26636422], [ 0.5733515 ], [ 0.20663929]], dtype=float32), array([0.], dtype=float32)] ``` ``` for l in model2.layers: print(l.name, l.get_weights()) ``` ``` input [] dense1 [array([[-0.01548935, -0.02027901, 0.1186433 , ..., 0.01116119, -0.01081911, -0.02764229], ..., [-0.04421869, -0.13911864, 0.09040551, ..., -0.14710264, -0.03600252, -0.12658373]], dtype=float32), array([0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)] dense2 [array([[-0.01134968, 0.39225703, 0.11167014, -0.13503206, 0.09921449, 0.27120864, -0.41560578, 0.5887881 ], ..., [ 0.23259199, 0.45360595, -0.5073748 , -0.05056351, 0.26967663, 0.02501452, -0.03674203, 0.07765925]], dtype=float32), array([0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32)] final [array([[-0.4500377 ], [ 0.7408364 ], [-0.73072064], [ 0.17347234], [ 0.80670106], [ 0.26636422], [ 0.5733515 ], [ 0.20663929]], dtype=float32), array([0.], dtype=float32)] ```
Keras reuse trained weights on CNN with different number of channels
It's impossible to change the number of channels. The weights of the model depend on the number of channels. Changing channels is changing weights. Changing weights is having a completely new model. You can only change the image size (in purely convolutional networks - without `Flatten` - the image size does not affect the number of weights). But: Frames are not channels. Take care with this. Frames are entire images, not channels of images. But it's impossible to help further without knowing the code of the original CNN. I don't know if the net is purely convolutional, if it uses the frames as samples, if it uses `TimeDistributed` frames, or if it uses recursive layers.
118641
1
118743
null
2
63
I edited my post for clarity, for the second time. Thanks lpounng for the feedback. I am seeking advice on predicting debt payment within a year. Each debt has its own characteristics, which are not easy to aggregate. Moreover, multiple debts can correspond to the same debtor, and it is common for a debtor to fail to pay multiple debts. So if I tried to calculate the probability of payment for individual debts and multiply them to get the probability of payment for all debts of a debtor, I would be wrong, because the debts are not independent (Bayes). I suspect that, when the time comes to predict a new value, i.e. to predict payment for a new debtor, I need to have information about all of their debts at the time. This leads me to construct a register where all information about a debtor's debts needs to be contained... so how could I go about it? I don't see how I could make a column for each feature for each of the n possible debts per debtor (i.e. "amount paid debt 1", "amount paid debt 2", ..., "amount paid debt n") because of the NA values that would populate most columns, and because I can't know n in advance (maybe a new debtor arrives with n+1 debts... and how would I go about imputing values for that load of variables?). On the other hand, if I aggregate the amount paid over every debt I would lose information (not that much for this feature... but think of the number of installments paid, which would be kind of heterogeneous when summed up... like summing apples and bananas). So... is it possible to use the individual debt fields as features, or should I build a model using feature-engineered predictors that summarise total debt characteristics? Thank you
Is there a way to use debt details without aggregating them to predict the probability of payment of every debt per debtor?
CC BY-SA 4.0
null
2023-02-19T19:35:54.797
2023-02-23T13:25:40.207
2023-02-23T12:13:50.303
146142
146142
[ "machine-learning", "predictive-modeling", "feature-engineering" ]
The most straightforward approach is, I think, to engineer features per customer. To your "sum number of installments paid", something like "average percent of installments paid" might fix the heterogeneity. In the extreme/limiting case where each customer either repays or defaults on all debts, you have a "multiple instance learning" problem. The [wikipedia article](https://en.wikipedia.org/wiki/Multiple_instance_learning) for that subject might give some inspiration for ways to adapt to this setting.
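A minimal sketch of the per-debtor feature engineering suggested above, using pandas; the column names (`debtor_id`, `amount_paid`, `installments_paid`, `installments_due`) and the toy values are hypothetical, just to illustrate the aggregation pattern:

```
import pandas as pd

# Toy register: one row per debt, several debts per debtor (made-up columns)
debts = pd.DataFrame({
    "debtor_id": [1, 1, 2, 2, 2, 3],
    "amount_paid": [100.0, 50.0, 0.0, 200.0, 80.0, 30.0],
    "installments_paid": [10, 5, 0, 8, 4, 3],
    "installments_due": [10, 10, 12, 8, 6, 12],
})

# Ratio features avoid the "apples and bananas" problem of raw sums
debts["pct_installments_paid"] = debts["installments_paid"] / debts["installments_due"]

# One row per debtor, with aggregates that summarise all of their debts
per_debtor = debts.groupby("debtor_id").agg(
    n_debts=("amount_paid", "size"),
    total_amount_paid=("amount_paid", "sum"),
    mean_pct_installments_paid=("pct_installments_paid", "mean"),
    min_pct_installments_paid=("pct_installments_paid", "min"),
)
print(per_debtor)
```

The resulting `per_debtor` frame has a fixed number of columns regardless of how many debts each debtor has, which sidesteps the "column per debt" problem described in the question.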
Predict next month's loan balance from historical data
You have quick and easy but rough predictions: - The "meteo method": next month will be the same as last month. Not as bad as you may think, and you can even replace last month by an average of the last 6 months. - Linear regression: the meteo method will fail if there is constant growth. Make a linear regression as a function of time, $x(t) = a \cdot t + b$. Your forecast will be $x(today+1)$. - Exponential regression: use a regression on growth in percentage (exponential growth). You can find the best $a$, $b$ fitting $x(t) = a \exp(b \cdot t)$, or the best $c$, $d$ to fit $\log(x(t)) = c \cdot t + d$. These are the same formula in disguise, and you can use a linear regression for the last one (fit $y(t) = \log(x(t))$ instead of $x(t)$). - Exponential smoothing: it is an average giving more weight to the most recent values. There is a technical trick that makes the computation especially easy if you have to forecast every month. With reference to the Wikipedia page, take $\alpha = 1/6 = 0.1667$ for a 6-month history. In the future, you will be able to add a trend (double exponential smoothing) and a seasonality (Holt-Winters model). I would avoid method 2, consisting of adding the forecasts of individual accounts. This method works when everything is linear so that errors cancel each other. When the process is not linear, you'll have systematic biases which will add together. I would also mention that your main challenge will more probably be the seasonality than the main trend.
118644
1
118702
null
2
30
I'm working on a project that involves generating a unique ID for a given biometric (such as an iris image). I'm interested in exploring the use of ML techniques for feature extraction and ID generation. Specifically, I'm interested in how various measurements of the biometrics of some person (such as photos of the person's same iris under different angles) could almost always be converted to the same unique ID. My main questions are: - What machine learning techniques could be used to extract the features of a biometric? I was planning to work with biometric data in an image format, and I guess that CNNs or deep embedding networks could be useful for this task. Are there any other techniques that could be used (e.g., PCA)? - What format should the features be represented as? Should they be stored as a vector, a matrix, or some other format? - How could the features be converted to an ID? I'm currently considering using techniques such as hashing or one-way encoding. Are there any other techniques that could be used, and what are the pros and cons of each? - What could be done to ensure that variations of the input biometric would (almost) always lead to the same ID? I am currently at the research stage, so I have not yet attempted to implement such a pipeline. In case you have some questions regarding the project, don't hesitate to ask; I will try to be more specific if necessary. I'm also open to any other advice or tips on how to approach a project like this. Thank you in advance for your help!
What ML techniques could be used for biometric feature extraction and ID generation?
CC BY-SA 4.0
null
2023-02-19T21:49:22.440
2023-02-21T20:27:27.223
null
null
146145
[ "machine-learning", "feature-engineering", "feature-extraction", "encoding" ]
A simple one-way encoding (a hash) is very sensitive to noise. So where ML would come in is feature extraction: trying to extract the strong signal from an image and not any of the noise. But you are very unlikely to be able to do this perfectly. I believe even for the much-studied MNIST data set, state of the art is less than 100% (though there are a few mis-labelled examples, which doesn't help). BTW, another challenge is you will have very few data samples per label - you are going to have something like 60,000 iris photos for 55,000 people I imagine? Compared to MNIST with 60,000 samples for just 10 labels. Instead what you do is a nearest-neighbor search. So, your CNN (or whatever ML model you use) can give you say a 256-dim embedding instead of predicting a label. And then you use that to search your iris database, to find the closest match. If you search for approximate nearest neighbor search you will find a range of competing options. (The "approximate" is needed, because doing it naively is O(N²) in the number of entries in your database.) Or for a more novel approach, [A Neural Corpus Indexer for Document Retrieval](https://arxiv.org/abs/2206.02743) is an interesting paper I read recently, which is doing something similar to what you want to do, though in the NLP domain. They are taking some features, and want to produce the document ID directly. There may be some ideas there.
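A minimal sketch of the nearest-neighbour lookup step with plain NumPy, assuming you already have a model that maps an iris image to a 256-dimensional embedding; the embeddings below are random placeholders standing in for the output of that hypothetical model:

```
import numpy as np

rng = np.random.default_rng(0)

# Placeholder database: one 256-d embedding per enrolled person
db_embeddings = rng.normal(size=(55000, 256))
db_ids = np.arange(55000)  # the ID associated with each embedding

# Placeholder query embedding produced by the (hypothetical) CNN for a new photo
query = rng.normal(size=256)

# Cosine similarity between the query and every enrolled embedding
db_norm = db_embeddings / np.linalg.norm(db_embeddings, axis=1, keepdims=True)
q_norm = query / np.linalg.norm(query)
similarities = db_norm @ q_norm

best = np.argmax(similarities)
print("closest ID:", db_ids[best], "similarity:", similarities[best])
```

This is the naive O(N) scan; as noted above, once the database grows you would replace it with an approximate nearest-neighbour index.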
Using ML to create unique descriptors?
If I understood your question correctly, you want a function that takes a signal (a fixed window) and outputs a 32-bit representation in a way that the correlation between it and any other signal is preserved. Mathematically speaking, given signals $s_1$ and $s_2$ in $S$ and a correlation function $corr(s_1, s_2)$, you want some function $f : S \rightarrow B$ (where $B$ is the space of 32-bit binary numbers) such that you could use another correlation function, for instance $corr_f(b_1, b_2) \approx corr(s_1, s_2)$. If that is what you want, you should look at hashing techniques, in particular [learning to hash](https://cs.nju.edu.cn/lwj/L2H.html). Essentially, in hashing what you do is represent your input by binary numbers in a way that the [hamming distance](https://en.wikipedia.org/wiki/Hamming_distance) between the binary numbers preserves some target similarity (distance) function such as the correlation. In particular, for cross-correlation (inner product) there are methods based on [random projections](https://en.wikipedia.org/wiki/Locality-sensitive_hashing#Random_projection). So once you have learned (designed) your hashing function $f$, what I would do is: ``` b1 = f(s1) send b1 receive b1 b2 = f(s2) return h(b2, b1) # this value is going to tell you if the signals are correlated ```
118697
1
118715
null
0
63
I am working on building a sentiment analyzer. The data I would like to analyze is social media data from Twitter, and once I have created the model I want to integrate it into a simple webpage. I have tried two options: - Create my own model from scratch, meaning train a word2vec model to perform word embedding, convert my labelled dataset into vectors and train them using logistic regression, random forest or SVM. - Fine-tune a BERT model using my dataset. Option 1. Using word2vec and SVM I was able to get the following results: ```
          precision    recall  f1-score   support

           0       0.74      0.67      0.70      1310
           1       0.77      0.82      0.79      1716

    accuracy                           0.76      3026
   macro avg       0.75      0.75      0.75      3026
weighted avg       0.75      0.76      0.75      3026
 ``` Option 2. I fine-tuned BERT using the following code [link](https://github.com/prateekjoshi565/Fine-Tuning-BERT/blob/master/Fine_Tuning_BERT_for_Spam_Classification.ipynb) and was able to achieve the following results after 100 epochs: ```
          precision    recall  f1-score   support

           0       0.68      0.65      0.66       983
           1       0.74      0.77      0.75      1287

    accuracy                           0.71      2270
   macro avg       0.71      0.71      0.71      2270
weighted avg       0.71      0.71      0.71      2270
 ``` I used the same dataset for both options 1 and 2; BERT used a smaller subset for validation. What I would like to know: Are there any advantages in going with option 1? Does BERT have any disadvantages when it comes to data from social media (the data is rather unclean with a lot of slang)?
Sentiment analysis BERT vs Model from scratch
CC BY-SA 4.0
null
2023-02-21T17:06:08.237
2023-02-22T09:50:34.367
2023-02-21T17:07:11.510
146213
146213
[ "machine-learning", "bert", "sentiment-analysis" ]
In general, BERT is a much stronger model. Word embeddings only represent isolated words, whereas BERT considers the sentence context and how the words interact. With user-generated data, word embeddings might have plenty of out-of-vocabulary (OOV) words. In contrast, BERT uses subword tokenization that might partially compensate for that, although it is not ideal. It might be better to search for BERT-like models pre-trained specifically on social network data (e.g., [TwHIN-BERT](https://huggingface.co/Twitter/twhin-bert-base)). The only disadvantage is higher computational complexity than classical machine learning over word embeddings. As with any large model, it is prone to overfitting and catastrophic forgetting when not fine-tuned carefully. This might be the reason why you get slightly worse results with BERT. You can try a smaller learning rate, training for fewer epochs, or freezing some layers.
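A minimal sketch of the "freeze some layers and use a small learning rate" suggestion, assuming the Hugging Face `transformers` BERT implementation; the number of frozen layers and the learning rate are illustrative choices, not tuned recommendations:

```
import torch
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

# Freeze the embeddings and the first 8 encoder layers; only the last
# 4 layers and the classification head stay trainable
for param in model.bert.embeddings.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[:8]:
    for param in layer.parameters():
        param.requires_grad = False

# Small learning rate to limit catastrophic forgetting
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=2e-5
)
```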
Limitations of NLP BERT model for sentiment analysis
BERT is pre-trained on two generic tasks: masked language modeling and next sentence prediction. Therefore, those tasks are the only things it can do. If you want to use it for any other thing, it needs to be fine-tuned on the specific task you want it to do, and, therefore, you need training data, either coming from human annotations or from any other source you deem appropriate. The point of fine-tuning BERT instead of training a model from scratch is that the final performance is probably going to be better with BERT. This is because the weights learned during the pre-training of BERT serve as a good starting point for the model to accomplish typical downstream NLP tasks like sentiment classification. In the article that you referenced, the authors describe that they fine-tune [a Chinese BERT model](https://huggingface.co/hfl/chinese-bert-wwm-ext) on their human-annotated data multiple times separately: - To classify whether a Weibo post refers to COVID-19 or not. - To classify whether posts contained criticism or support. - To identify posts containing criticism directed at the government or not. - To identify posts containing support directed at the government or not. Fine-tuning BERT usually gives better results than just training a model from scratch because BERT was trained on a very large dataset. This makes the internal text representations computed by BERT more robust to infrequent text patterns that would be hardly present in a smaller training set. Also, dictionary-based sentiment analysis tends to give worse results than fine-tuning BERT because a dictionary-based approach would hardly grasp the nuances of language, where not only does a "not" change all the meaning of a sentence, but any grammatical construction can give subtle meaning changes.
118706
1
118808
null
5
219
Problem Statement Imagine there are two almost identical images with annotations (bounding boxes for certain objects), where one is so-called golden image (template) containing all must-have objects (ground truth), and the other one is the query image (input) let's say with lesser objects. Given that we already have Pandas dataframe represnetation of objects with bounding box infos for each of the images like, how one can perform a spatial left join on template between bounding boxes (objects) so that we can easily identify the missing objects in the input? Example. Let's say the template looks like: [](https://i.stack.imgur.com/DTRCa.jpg) and the corresponding template dataframe: ``` name xmin ymin xmax ymax 0 big fish 251 504 485 654 1 small fish 583 572 748 660 2 big fish 1080 484 1236 597 3 big fish 574 122 1076 505 4 big fish 1351 187 1583 369 5 small fish 369 31 506 115 6 small fish 1081 148 1111 190 7 small fish 684 505 732 535 8 small fish 939 521 992 570 9 small fish 417 661 497 705 10 small fish 743 598 792 642 11 small fish 667 657 708 691 ``` And the input image looks like: [](https://i.stack.imgur.com/zxlS1.jpg) and the corresponding input dataframe: ``` name xmin ymin xmax ymax 0 small fish 342 16 478 101 1 big fish 221 490 459 646 2 small fish 579 564 723 641 3 big fish 1342 161 1558 337 4 big fish 557 102 1045 492 5 small fish 1049 132 1087 176 6 small fish 389 652 484 694 7 small fish 914 514 964 556 8 small fish 639 640 688 676 ``` Expected Result In this example, there fishes are not present on the input image (missing), and I would lile to extract and identify that information by cross matching objects between the template and input dataframes. Ideally, I seek to have a subset of template dataframe only containing the missing objects in the input image: ``` name xmin ymin xmax ymax 0 big fish 1080 484 1236 597 1 small fish 684 505 732 535 2 small fish 743 598 792 642 ``` and the overlay on the input image would look like: [](https://i.stack.imgur.com/ZuaCN.jpg) --- Attempts I have tried to use [geopandas.GeoDataFrame.sjoin](https://geopandas.org/en/stable/docs/reference/api/geopandas.GeoDataFrame.sjoin.html#geopandas-geodataframe-sjoin), borrowing functionalities for Points, Polygons for merging the dataframes. When bounding boxes are reasonably distanced from one and other, it would work as expected. However, when are are in proximity of one another, and often even overlaps, then geopandas sjoin and any other merging functionalities wouldn't work. I have also tried to use distances (centers of bounding boxes) together with IoU (Intersection over Union) to cross match these geometries, but it wouldn't inheritly know how far it should look to cross match, and defining the threshold is not ideal, because we simply wouldn't know how many objects to expect to include or exclude (unless it is harded in the logic and definitely hard to maintain). Question: Is there a better, smarter and efficient way to accomplish this? P.S. (Materials): In order to make things easy to contribute and access these images, xmls, and dataframes, I have put everything in [a pulbic Github repo](https://github.com/mmortazavi/pandas_spatial_merge_objectdetection), containing also a Notebook with some functions and steps using geopandas.GeoDataFrame.sjoin!
Spatial Join Pandas Dataframes of Bounding Boxes (cross match)
CC BY-SA 4.0
null
2023-02-21T23:22:25.100
2023-03-02T03:17:46.797
2023-02-21T23:27:30.410
44456
44456
[ "python", "pandas", "object-detection", "geospatial" ]
Edited after comments in other answers: Generally speaking, I would reframe this as a linear sum assignment problem. This can be solved using a modified version of [Munkres algorithm](https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.linear_sum_assignment.html) allowing a cost for non-assignment. wich time complexity is pretty bad ($O(n^3)$), but will work for a dozen fishes. For, reference, the [Matlab version](https://www.mathworks.com/help/vision/ref/assigndetectionstotracks.html) allows you to handle tracks that end and start across frames, i.e fishes that disappear and appear between frames. To use the Munkres algorithm, you need to define a cost matrix, with $N_{tracks}$ rows (first frame) and $N_{detections}$ columns (second frame). The Munkres algorithm will minimize the global assignment cost. ### Case 1: significant overlap of bounding boxes across frames (tracking problem): For the track $i$ in the first frame and detection $j$ in the next one, you can define the cost as $IoU(i, j)$ which is the [intersection over union](https://hasty.ai/docs/mp-wiki/metrics/iou-intersection-over-union#:%7E:text=To%20define%20the%20term%2C%20in,matches%20the%20ground%20truth%20data.) of the two bounding boxes for track $i$ and detection $j$. You could also consider using the distance between the centroid of the bounding boxes $d(i, j)$ or a combination of the two with a total cost such as $C(i, j) = IoU(i, j) + \alpha \times d(i, j)$ with $\alpha$ a parameter to determine to tune the respective weight of each cost in the full cost matrix. If you are only using bounding boxes the IoU is pretty easy to compute. ### Case 2: no significant overlap of bounding boxes across frames (detection problem): In that case, you cannot rely on positional information. But hopefully, the fish shapes remain largely unchanged. So you can build descriptors/features, for instance: - bounding box area: measure the number of pixels in the bounding box (this assumes the fish also didn't change orientation drastically, as fishes are pretty flat so the area from the side will be very different from the area from the front. You could consider using the longest side of the bounding box to mitigate this - color composition: create a binned RGB histogram from all the pixels in the fish (ideally you would have access to a finer segmentation than just a bounding box to make it less sensitive to the background color) You could also use feature descriptors such as SIFT, AKAZE, etc... But it all comes down to the same two steps: - find a good way to compare any pair of objects across frames - make an optimal decision about how to match them across frames and how to decide which are missing The second part will always be a linear sum assignment problem. So the only thing now is that the scipy version doesn't offer the option to specify the `unassignedTrackCost` or `unassignedDetectionCost` like the Matlab version does. And this is actually what will allow you to handle fishes appearing or disappearing the way the Matlab version does. So you will need to modify it. Looking at the picture below, you now have the `costMatrix` and you need to build the bigger matrix to be able to handle the cases when fish appear or disappear. [](https://i.stack.imgur.com/iRmhZ.png) Once you have managed to create the full cost matrix you can solve it using `linear_sum_assignment` and then find the tracks (resp. detections) that were assigned to dummy detections (resp. tracks). 
## Implementation ### Getting the Cost matrix (distance only) ``` import pandas as pd import numpy as np import matplotlib.pyplot as plt import seaborn as sns from scipy.spatial.distance import cdist from scipy.optimize import linear_sum_assignment def transform(df): df["centroid_x"] = (df["xmax"] + df["xmin"]) / 2 df["centroid_y"] = (df["ymax"] + df["ymin"]) / 2 return df df0 = pd.read_csv("first_frame.csv") df1 = pd.read_csv("second_frame.csv") df0 = transform(df0) df1 = transform(df1) distance_cost_matrix = cdist(df0[["centroid_x", "centroid_y"]], df1[["centroid_x", "centroid_y"]]) cost_mat = np.log(distance_cost_matrix) #np.log(np.multiply(area_cost_matrix, distance_cost_matrix)) ``` ### Modified Munkres algorithm ``` def pseudo_inf(cost_mat, inf_func): pseudo_inf_val = inf_func(cost_mat[cost_mat != np.inf]) pseudo_cost_mat = cost_mat.copy() pseudo_cost_mat[pseudo_cost_mat == np.inf] = pseudo_inf_val return pseudo_cost_mat, pseudo_inf_val def get_costs(cost_mat, row_ind, col_ind): costs = [cost_mat[i, j] for i, j in zip(row_ind, col_ind)] return costs def assign_detections_to_tracks( cost_mat, cost_of_non_assignment=None ): # in case there are infinite value, replace them by some pseudo infinite # values inf_func = lambda x: np.max(x) * 2 pseudo_cost_mat, pseudo_inf_val = pseudo_inf(cost_mat, inf_func) assigned_rows = [] unassigned_rows = [] assigned_cols = [] unassigned_cols = [] full_cost_mat = None # basic case, handled by linear_sum_assignment directly if cost_of_non_assignment is None: assigned_rows, assigned_cols = linear_sum_assignment(pseudo_cost_mat) assignment_costs = get_costs(cost_mat, assigned_rows, assigned_cols) # if one cost of non assignment is provided, use it else: # build the pseudo-array top_right_corner = np.full((cost_mat.shape[0], cost_mat.shape[0]), pseudo_inf_val) np.fill_diagonal(top_right_corner, cost_of_non_assignment) bottom_left_corner = np.full((cost_mat.shape[1], cost_mat.shape[1]), pseudo_inf_val) np.fill_diagonal(bottom_left_corner, cost_of_non_assignment) top = np.concatenate((cost_mat, top_right_corner), axis=1) zero_corner = np.full(cost_mat.T.shape, 0) # zero_corner = np.full(cost.shape,cost_of_non_assignment) bottom = np.concatenate((bottom_left_corner, zero_corner), axis=1) full_cost_mat = np.concatenate((top, bottom), axis=0) # apply linear assignment to pseudo array row_idxs, col_idxs = linear_sum_assignment(full_cost_mat) # get costs for row_idx, col_idx in zip(row_idxs, col_idxs): if row_idx < cost_mat.shape[0] and col_idx < cost_mat.shape[1]: assigned_rows.append(row_idx) assigned_cols.append(col_idx) elif row_idx < cost_mat.shape[0] and col_idx >= cost_mat.shape[1]: unassigned_rows.append(row_idx) elif col_idx < cost_mat.shape[1] and row_idx >= cost_mat.shape[0]: unassigned_cols.append(col_idx) # full_costs = get_costs(full_cost_mat, row_idxs, col_idxs) assignment_costs = get_costs( full_cost_mat, assigned_rows, assigned_cols) return assigned_rows, assigned_cols, unassigned_rows, unassigned_cols, full_cost_mat, assignment_costs ``` ### Peforming the assignment: `cost_of_non_assignment` was tuned looking at a histogram of cost_mat.ravel() ``` ( assigned_rows, assigned_cols, unassigned_rows, unassigned_cols, full_costs, assignment_cost ) = assign_detections_to_tracks(cost_mat, cost_of_non_assignment=10) unassigned_rows, unassigned_cols ``` ### Results ``` df0.iloc[unassigned_rows, :] ``` ``` name xmin ymin xmax ymax centroid_x centroid_y 0 big fish 1080 484 1236 597 1158.0 540.5 1 small fish 684 505 732 535 708.0 520.0 2 small fish 743 598 
792 642 767.5 620.0 ``` it works!
Combine Pandas DataFrame Rows Based on Matching Data and Boolean
I believe what you need is [agg](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.agg.html) from pandas. You can pass a dictionary of the different aggregations you need for each column: ``` import pandas as pd df = pd.DataFrame({'year':['2017','2018','2019','2019'], 'ISO Week':[1,2,3,3], 'Price':[5,10,15,20], 'quantity':[1,2,3,4], 'organic':[True, False, True, True]}) ISO Week Price organic quantity year 0 1 5 True 1 2017 1 2 10 False 2 2018 2 3 15 True 3 2019 #<------ combine 3 3 20 True 4 2019 #<------ combine df.groupby(['year','ISO Week','organic'], as_index=False).agg({'Price':'mean', 'quantity':'sum'}) year ISO Week organic Price quantity 0 2017 1 True 5.0 1 1 2018 2 False 10.0 2 2 2019 3 True 17.5 7 ```
118707
1
118731
null
0
478
I'm attempting to port some old reports from Pandas to Polars. I am happy with the data, and am now attempting to plot a few time series. This is working, but I'm finding that matplotlib just doesn't seem to want to cooperate with Polars' `pl.Datetime` types. When initially populating the dataframe, I am doing this to parse the timestamps: ``` # using floats so we don't quantize when doing mean() aggregations later df = df.with_columns( [ pl.col("AllocCPUS").cast(pl.Float64), pl.col("AllocNodes").cast(pl.Float64), pl.col("Start").str.strptime(pl.Datetime, fmt="%FT%T"), pl.col("End").str.strptime(pl.Datetime, fmt="%FT%T"), ] ) ``` This seems to work correctly. Skipping forward over a bunch of (what should be irrelevant) logic, we come to trying to generate a plot. Here's what one of the dataframes looks like just before calling matplotlib: ``` shape: (5, 2) ┌─────────────────────┬───────────┐ │ Start ┆ AllocCPUS │ │ --- ┆ --- │ │ datetime[μs] ┆ f64 │ ╞═════════════════════╪═══════════╡ │ 2022-01-26 00:00:00 ┆ 1.0 │ │ 2022-01-29 00:00:00 ┆ 1.0 │ │ 2022-01-31 00:00:00 ┆ 4.0 │ │ 2022-02-01 00:00:00 ┆ 9.0 │ │ 2022-02-02 00:00:00 ┆ 34.0 │ └─────────────────────┴───────────┘ ``` I attempt to plot this, like so: ``` ## relevant imports from earlier in the project # #import polars as pl #import matplotlib #import matplotlib.pyplot as plt #%matplotlib inline fig = plt.figure(figsize=(12,8), facecolor='white') ax_cpu = fig.add_subplot(111) ax_ram = ax_cpu.twinx() ax_cpu.set_ylim(ymax=cores_max) ax_ram.set_ylim(ymax=memory_max) ax_cpu.set_ylabel('Cores', fontsize=14, color='purple') ax_ram.set_ylabel('RAM (GB)', fontsize=14, color='orange') ax_cpu.plot(binned_df_cpu_all, color='purple', alpha=0.8) ax_ram.plot(binned_df_ram_all, color='orange', alpha=0.8) ax_cpu.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%Y-%m')) ax_ram.xaxis.set_major_formatter(matplotlib.dates.DateFormatter('%Y-%m')) plt.title('Total Cores/RAM Allocation\n{start} through {end}'.format( start=datestamp_min.date(), end=datestamp_max.date())) ``` This prints the graph as I expect to see, with a correct title (note: dates look correct there!), Y axis ranges, etc. However, across the X axis the dates are formatted "1970-01" and so on. If I drop the two `set_major_formatter()` calls, the dates are simply integers starting from 0, one per day. [](https://i.stack.imgur.com/AtBn8.png) I've been trying to find an explanation or workaround, but unfortunately all of my searching is only turning up people asking how to initially parse strings into dates in Polars - something I'm past, here. Alternatively, I can occasionally find vague references to a problem with matplotlib and pandas where they had differences between epochs, however I am not using pandas here. Additionally, even if I cast the dtype to a python `datetime.datetime`, or even cast the whole polars dataframe to either a pandas dataframe or a numpy 2d array, the behavior does not change (which I thought would rule out an interoperability issue). I'm at a complete loss as to what is going wrong. I can't imagine I'm running into a bug, here, and I must be doing something wrong? Just in case, however, I want to note that I a have `polars==0.16.7` and `matplotlib==3.7.0`. I am running this within JupyterLab at this time.
pl.datetime plots as days since epoch or 1970, if formatted - polars and matplotlib
CC BY-SA 4.0
null
2023-02-22T00:00:43.073
2023-02-22T22:18:06.373
null
null
146055
[ "python", "matplotlib", "jupyter", "python-polars" ]
I managed to solve this myself today. I don't understand why, but explicitly passing the datetime series and value series to matplotlib, instead of the dataframe (or numpy array) directly, does the job. ``` ## relevant imports from earlier in the project # #import polars as pl #import matplotlib #import matplotlib.pyplot as plt #%matplotlib inline fig = plt.figure(figsize=(12,8), facecolor='white') ax_cpu = fig.add_subplot(111) ax_ram = ax_cpu.twinx() ax_cpu.set_ylim(ymax=cores_max) ax_ram.set_ylim(ymax=memory_max) ax_cpu.set_ylabel('Cores', fontsize=14, color='purple') ax_ram.set_ylabel('RAM (GB)', fontsize=14, color='orange') cpu_axes_time = binned_df_cpu_all.get_column("Start") cpu_axes_value = binned_df_cpu_all.get_column("AllocCPUS") ram_axes_time = binned_df_ram_all.get_column("Start") ram_axes_value = binned_df_ram_all.get_column("ReqMem") ax_cpu.plot(cpu_axes_time, cpu_axes_value, color='purple', alpha=0.8) ax_ram.plot(ram_axes_time, ram_axes_value, color='orange', alpha=0.8) plt.title('Total Cores/RAM Allocation\n{start} through {end}'.format( start=datestamp_min.date(), end=datestamp_max.date())) # optional, rotates them a bit nicely fig.autofmt_xdate() ``` ```
polars: parsing a funky datetime format with optional fields
I am not too familiar with `polars` myself, but have you tried using the [str.extract method](https://pola-rs.github.io/polars/py-polars/html/reference/expressions/api/polars.Expr.str.extract.html)? You can use this to extract the days/hours/etc. from the input on the whole column without using `apply`. I tested it using the three examples you give and it gives the expected result, and based on 100.000 rows it is roughly four times faster than using `apply`, with the relative speed likely improving more as you are increasing the number of rows. ``` import polars as pl df = pl.DataFrame({ "elapsed": ["02:42:05", "6-02:42:05", "6-02:42:05.745"] }) ( df # extract days/hours/minutes/seconds/microseconds and cast to ints .with_columns([ pl.col("elapsed").str.extract(r"(\d+)-", 1).cast(pl.UInt32).alias("days"), pl.col("elapsed").str.extract(r"(\d{2}):\d{2}:\d{2}", 1).cast(pl.UInt32).alias("hours"), pl.col("elapsed").str.extract(r"\d{2}:(\d{2}):\d{2}", 1).cast(pl.UInt32).alias("minutes"), pl.col("elapsed").str.extract(r"\d{2}:\d{2}:(\d{2})", 1).cast(pl.UInt32).alias("seconds"), pl.col("elapsed").str.extract(r"\.(\d+)", 1).cast(pl.UInt32).alias("microseconds"), ]) # calculate the number of seconds elapsed .with_columns([ ( pl.col("seconds").fill_null(0) + (pl.col("microseconds").fill_null(0) / 1e6) + (pl.col("minutes").fill_null(0) * 60) + (pl.col("hours").fill_null(0) * 3600) + (pl.col("days").fill_null(0) * 86400) ).alias("result") ]) ) ``` Which gives the following dataframe: |elapsed |days |hours |minutes |seconds |microseconds |result | |-------|----|-----|-------|-------|------------|------| |02:42:05 |0 |2 |42 |5 |0 |9725 | |6-02:42:05 |6 |2 |42 |5 |0 |528125 | |6-02:42:05.745 |6 |2 |42 |5 |745 |528125 |
118767
1
118768
null
0
161
Why are recent dialog agents, such as ChatGPT, BlenderBot3, and Sparrow, based on the decoder architecture instead of the encoder-decoder architecture? I know the difference between the attention of the encoder and the decoder, but in terms of dialogue, isn't the attention of the encoder-decoder better?
What are the advantages of autoregressive over seq2seq?
CC BY-SA 4.0
null
2023-02-24T09:43:33.993
2023-02-24T10:40:34.293
null
null
146312
[ "deep-learning", "nlp", "transformer", "sequence-to-sequence", "gpt" ]
Encoder-decoder architectures are normally used when there is an input sequence and an output sequence, and the output sequence is generated autoregressively. The encoder processes the whole input sequence at once, while the decoder receives the representations computed by the encoder and generates the output sequence autoregressively. The paradigmatic example is machine translation. To train an encoder-decoder model, you need pairs of input and output sequences. Decoder-only architectures are normally used when you want to generate text autoregressively and there is no input (i.e. unconditional text generation), or when the input is the "prefix" of the output. The paradigmatic examples are language models. To train a decoder-only model, you need plain sequences. While you can train a chatbot with an encoder-decoder architecture where the input is the user question or prompt and the output is the answer, this poses some problems: - Difficulty pre-training on massive text datasets scraped from the internet: large language models rely on being trained on large amounts of text downloaded from the internet. Encoder-decoder architectures need to have input and output sequences, which makes it more difficult to just feed whatever text is found on the internet as training data. - Limited context: with encoder-decoder architectures, you need to define the input and output sequences to train the model. If you define them to be respectively the question/prompt from the user and the expected answer, then you are ignoring the previous questions and answers within the same conversation, which may contain key information needed to properly answer the following question. To properly use some hypothetical conversational training dataset while making the model use the previous conversation as context, you would need, for every answer used as output, to give as input the whole previous conversation up to that moment. This is not practical. With decoder-only architectures you just feed the whole conversation to the model and that's it. Apart from that, the computations of encoder-decoder attention are exactly the same as those of decoder-only attention, so no advantage there. In fact, it's been shown that [using decoder-only architectures offers the same quality as encoder-decoder architectures, for machine translation at least](https://papers.nips.cc/paper/2018/hash/4fb8a7a22a82c80f2c26fe6c1e0dcbb3-Abstract.html).
What are the benefits and tradeoffs of a 1D conv vs a multi-input seq2seq LSTM model?
You will have to address a varying sequence length, one way or another. You will likely have to perform some padding (e.g. using zeros to make all sequences equal to a max sequence length). Other approaches, used within NLP for example to make training more efficient, are to splice series together (sentences in NLP), using a clear break/splitter (full stops/periods in NLP). Using a convolutional network sort of makes sense to me in your situation, predicting a binary output. As the convolutions will be measuring correlations in the input space, I can imagine the success of the model will be highly dependent on the nature of the problem. For some intuition of conv nets used for sequences, have a look at [this great introductory article](http://www.wildml.com/2015/11/understanding-convolutional-neural-networks-for-nlp/). If your six sequences are inter-related, the convolutions can pick up on those cross-correlations. If they are not at all related, I would probably first try recurrent networks (RNNs), such as the LSTM you mentioned. Getting your head around the dimensions of a multi-variate LSTM can be daunting at first, but once you have addressed the issue of varying sequence length, it becomes a lot more manageable. I don't know what framework you are using, but as an example in Keras/Tensorflow, the dimensions for your problem would be something like: ``` (batch_size, sequence_length, num_sequences) ``` `batch_size` can be set to `None` to give flexibility around your available hardware. `sequence_length` is where you need to decide on a length to use/create via padding/trimming etc. `num_sequences = 6` :-) If helpful, check out these threads, where I explained that stuff in more detail. - Multi-dimentional and multivariate Time-Series forecast (RNN/LSTM) Keras - Keras LSTM with 1D time series
118771
1
118784
null
0
59
Explainable AI can be achieved through intrinsically explainable models, like logistic and linear regression, or post-hoc explanations, like [SHAP](https://shap.readthedocs.io/en/latest/). I want to use an intrinsically explainable model on tabular data for a classification task. However, logistic and linear regression have poor performance. Are there any other intrinsically explainable models that have higher performance?
Which intrinsically explainable model has the highest performance?
CC BY-SA 4.0
null
2023-02-24T14:49:07.200
2023-02-25T13:55:36.643
2023-02-25T06:19:03.577
138508
138508
[ "machine-learning", "linear-regression", "logistic-regression", "explainable-ai", "interpretation" ]
To add a bit more to @noe's answer: when you have a small number of features, explainable models can do a lot for you because they usually operate by making a prediction directly from the input features, without any intermediate features. When the data is structured and the number of features is small, there isn't much value in choosing a more complex model while losing explainability. With a large number of features that changes, and you run into two issues: models that make predictions directly from the input data no longer have simple explanatory value, and, absent feature engineering, it is probably best to use a more powerful model that can build a smaller number of secondary features to use in its decision. For example, a two-layer neural network is essentially two layers of logistic regression, so you can still analyze which secondary features are useful for the final layer. Then you can analyze what those features correlate with within your data set.
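A minimal sketch of the kind of inspection described above, using scikit-learn's `MLPClassifier` with a single hidden layer; the dataset is synthetic and the interpretation step is kept deliberately simple, so treat this as an illustration of the idea rather than a full interpretability workflow:

```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=1000, n_features=50, n_informative=10,
                           random_state=0)

# Two "layers of logistic regression": 8 learned secondary features, then a
# logistic output layer on top of them
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="logistic",
                    max_iter=1000, random_state=0).fit(X, y)

hidden_to_output = clf.coefs_[1].ravel()   # weight of each secondary feature in the final layer
input_to_hidden = clf.coefs_[0]            # how each input builds each secondary feature

# Rank the secondary features by the magnitude of their weight in the final layer
order = np.argsort(-np.abs(hidden_to_output))
for h in order[:3]:
    top_inputs = np.argsort(-np.abs(input_to_hidden[:, h]))[:5]
    print(f"hidden unit {h}: output weight {hidden_to_output[h]:+.2f}, "
          f"strongest inputs {top_inputs.tolist()}")
```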
Which, if any, machine learning algorithms are accepted as being a good tradeoff between explainability and prediction?
> Is there any literature enumerating the characteristics of algorithms which allow them to be explainable? The only literature I am aware of is the recent [paper](https://arxiv.org/pdf/1602.04938v1.pdf) by Ribeiro, Singh and Guestrin. They first define explainability of a single prediction: > By “explaining a prediction”, we mean presenting textual or visual artifacts that provide qualitative understanding of the relationship between the instance’s components (e.g. words in text, patches in an image) and the model’s prediction. The authors further elaborate on what this means for more concrete examples, and then use this notion to define the explainability of a model. Their objective is to try and, so to speak, add explainability artificially to otherwise opaque models, rather than comparing the explainability of existing methods. The paper may be helpful anyway, as it tries to introduce a more precise terminology around the notion of "explainability". > Are there machine learning models commonly accepted as representing a good tradeoff between the two? I agree with @Winter that elastic-net for (not only logistic) regression may be seen as an example of a good compromise between prediction accuracy and explainability. For a different kind of application domain (time series), another class of methods also provides a good compromise: Bayesian Structural Time Series Modelling. It inherits explainability from classical structural time series modelling and some flexibility from the Bayesian approach. Similar to logistic regression, the explainability is helped by the regression equations used for the modelling. See [this paper](http://people.ischool.berkeley.edu/~hal/Papers/2013/pred-present-with-bsts.pdf) for a nice application in marketing and further references. Related to the Bayesian context just mentioned, you may also want to look at probabilistic graphical models. Their explainability doesn't rely on regression equations, but on graphical ways of modelling; see "Probabilistic Graphical Models: Principles and Techniques" by Koller and Friedman for a great overview. I'm not sure whether we can refer to the Bayesian methods above as a "generally accepted good trade-off", though. They may not be sufficiently well-known for that, especially compared to the elastic net example.
118779
1
118781
null
0
24
This may be a silly question, but suppose I have a deterministic process, for instance a function (in the mathematical sense) that happens to be computationally expensive to evaluate, and I decide to approximate it with logistic regression, an artificial neural network, or any kind of machine learning model: - Is there a number of (random and continuous) training instances that can ensure "almost" perfect accuracy for my learning model? If yes, how do I compute this number? - In general, can I expect at least "better" models if the process is deterministic? - Is overfitting a problem in this case? My intuition somehow says no, but I cannot prove it. Thanks in advance!
Machine learning / statistical model of a deterministic process: how large must my training set be to ensure almost perfect accuracy?
CC BY-SA 4.0
null
2023-02-24T19:50:30.013
2023-02-25T10:19:03.620
2023-02-25T10:19:03.620
111867
132612
[ "machine-learning", "neural-network", "machine-learning-model", "logistic-regression" ]
- No, there is no threshold for the amount of training data that can ensure ~100% accuracy for arbitrary models and data. Not all models have the same capacity (e.g. linear regression can represent linear functions on the inputs) and not all deterministic process have the same complexity. Also, not every sample of a process input and output values reflect all the possible cases that can be. This entirely depends on the model and the data. - No, a deterministic model does not necessarily imply that models achieve high accuracy. Like the first point, whether models are able to achieve better accuracy or not depends entirely on the model we choose and the data. - Yes, overfitting is a problem in your case. Achieving high accuracy on the training dataset does not equate to achieving high accuracy on the whole input data domain. If at inference time the model sees data that it has not seen during training, overfitting is always a risk. Also, take into account that there are multiple sources of stochasticity. A deterministic process lacks [aleatoric uncertainty](https://en.wikipedia.org/wiki/Uncertainty_quantification#Aleatoric_and_epistemic) but may have epistemic uncertainty, e.g. stochastic measurement errors, lack of data in some areas of the process' input/output domains. All these factors don't necessarily mean that using a machine learning model to represent a deterministic process is a bad idea. Just that machine learning is not a silver bullet for representing any process, either deterministic or not. In this situation, the model's performance should be evaluated with the same rigour as with any other stochastic process. As a side note, if you know a closed-form expression for your deterministic process, maybe you can go for a mathematical approximation like the [Taylor expansion](https://en.wikipedia.org/wiki/Taylor_series), the [Laurent series](https://en.wikipedia.org/wiki/Laurent_series), the [Chebyshev polynomials](http://en.wikipedia.org/wiki/Chebyshev_polynomials), or the [Padé approximant](https://en.wikipedia.org/wiki/Pad%C3%A9_approximant).
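If you do have such a closed-form expression, a cheap polynomial fit is often enough before reaching for a learned model. A small sketch of the Chebyshev option (the target function, domain and degree below are made up for illustration):
```
# Approximating an expensive deterministic function with a Chebyshev polynomial.
import numpy as np
from numpy.polynomial import chebyshev as C

def expensive_f(x):                 # stand-in for the costly deterministic process
    return np.exp(-x) * np.sin(5 * x)

x = np.linspace(0.0, 2.0, 200)      # domain of interest (assumption)
coeffs = C.chebfit(x, expensive_f(x), deg=15)   # fit a degree-15 Chebyshev series
approx = C.chebval(x, coeffs)                   # cheap evaluation of the approximation

print("max abs error:", np.max(np.abs(approx - expensive_f(x))))
```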
What is the approx minimum size of dataset required to build 90% correct model?
There is no theory or general case that sets the size of dataset required to reach any target accuracy. Everything is dependent on the underlying, and usually unknown, statistics of your problem. Here are some trivial examples to illustrate this. Say we want to predict the sex of a species of frog: - It turns out the skin colour is a strong predictor for the species Rana determistica, where all males are yellow and all females blue. The minimal dataset to get 100% accuracy on the prediction task is data for two frogs, one of each sex. - It turns out the skin colour is uncorrelated for the species Rana stochastica, where 50% of each sex are yellow and the other 50% are blue. There is no size of dataset of frog colour labelled with sex that will get you better than 50% accuracy on the task. - However, Rana stochastica does have eye colouration with an almost deterministic relationship to the sex of the creature. It turns out that 95% of males have orange eyes and 95% of females have green eyes (with only those two eye colours possible). Those are predictive variables that are strong enough that you can get 95% accuracy if you can discover the relationship. Some related theory worth reading to do with limitations of statistical models is [Bayes error rate](https://en.wikipedia.org/wiki/Bayes_error_rate). In the last case, simply predicting "male" for orange eyes and "female" for green eyes will give you 95% accuracy. So the question is what size of dataset would guarantee a model would both make those predictions, plus give you the confidence that you had beaten your 90% accuracy goal? It can be figured out, assuming you collect labelled sample data at random - note that there is a good chance that models trained on very little data would get 95% accuracy, but that it could take a lot more data for a test set in order for you to be confident that you really had a good enough result. The maths to demonstrate even this simple case is long-winded and complex (if I were to outline the theory being used), and does not actually help you, so I am not going to try and produce it here. Plus of course I chose 95%, but if the eye colour relationship was only 85% predictive of sex, then you would never achieve 90% accuracy. With a real project you have many more variables and at best only a rough idea on how they might correlate with the target variable or each other in advance, so you cannot do the calculation. I think instead it is more productive to look at your reason for wanting a theory to choose your dataset size: > Such a theory would also help me to gather data until I reach that point and then do some productive work. Sadly you cannot do this theoretically. However, you can do a few useful things: ## Plot a learning curve against data set size I'd recommend this as your approach here. The driving question behind your question is: Will collecting more data improve my existing model? Using the same cross-validation set each time, train your model with increasing amounts of data from your training set. Plot the cross-validated accuracy against number of training samples, up to the whole training set that you have so far. - If the graph has an upward slope all the way to the end, then this implies that collecting more data will improve accuracy for your current model. - If the graph is nearly flat, with accuracy not improving towards the end, then it is unlikely that collecting more data will help you. This does not tell you how much more data you need.
An optimistic interpretation could take the trend line over the last section of the graph and project it to where it crosses your target accuracy. However, normally the returns for more data will become less and less. The training curve will asymptotically approach some maximum possible accuracy for the given dataset and model. What plotting the curve using the data you have does is allow you to see where you are on this curve - perhaps you are still in the early parts of it, and then adding more data will be a good investment of your time. ## Reassess your features and model If your learning curve is not promising, then you need to look in more detail. Here are some questions you can ask yourself, and maybe test, to try and progress. - Your features: Are there more or different features that you could collect, instead of focussing on collecting more of the same? Would some feature engineering help - e.g. is there any theory or domain expert knowledge from the problem that you can turn into a formula and express as a new feature? - Your model: Are there any hyper-parameters you can tune to either get more out of the existing data, or improve the learning curve so that it is worth going back to get more data? Would an entirely different model help? Deep learning models are often top performers only when there is a lot of data, so you might consider switching to a deep neural network and plotting a learning curve for it. Even if the accuracy on your current dataset is worse, if the learning curve shows a different model type might have the capacity to go further, it might be worth it. Do note, however, that you could just end up with the same maximum accuracy as before, after a lot of hope and effort. Unfortunately, this is hard to predict, and you will need to make careful decisions about how much of your time is worth sinking into solving the original problem. ## Check confidence limits to choose a minimum test dataset size Caveat: This is a guide to thinking about data set sizes, especially test set sizes. I have never known anyone to use this to actually select some ideal data set size. Usually it happens the other way around: you have some size of test data set made available to you, and you want to understand what that tells you about your accuracy measurements. You could determine a test set size that gives you reasonable confidence bounds on accuracy. That will mean, when you measure your 90% accuracy (or better), that you can be reasonably certain that the true accuracy is close to it. You can do this [using confidence intervals on the accuracy measure](https://machinelearningmastery.com/confidence-intervals-for-machine-learning/). As an example from the above link, you could measure a 92% accuracy on your test set, and want to know whether you are confident in that result. Let's say you want to be 95% certain that you really do have accuracy > 0.9... how should you choose N, the size of your test set?
You know that you are 0.02 over the desired accuracy by measurement, and you want to know if this is enough that you can claim to be certain that you have 90% accuracy: $$0.02 > 2 \sqrt{\frac{0.92 \times 0.08}{N}}$$ Therefore you need $$N > \frac{0.92 \times 0.08}{0.0001}$$ $$N > 736$$ This is the minimum test data set size that would give you confidence that you have met your target of 90% accuracy, provided that - you have actually measured 92% or higher accuracy - you have selected test data at random from the target population - you have not used the test data set to select a model (by e.g. doing this test multiple times until you got a good result) Typically you don't work backwards like this to figure out N for a specific accuracy, but it is useful to understand the limits of your testing. You should generally consider how the size of the test dataset limits the accuracy by which you can confirm your model. The formula above also has limitations when measuring close to 100% accuracy, and this is because the assumptions behind it fail - you would need to switch to more complex methods, perhaps a Bayesian approach, to get a better feel for what such a result was telling you, especially if the test sample size was small. After you have established a minimum test dataset size, you could use that to guide data collection. For instance, the typical train/cv/test dataset might be 60/20/20, so with your result above you could choose an overall dataset size of 5 times 736; let's round up and call it 4000. In general this sets a lower bound on the size of dataset, as it says nothing about how hard it would be to learn a specific accuracy.
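A small helper reproducing that arithmetic (a sketch of the back-of-the-envelope bound above, not a general-purpose power calculation):
```
# N such that measured_acc - z * sqrt(p*(1-p)/N) > target_acc
import math

def min_test_size(measured_acc, target_acc, z=2.0):
    margin = measured_acc - target_acc
    if margin <= 0:
        raise ValueError("measured accuracy must exceed the target")
    return math.ceil(z**2 * measured_acc * (1 - measured_acc) / margin**2)

print(min_test_size(0.92, 0.90))   # -> 736, matching the worked example
```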
118804
1
118818
null
1
37
When reading the Tensorflow tutorial for Word Embeddings, I found two notes that confuse me: > Note: Experimentally, you may be able to produce more interpretable embeddings by using a simpler model. Try deleting the Dense(16) layer, retraining the model, and visualizing the embeddings again. And also: > Note: Typically, a much larger dataset is needed to train more interpretable word embeddings. This tutorial uses a small IMDb dataset for the purpose of demonstration. I don't know exactly what it means by "more interpretable" in those two notes, is it related to the result displayed by the embedding projector? And also why the interpretability will increase when reducing model's complexity? Many thanks!
Understand the interpretability of word embeddings
CC BY-SA 4.0
null
2023-02-25T19:45:50.193
2023-02-26T15:30:04.503
2023-02-25T19:46:26.397
146377
146377
[ "nlp", "word-embeddings" ]
"Interpretable" is not very precise in this context. In the case of deleting a dense layer, the embedding layer is more likely can learn the nontask dependent co-occurrences of words in the dataset. In the second case of adding more data, the embedding layer would learn more signals because there is an increased opportunity to "average out" the noise. In other words, word embeddings are more generalizable by reducing the complexity of the architecture and training on more data.
Word2Vec: Why do some dimensions of an embedding have an interpretation, and why does addition/subtraction of embedding vectors work?
TL;DR: A theoretical/mathematical explanation for why word2vec/GloVe embeddings of analogies appear to form parallelograms, and so can be "solved" by adding/subtracting embeddings, is given [here](http://proceedings.mlr.press/v97/allen19a/allen19a.pdf), as summarised in [this blog](https://carl-allen.github.io/nlp/2019/07/01/explaining-analogies-explained.html). More explanation of w2v is given [here](https://proceedings.neurips.cc/paper/2019/file/23755432da68528f115c9633c0d7834f-Paper.pdf). --- The dimensions of word2vec (or GloVe, etc) word embeddings are not directly interpretable, but capture correlations in word statistics, which reflect meaningful semantics (e.g. similarity), so some dimensions may happen to be interpretable. The embedding of a word is effectively a low-rank projection of the co-occurrence statistics of that word with all other words (like what you would get from PCA/SVD - but that would require an unweighted least square loss function). That projection in word2vec is probability weighted and non-linear, making it difficult to interpret what any dimension "means". Also, if the embedding matrix $W$ (all embeddings $w_i$ stacked together) is rotated by any rotation matrix $R$, and $R^{-1}$ applied to the other embedding matrix $C$, the transformed embeddings perform identically. So there isn't a unique solution, but an equivalence class of solutions, meaning the values in embeddings aren't necessarily meaningful in their own right, only when considered relative to each other. The theoretical explanation of analogies is too long to summarise here, but it boils down to word embeddings capturing log probabilities, so adding embeddings is equivalent to multiplying probabilities and so is meaningful. I gather it's bad form to include link explanations, but the two linked research papers should last in perpetuity.
118811
1
118816
null
0
27
How can I compute the FPR for sentences with no labels? Is there any relation between the FPR and the likelihood?
How can I calculate FPR?
CC BY-SA 4.0
null
2023-02-26T09:34:27.900
2023-02-26T15:25:47.713
null
null
141023
[ "machine-learning" ]
By definition, you must have labels to compute the false positive rate (FPR). If you don't have labels, you can't possibly compute it, because you don't know if the positive or negative predictions of your model are true or false. $$ FPR=\frac{\mathrm{FP}}{\mathrm{FP} + \mathrm{TN}} $$ (FP = false positives, TN = true negatives) The likelihood represents the probability of a model, given some parameters, to match the data. When training a model, you usually find the parameters that maximize the likelihood of the data or, equivalently, minimize the negative log-likelihood; this method is called maximum likelihood estimation. To compute the likelihood, you also need labels. Its mathematical definition depends on the specific model. Intuitively, the higher the likelihood, the lower the false positive rate.
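As a small illustration (toy labels and predictions, not from your data), once labels are available the FPR can be read straight off a confusion matrix:
```
from sklearn.metrics import confusion_matrix

y_true = [0, 0, 1, 1, 0, 1, 0, 0]
y_pred = [0, 1, 1, 0, 0, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
fpr = fp / (fp + tn)
print(fpr)  # 2 / (2 + 3) = 0.4
```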
How to compute F1 score?
First you need to learn about Logistic Regression; it is an algorithm that will assign weights to different features given some training data. Read the wiki intro, it is quite helpful: basically the Betas there are the same as the Ws in the paper. The formula you have is correct, and those values do seem off. It also depends on the number of significant figures you have; perhaps they are making their calculations with more than the ones they are reporting. But honestly, you can't understand much of the paper unless you understand LR.
118847
1
118849
null
0
22
I have a classification task that I'm currently getting really low accuracy metrics on (my highest accuracy score is about 20%). So far I've run 5 models: quadratic disc analysis, logistic regression, knn, random forest, and naive bayes (Gaussian but will try categorical soon). I've used GridSearchCV (10-folds) for all. My dataset has ~1500 data points with no more than 9 features. My only dummy variable covered gender, and I've already left one option out to avoid the dummy trap. My other explanatory variable is age group, which I've encoded to preserve order. Finally, my dependent variable is actually a vector (using multioutput from sklearn) of binary target variables. For more color on the dependent variable: the original feature was a question that allowed for 6 response choices but respondents could elect for multiple of them. I've essentially turned them into dummy variables (did not drop one of them) and turned them into a vector to predict using sklearn's multioutput tools. Any idea where I can improve the model?
What else can I do to help my model my classification task?
CC BY-SA 4.0
null
2023-02-27T17:12:23.407
2023-02-27T17:33:12.643
null
null
126297
[ "scikit-learn", "multiclass-classification" ]
There are several steps you can take to improve the performance of your classification model: - Data preprocessing: Ensure that the data is cleaned and preprocessed properly. Check for missing values and outliers, and ensure that the data is normalized or standardized if necessary (look at the preprocessing module). - Feature engineering: Try to create new features that can better represent the underlying patterns in the data (look at the decomposition module). - Model selection: Consider using more advanced machine learning algorithms such as gradient boosting (xgboost), Random Forest, or NN. Complex algorithms can often capture more complex relationships in the data. - Hyperparameter tuning: Optimize the hyperparameters of your models using cross-validation techniques such as GridSearchCV and RandomizedSearchCV (look at the model selection module). - Address class imbalance: If your data is imbalanced, you can try techniques such as oversampling or undersampling to balance the classes (understand also how to stratify your data). - Evaluate performance metrics: Look beyond accuracy and evaluate other performance metrics such as precision, recall and F1-score to get a better understanding of how well your model is performing. Finally, it's important to keep in mind that there may be limitations to what can be achieved with the available data. If the underlying patterns are inherently complex or noisy, it may be difficult to achieve high levels of accuracy. In such cases, it's important to carefully evaluate the performance metrics and determine what level of performance is acceptable for your use case (compare your results with a dumb (random) classifier; a minimal sketch of some of these points is shown below).
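An illustrative sketch of a few of those points (baseline comparison, stratified split, hyperparameter search). The toy data, parameter grid and scoring choice are assumptions; it is shown for a single-label target, so for the multi-output setup in the question you would wrap the estimator (e.g. in `MultiOutputClassifier`) and adapt the stratification:
```
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.dummy import DummyClassifier
from sklearn.metrics import f1_score

# toy stand-in for your ~1500 x 9 encoded dataset
X, y = make_classification(n_samples=1500, n_features=9, n_informative=5,
                           n_classes=4, random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)   # stratify keeps class balance

baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
    scoring="f1_macro", cv=5)
search.fit(X_train, y_train)

print("baseline F1:", f1_score(y_test, baseline.predict(X_test), average="macro"))
print("tuned RF F1:", f1_score(y_test, search.predict(X_test), average="macro"))
```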
Looking for other opinions on approach to classification problem
You should use text classification techniques. The most basic one is multinomial naive Bayes classifier with tf-idf features. for this method, take a look at this: [https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html](https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html) If you don’t get enough accuracy (or maybe precision, recall or f-score), you could test more complex techniques e.g. using deep LSTM networks with word embedding. For this method, take a look at this: [https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/](https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/)
118855
1
118866
null
1
153
So in most blogs or books touching upon the topic of encoder-decoder architectures the authors usually say that the last hidden state(s) of the encoder is passed as input to the decoder and the encoder output is discarded. They skim over that topic only dropping that sentence about encoder outputs being discarded and that's it. It makes me confused as hell and even more so, because I'm also reading that in transformer models the encoder output is actually fed to the decoder, but since that's the only thing coming out of an non-rnn encoder, no surprise here. How I understand it all is that in transformer architectures the encoder returns "enriched features". If so, then in classical E-D architecture encoder returns just features. Why then is the output of the encoder model ignored in the non-transformer architecture? What does it represent?
What does the output of an encoder in encoder-decoder model represent?
CC BY-SA 4.0
null
2023-02-27T19:40:59.920
2023-02-28T00:11:39.670
null
null
139235
[ "deep-learning", "transformer", "encoder" ]
### Encoder-decoder with RNNs With RNNs, you can either use the hidden state of the encoder's last time step (i.e. `return_sequences=False` in Keras) or use the outputs/hidden states of all the time steps (i.e. `return_sequences=True` in Keras): - If you are just using the last one, it will be used as the initial hidden state of the decoder. With this approach, you are training the model to cram all the information of the source sequence into a single vector; this usually results in degraded result quality. - If you are using all the encoder states, then you need to combine them with an attention mechanism, like Bahdanau attention or Luong attention (see their differences). With this approach, you have N vectors to represent the source sequence and it gets better results than with just the last hidden state, but it requires keeping more things in memory. The output at every time step is a combination of the information of the token at that position and the previous ones (because RNNs process data sequentially). ### Encoder-decoder with Transformers The encoder output is always the outputs of the last self-attention block at every time step. These vectors are received by each decoder block's cross-attention (encoder-decoder attention) layer and combined with the target-side information. The information of all tokens of the encoder is combined at every time step through all the self-attention layers, so we don't obtain a representation of the original tokens, but a combination of them. [](https://i.stack.imgur.com/4aSqT.png)
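A bare-bones Keras sketch of the first RNN option (sizes, vocabulary and input dimensions are placeholders, not from the original answer): only the encoder's final hidden and cell states are handed to the decoder, and the per-step encoder outputs are discarded.
```
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

src = Input(shape=(None, 128))   # encoder inputs (e.g. embedded source tokens)
tgt = Input(shape=(None, 128))   # decoder inputs (embedded target tokens)

# enc_out (the last-step output) is ignored; only the states are passed on
enc_out, state_h, state_c = LSTM(256, return_state=True)(src)
dec_out = LSTM(256, return_sequences=True)(tgt, initial_state=[state_h, state_c])
logits = Dense(10000, activation="softmax")(dec_out)   # made-up vocabulary size

model = Model([src, tgt], logits)
# For the attention variant you would set return_sequences=True on the encoder
# and combine all its per-step outputs with an attention layer instead.
```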
How does an encoder-decoder network work?
It probably won't. The whole point of the training was to encode cat images and thus the network has tried to learn what information is the most necessary to keep to ensure a low reconstruction error (i.e. what separates one cat from another) and what information can it throw away (i.e. what characteristics appear in all cat images and can be discarded). That being said, a dog image would produce a fairly decent reconstruction because most features are shared between both animals. If you try, however, to reconstruct something completely different (e.g. a car) then it would probably fail.
118878
1
118881
null
0
42
I am often confused about LSTMs with more than one layer. Imagine I have two LSTM layers with 3 cells in each layer. What exactly is the input to the second LSTM layer?
What is exactly the input to a second lstm layer?
CC BY-SA 4.0
null
2023-02-28T10:23:37.323
2023-02-28T11:05:41.933
null
null
145940
[ "machine-learning", "time-series", "lstm", "sequence-to-sequence", "time" ]
The input to the second LSTM layer is the output at each time step of the first LSTM layer.
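A minimal Keras illustration (the sizes are arbitrary): the first LSTM must return its output at every time step so the second LSTM receives a sequence rather than a single vector.
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(3, return_sequences=True, input_shape=(20, 1)),  # outputs shape (batch, 20, 3)
    LSTM(3),                                              # consumes that sequence of 3-dim outputs
    Dense(1)
])
model.summary()
```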
What is the input of LSTM network?
The input of an LSTM is a sequence of vectors. In your case, each of these vectors represents a word encoded as a one-hot vector. One-hot encoding is a way to express a discrete element (e.g. a word) numerically. Each one-hot vector is a vector of length $d$, where $d$ is the total number of words we can represent, and where all positions in the vector are 0 except the position associated with the represented word, which contains a 1. The hidden state passed to the next LSTM cell is not the final binary prediction, but the dense numerical vectors we obtain before computing the binary prediction.
118900
1
118912
null
0
49
I'm trying to brighten and dim an image using OpenCV with two approaches. Approach 1: Used OpenCV's `add` and `subtract` functions to brighten and dim the image. Approach 2: Manually added/subtracted pixel values. Both produced different results, with the OpenCV approach producing better results than the manual one, as you can see in the output image below. Why is this happening? Code:
```
img = cv2.imread('New_Zealand_Coast.jpg',cv2.IMREAD_COLOR)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)

# Matrix for storing pixel alter values
mat = np.ones(img.shape, dtype='uint8') * 70

# Approach 1
img_brighter = cv2.add(img, mat)
img_dimmer = cv2.subtract(img, mat)

# Approach 2
img_brighter_manual = img + mat
img_dimmer_manual = img - mat

# Plot Approach 1
plt.figure(figsize=(20,6))
plt.subplot(131)
plt.imshow(img)
plt.title('Original 1')
plt.subplot(132)
plt.imshow(img_brighter)
plt.title('Brighter')
plt.subplot(133)
plt.imshow(img_dimmer)
plt.title('Dimmer')
plt.suptitle('Approach 1 : With CV Add/Subtract Function')

# Plot Approach 2
plt.figure(figsize=(20,6))
plt.subplot(131)
plt.imshow(img)
plt.title('Original 1')
plt.subplot(132)
plt.imshow(img_brighter_manual)
plt.title('Brighter')
plt.subplot(133)
plt.imshow(img_dimmer_manual)
plt.title('Dimmer')
plt.suptitle('Approach 2 : With manual Add/Subtract')
```
Output: [](https://i.stack.imgur.com/KgKOd.png) Original Image Array: [](https://i.stack.imgur.com/eMT45.png) Modified array from Approach 1: [](https://i.stack.imgur.com/eBO7D.png) Modified array from Approach 2: [](https://i.stack.imgur.com/c25wd.png) As you can see, there is some difference in the elements of the arrays produced by the two approaches. Here is the original image used in the code, in case you want to test with it. [](https://i.stack.imgur.com/oaecA.jpg)
OpenCV add/subtract functions produce different results from numpy array add/subtract
CC BY-SA 4.0
null
2023-03-01T09:02:24.083
2023-03-01T18:47:53.600
2023-03-01T09:11:23.070
145236
145236
[ "python", "numpy", "image-preprocessing", "opencv", "image" ]
The issue is caused by the fact that the resulting value of the manual addition is larger than the maximum value it can store. The data type of the array is `uint8`, which can hold a maximum value of 255, whereas the resulting value of the addition (for the one shown in your screenshot) is 188 + 70 = 258. This will cause an overflow where the value will wrap around and start from zero, resulting in a value of 258 - 256 (256 because the range of values it can store is 0-255) = 2. A way to solve this problem is by performing an extra check that checks if the resulting value is lower than the original value, and if it is, set the output value to 255:
```
# Approach 2
img_brighter_manual = img + mat

# extra check on output values
img_brighter_manual[img_brighter_manual < img] = 255
```
The same approach can be applied when darkening the image (i.e. subtracting values), except for the other way around.
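As an aside (not part of the accepted answer), another common sketch is to do the arithmetic in a wider integer type and clip back into range, which mimics the saturating behaviour of `cv2.add`/`cv2.subtract` for both directions; `img` and `mat` are the arrays from the question:
```
import numpy as np

# widen to int16 so 188 + 70 = 258 does not wrap, then clip to the valid 0-255 range
img_brighter_manual = np.clip(img.astype(np.int16) + mat, 0, 255).astype(np.uint8)
img_dimmer_manual   = np.clip(img.astype(np.int16) - mat, 0, 255).astype(np.uint8)
```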
Numpy failing in subtraction even after same dimensions of arrays
This is a problem that I have also run into before; right now your ytrain is a one-dimensional array (advisable to avoid). Check [this](https://stackoverflow.com/a/39694829/10733051) answer. Expanding (adding) an additional dimension when assigning ytrain should fix your problem:
```
x = np.array([1, 2])
x.shape
(2,)
y = np.expand_dims(x, axis=1)
y
array([[1],
       [2]])
y.shape
(2, 1)
```
118910
1
120027
null
2
19
Recently I managed to create a simple neural network visualization, to help understand how a neural network works at the signal level. I also wanted to arrange neurons by similarity, because I was expecting them to have noticeable areas of responsibility. See how it looks: [http://nn.3dev.io](http://nn.3dev.io) In order to see the distribution I've created two metrics: - Euclidean: calculates the distance in the output-weight space (10 dimensions) and repels neurons according to that distance, as well as attracts neurons which are close in that 10d space - Output dominance: attracts neurons having their maximum weight at a particular output. The problem is that I don't see any trend (or tendency) in the neuron distribution, which may be caused by: - there being no such trend or noticeable areas of neuron responsibility (at least in this case) - my metric being improper Guys, what do you think about it? Thanks, Regards
I'm looking for a good neuron similarity metric
CC BY-SA 4.0
null
2023-03-01T16:38:05.423
2023-03-07T17:47:29.743
2023-03-02T08:02:38.667
146537
146537
[ "neural-network", "visualization", "unsupervised-learning" ]
The animation is pretty! I think with a fully-connected network and a single hidden layer you would not expect to see any strong patterns of neuron responsibility emerge. But if you used two or three hidden layers, you might start to see some patterns in the 3rd layer. Additionally, if you use a high L1 regularization when training, then very small contributions will go to 0. This encourages the model to make each decision with fewer, more dedicated, neurons. So you might see more distinct patterns. If you train a CNN then the lower-levels will be discovering features, such as straight lines and curves. Higher-levels might combine these to discover loops, etc. You would then expect to see trends in the final linear layer. E.g. all the zeroes are reacting to discovering one large loop. The 6s and 9s are reacting to discovering small loops and a line. The 1s and 7s don't react to any of the loop features. Etc.
What are the performance measures in the neural networks field?
Typical predictive performance measures used to compare accuracy of ANN models are: - RMSE - (root mean squared error) measures the distance between estimated and actual outcomes. Other metrics that measure the same concept are MSE, MAE or MPE. - R square - (R^2 or coefficient of determination) measures the reduction of variance when using the model. When comparing two different ANN models for performance, metrics that take into account the complexity of the model may be used, such as AIC or BIC.
119934
1
119943
null
0
19
I am working on a survey of image retrieval datasets. I have found some, such as: - NUS-WIDE - Oxford5k - Oxford105k - Paris6k - MSCOCO I have been quite confused about the detection metrics and the metrics used with these datasets for image retrieval purposes. For example: - AP - Recall - Precision - Mean Average Precision My question is, how can I identify the metrics that these datasets use for image retrieval purposes?
Survey on image retrieval datasets
CC BY-SA 4.0
null
2023-03-03T14:45:51.537
2023-03-03T21:52:45.710
2023-03-03T19:03:52.333
75157
147563
[ "machine-learning", "dataset", "computer-vision", "information-retrieval" ]
You can check in [papers with code](https://paperswithcode.com/), in the ["image retrieval" category](https://paperswithcode.com/task/image-retrieval): [](https://i.stack.imgur.com/VMDJL.png) There, you will find multiple datasets and, for each one, the best-performing models according to certain metric, with links to the associated article and source code, e.g.: [](https://i.stack.imgur.com/LyEIP.png)
Classification of very similar images
I found the answer in the paper linked above. The authors use a CNN to solve the problem. I will post the code. [https://link.springer.com/article/10.1007/s00170-017-0882-0](https://link.springer.com/article/10.1007/s00170-017-0882-0)
119946
1
119959
null
0
33
Good morning everyone. I have the following data:
```
import pandas as pd

info = {
    'states': [-1, -1, -1, 1, 1, -1, 0, 1, 1, 1],
    'values': [34, 29, 28, 30, 35, 33, 33, 36, 40, 41]
}
df = pd.DataFrame(data=info)
print(df)
>>>
   states  values
0      -1      34
1      -1      29
2      -1      28
3       1      30
4       1      35
5      -1      33
6       0      33
7       1      36
8       1      40
9       1      41
```
I need to group the data using pandas (and/or higher-order functions); I already did the exercise using for loops. I need to group the data using the "states" column as a guide, but the grouping should not be over all the data: I only need to group the data that is neighboring, as follows. Initial DataFrame:
```
   states  values
0      -1      34 ┐
1      -1      29 │ Group this part (states = -1)
2      -1      28 ┘
3       1      30 ┐ Group this part (states = 1)
4       1      35 ┘
5      -1      33   'Group' this part (states = -1)
6       0      33   'Group' this part (states = 0)
7       1      36 ┐
8       1      40 │ Group this part (states = 1)
9       1      41 ┘
```
It would result in a DataFrame with a grouping by segments (from the "states" column) and, in another column, the sum of the data (from the "values" column). Expected DataFrame:
```
   states  values
0      -1      91   (values=34+29+28)
1       1      65   (values=30+35)
2      -1      33
3       0      33
4       1     117   (values=36+40+41)
```
You who are more versed in these issues, perhaps you can help me perform this operation. Thank you so much!
Group rows partially [Python] [Pandas]
CC BY-SA 4.0
null
2023-03-04T03:05:32.477
2023-03-04T18:16:02.460
2023-03-04T03:10:58.640
117522
117522
[ "python", "pandas", "python-3.x", "groupby" ]
You can do this by making use of `shift` to create different groups based on consecutive values of the `states` column, after which you can use `groupby` to add the values in the `values` column:
```
(
    df
    .assign(
        # create groups by checking if value is different from previous value
        group = lambda x: (x["states"] != x["states"].shift()).cumsum()
    )
    # group by the states and group columns
    .groupby(["states", "group"])
    # sum the values in the values column
    .agg({"values": "sum"})
    .reset_index()
    # select the columns needed in the final output
    .loc[:, ["group", "values"]]
)
```
How can I group by elements in a column in pandas?
If I understand what you need, I think it is this:
```
b = a.pivot_table(values='TOTAL_BALANCE_EUR', index=['NSFR_GROUP', 'BALANCE_GROUP'],
                  columns='GAP', aggfunc='sum')
b
```
It's easier for others to help you if you make the data available to others. Just make a tiny dataframe with 10 rows for instance. Also, you can make the code a bit easier to read by enclosing it in three backticks, or by using the code sample button when you write your post.
119947
1
119948
null
0
40
I want to try the `deepspeech` model. I have found only an English pre-trained model. Are there any other pre-trained, non-English models of `deepspeech`?
Are there any pre-trained non-English models of DeepSpeech?
CC BY-SA 4.0
null
2023-03-04T07:19:05.317
2023-03-04T09:31:31.173
null
null
93617
[ "deep-learning", "speech-to-text" ]
You can check [Coqui](https://github.com/coqui-ai/STT), a fork of Mozilla DeepSpeech created by former Mozilla DeepSpeech developers. It is well maintained and [there are models for a lot of languages](https://coqui.ai/models).
Is applying pre-trained model on a different type of corpus called transfer learning?
The basic concept of transfer learning is: Storing knowledge gained from solving one problem and applying it to different but related problem I guess to be precise this is called Transductive Transfer Learning. In this we learn from the already observed training dataset and then predict the labels of the testing dataset. Even though we do not know the labels of the testing datasets, we can make use of the patterns and additional information present in this data during the learning process. Refer: [Ruder](https://ruder.io/thesis/neural_transfer_learning_for_nlp.pdf#page=64)
119952
1
119961
null
0
89
The typical default for neural networks in natural language processing has been to take words as tokens. OpenAI Codex is based on GPT-3, but also deals with source code. For source code in general, there is no corresponding obvious choice of tokens, because each programming language has different rules for tokenizing. I don't get the impression Codex uses a separate tokenizer for each language. What does it take as tokens?
What does Codex take as tokens?
CC BY-SA 4.0
null
2023-03-04T14:47:45.290
2023-03-04T18:50:48.033
null
null
15728
[ "machine-learning", "neural-network", "language-model", "tokenization" ]
NLP neural networks don't use word tokens any more; for a while now the norm has been to use subwords. Usual approaches to define the subword vocabulary are [byte-pair encoding (BPE)](https://aclanthology.org/P16-1162/), [word pieces](https://huggingface.co/course/chapter6/6?fw=pt) or [unigram tokenization](https://datascience.stackexchange.com/a/88831/14675). [GPT-3 uses BPE tokenization](https://arxiv.org/abs/2005.14165). According to OpenAI's tokenizer tool website: > Codex models use a different set of encodings that handle whitespace more efficiently From this, I understand that they use BPE but with a different vocabulary. This is supported by [this javascript tokenizer](https://github.com/botisan-ai/gpt3-tokenizer) that was created by extracting the BPE vocabulary from OpenAI's own online tokenizer tool.
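As a small illustration, the publicly available GPT-2 BPE (closely related to, but not identical with, the Codex vocabulary) already shows how source code is split into subword tokens rather than language-specific tokens:
```
# Requires the Hugging Face `transformers` package; the GPT-2 vocabulary is used here
# only as a stand-in, since the Codex vocabulary itself is not distributed this way.
from transformers import GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
tokens = tok.tokenize("def fibonacci(n):\n    return n if n < 2 else fibonacci(n-1) + fibonacci(n-2)")
print(tokens)  # a list of subword pieces; whitespace is folded into tokens (a leading space is marked with Ġ)
```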
What is the purpose of the [CLS] token and why is its encoding output important?
CLS stands for classification and it's there to represent sentence-level classification. In short, this tag was introduced to make BERT's pooling scheme work. I suggest reading up on this [blog](https://datasciencetoday.net/index.php/en-us/nlp/211-paper-dissected-bert-pre-training-of-deep-bidirectional-transformers-for-language-understanding-explained), where this is also covered in detail.
119953
1
119957
null
1
55
I'm currently trying to fully understand what a Conv2D layer actually does and I think I got most of it. But there's one thing I don't quite get. When reading about kernels there were multiple mentions that, for example, a `(3,3)` kernel like this one \begin{bmatrix} 0 & -1 & 0 \\\\ -1 & 5 & -1 \\\\ 0 & -1 & 0 \end{bmatrix} is useful for e.g. sharpening an image. But when I want to use a Conv2D layer in TensorFlow I don't have to specify this anywhere. Something like this is sufficient: `tf.keras.layers.Conv2D(4, 3, padding="same", activation="relu")` So what values are used for the kernel by default? Or am I misunderstanding this whole thing?
What values are actually used for the kernel when using a Conv2D layer?
CC BY-SA 4.0
null
2023-03-04T15:04:50.767
2023-03-07T11:15:57.193
2023-03-07T11:15:57.193
143513
143513
[ "machine-learning", "keras", "tensorflow" ]
The values are learned during the training. Initially, they are given random values, then by means of back-propagation and gradient descent, the values of the kernels are adjusted during several iterations to make the final results as close as possible to the expected results.
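A quick way to see those (randomly initialised) kernel values before any training; the input shape below is just an assumed example:
```
import tensorflow as tf

layer = tf.keras.layers.Conv2D(4, 3, padding="same", activation="relu")
layer.build(input_shape=(None, 28, 28, 1))      # e.g. grayscale images

kernel, bias = layer.get_weights()
print(kernel.shape)         # (3, 3, 1, 4): one 3x3 kernel per input channel and per filter
print(kernel[:, :, 0, 0])   # nothing like the hand-crafted sharpening kernel, just random init values
```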
Understanding declared parameters in my Conv2d layer of my convolutional neural network
Both the `name` and `input_shape` come from the [Layer class](https://keras.io/api/layers/base_layer/#layer-class) which Conv2D inherits from. In the doc you provide, they are implicitly in `**kwargs`.
119974
1
119995
null
0
38
I have a long time series, let's say 1000 items. I want to find patterns in it of different lengths, from 10 to 100 elements. To do this, I extract sliding windows of different lengths and calculate the distance matrix between them using DTW, but it works very slowly. Can you please tell me if there is a more efficient method? My code:
```
def generate_sliding_windows(series, window_size):
    sliding_windows = []
    for i in range(0, len(series) - window_size + 1):
        window = tuple(series[i:i + window_size])
        indexes = (i, i + window_size)
        sliding_windows.append(
            SlidingWindow(data=np.array(window),
                          indexes=np.array(indexes))
        )
    return sliding_windows

sliding_windows = []
for window_size in range(10, 100):
    sliding_windows.extend(
        generate_sliding_windows(time_serie, window_size))

n = len(sliding_windows)
distance_matrix = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        dist = fastdtw(sliding_windows[i].data, sliding_windows[j].data,
                       dist=euclidean)[0]
        distance_matrix[i][j] = dist
        distance_matrix[j][i] = dist
```
The slowest piece of code here is the calculation of the distance matrix.
Pattern extraction in a time series with DTW
CC BY-SA 4.0
null
2023-03-05T15:42:05.263
2023-03-06T11:50:40.050
null
null
147618
[ "time-series", "clustering", "dynamic-time-warping" ]
If you haven't done so already, the first thing I suggest doing is checking to see how many sliding windows you are generating, and therefore how many DTW distances you are calculating. Also, do you really want to calculate the distance between all possible subsequences of lengths 10 to 100? What will be the DTW distance between, say, the subsequences `time_series[10:20]` and `time_series[10:21]`, and is this relevant to what you are trying to do? If this is still taking too long after you've cut down the number of DTW distances you are calculating, I'd suggest looking at lower-bounding techniques. Lower-bounding establishes a lower bound on the DTW distance using a calculation that is much faster than DTW. If the lower bound distance doesn't meet the criterion for inclusion as a pattern, then you know there's no point calculating the full DTW distance. Then you only need to calculate the expensive DTW distance when the lower bound distance meets the criterion. A couple of references with more information about lower bounding are: - Rakthanmanon et al.: Searching and mining trillions of time series subsequences under dynamic time warping - Tan et al.: Elastic bands across the path: A new framework and method to lower bound DTW
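For intuition, here is a hedged sketch of the LB_Keogh-style lower bound discussed in those references (equal-length sequences and a Sakoe-Chiba band radius `r` are assumed; this is a simplified illustration, not the exact formulation from the papers):
```
import numpy as np

def lb_keogh(query, candidate, r):
    """Cheap lower bound on banded DTW(query, candidate); equal lengths assumed."""
    total = 0.0
    n = len(query)
    for i, q in enumerate(query):
        lo = max(0, i - r)
        hi = min(n, i + r + 1)
        upper = np.max(candidate[lo:hi])   # upper envelope of the candidate around i
        lower = np.min(candidate[lo:hi])   # lower envelope of the candidate around i
        if q > upper:
            total += (q - upper) ** 2
        elif q < lower:
            total += (q - lower) ** 2
    return np.sqrt(total)

# Usage idea: only call the expensive DTW when lb_keogh(x, y, r) is below your current threshold.
```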
If a time series has random time events, how to detect patterns?
As a first step, to segregate the messages that appear to be a bot, you could first try binning by message size. For example, if messages sent by bots are likely to be around 128 bytes to 140 bytes, assign these to a unique bin. Next, create a time series based on this bin. Try to decompose the time series using an additive or multiplicative method such as Holt Winters. A strong seasonal component would help you identify regular and repetitive messages which are being generated automatically.
120034
1
120070
null
1
136
I'd like to perform clustering analysis on a dataset with 1,300 columns and 500,000 rows. I've seen that clustering algorithms are available in [SciKit-Learn](https://scikit-learn.org/stable/modules/clustering.html). But I'm worried that the algorithms will be inefficient on a dataset of this size. Is SciKit-Learn slow, and, if it is, what's the best (fastest) clustering package available in Python?
What's the fastest clustering package in Python?
CC BY-SA 4.0
null
2023-03-07T23:39:46.700
2023-03-20T13:27:46.467
2023-03-08T10:17:02.750
138508
138508
[ "scikit-learn", "clustering", "unsupervised-learning", "efficiency", "spectral-clustering" ]
Depending on your platform, processor, memory, etc, you may want to check out [https://www.intel.com/content/www/us/en/developer/tools/oneapi/scikit-learn.html](https://www.intel.com/content/www/us/en/developer/tools/oneapi/scikit-learn.html) Some of the clustering algorithms are highly optimized.
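A sketch of how that extension is typically used (assuming the `scikit-learn-intelex` package is installed; the dataset shape below is a smaller stand-in for your data, and `MiniBatchKMeans` from plain scikit-learn is another option that scales well to ~500k rows):
```
from sklearnex import patch_sklearn
patch_sklearn()                      # patch before importing the estimators

import numpy as np
from sklearn.cluster import KMeans   # accelerated drop-in replacement after patching

X = np.random.rand(100_000, 1_300).astype(np.float32)   # stand-in for your data
labels = KMeans(n_clusters=10, n_init=3, random_state=0).fit_predict(X)
```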
Are there any python libraries for sequences clustering?
> Is there a library to analyze sequences with Python? You can take a look [here](http://biopython.org/DIST/docs/tutorial/Tutorial.html). You can also use `TensorFlow` if your task is sequence classification, but based on the comments you have indicated that your task is unsupervised. Actually, `LSTMs` can be used for unsupervised tasks too, depending on what you want. Take a look [here](https://scholar.google.com/scholar?q=lstm+for+clustering&hl=en&as_sdt=0&as_vis=1&oi=scholart&sa=X&ved=0ahUKEwiiuNmBhZ7aAhXRIlAKHV4xBlMQgQMIJDAA). > And is it the right way to use Hidden Markov Models to cluster sequences? Hidden Markov models are those where your current state does not depend on all previous states. If your task has long-term dependencies, you can use `LSTM` networks. If your data does not have long-term dependencies you can use simple `RNNs`.
120056
1
120078
null
0
19
I have a pandas data frame containing around 100000 observations of plant species and their age with additional numerical predictors (climate). I used `tensorflow` and `keras` to build a `sequential` model to predict species `age`. Here is the code for my basic regression network: ``` # Create model structure model = Sequential([ Dense(128,activation='relu', kernel_regularizer=regularizers.L1L2(l1=1e-3,l2=1e-4)), Dense(64,activation='relu', kernel_regularizer=regularizers.L1L2(l1=1e-3,l2=1e-4)), Dense(32,activation='relu', kernel_regularizer=regularizers.L1L2(l1=1e-3,l2=1e-4)), Dense(16,activation='relu', kernel_regularizer=regularizers.L1L2(l1=1e-3,l2=1e-4)), Dense(1,activation='relu') ]) # Compile the models model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mae, metrics=['mae']) # Train random.seed(132) model_trained = model.fit(X_train, y_train, epochs = 200, validation_split = 0.15, verbose = 0) ``` I one hot encode the species data and normalize all variables before training. There is around 110 species, so after one hot encoding them, I have a training dataset with around 900000 rows and 180 columns. For training, I thought that including data for all the species in one data frame, will allow better performance, because the model is learning from more data at once. The model does not show signs of overfitting and the validation, testing and training MAE is all very similar. Seems good! [](https://i.stack.imgur.com/EKDHP.png) However, when I try and make predictions, using different numerical data (future climate), the `age` predictions are all the same for different species. E.g. When I set up my new data frame for prediction, I encode the data frame to be for a specific species i.e. `Species_1` is present, therefore all other species are not. ``` # Species_1 prediction [[23.043112] [23.11334 ] [24.231022] [23.026756] [25.771097]] ``` ``` # Species_2 prediction [[23.043112] [23.11334 ] [24.231022] [23.026756] [25.771097]] ``` The fact that the predictions for different species are identical, makes me think that the network did not use the one hot encoded species information during training, or rather that the species information was not considered important in predicting age and therefore received a lower importance. I am by no means an expert in neural networks, and I am still learning. My goal is to make age predictions for different species under different climate scenarios. How do I force the network to always use the species data during training? Am I wrong to use all species data in one data frame and should rather split the data frames per species separately? Should I rather use other machine learning algorithms (Random Forest)? Is the network too complex, and therefore memorizes the response variable, or the relationship between influential predictors and the response? Any tips would be appreciated.
Force network to weigh specific variables during learning
CC BY-SA 4.0
null
2023-03-08T19:08:21.723
2023-03-09T17:09:58.137
2023-03-08T19:29:26.810
101455
101455
[ "python", "neural-network", "tensorflow", "pandas", "prediction" ]
After gaining better understanding of the problem at hand, here is my solution to deep encode my species categories so that the network can learn from them. ``` # Perform deep encoding on species names # Create test data frame containing y and Species test = pd.concat([y, X['Species']], axis=1) # Convert species names to ordinal numbers test['Species_ord'] = pd.Categorical(test['Species'], categories=test['Species'].unique(), ordered=True).codes # Define embedding parameters m = len(test['Species'].unique()) embedding_size = min(50,m+1/2) # Create model structure model = Sequential([ Embedding(input_dim = m, output_dim = embedding_size, input_length = 1, name="embedding"), Flatten(), Dense(32,activation="relu"), Dense(1,activation="relu") ]) # Compile the models model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.keras.losses.mae, metrics=['mae']) # Train random.seed(132) model.fit(x = test['Species_ord'], y = test['Age'], epochs = 50, batch_size = 16, verbose = 1) # Grab the new embedded variables for species Species_embedded = model.get_layer('embedding').get_weights() Species_embedded_df = pd.DataFrame(Species_embedded[0]) # Change name of embeddings Species_embedded_df = Species_embedded_df.rename(columns=lambda x: f"Species_encode_{x}") # Add species names to embeddings as ID Species_embedded_df["Species"] = test['Species'].unique() # Save to disk for later use Species_embedded_df.to_csv(dir + "3-Resultats/Tables/Tree_Species_Embeddings.csv", index=False) ``` ```
Neural networks - adjusting weights
I'm not an expert on the backpropagation algorithm, however I can explain something. Every neural network can update its weights. It may do this in different ways, but it can. This is called backpropagation, regardless of the network architecture. A feed forward network is a regular network, as seen in your picture. A value is received by a neuron, then passed on to the next one. [](https://i.stack.imgur.com/nDUJ4.jpg) A recurrent neural network is almost the same as a FFN, the difference being that the RNN has some connections pointing 'backwards'. E.g. a neuron is connected to a neuron that has already done its 'job' during backpropagation. Because of this, the activations of the previous output have an effect on the new output. [](https://i.stack.imgur.com/4x8Bb.png) On question #2 Interesting question. This has to do with weight initialization. Yes, you're right, each neuron in the hidden layer accepts the same connections. However, during the initialization process, they have received a random weight. Depending on your NN library, the neurons might also have been initialized with a random bias. So even though the same rule is applied, each neuron has different outcomes, as all its connections have different weights than the other neurons' weights. On your comment: just because all the neurons happen to have the same backpropagation function doesn't mean they will end up with the same weights. As they are initialized with random weights, each neuron's `error` is different. Thus they have a different gradient, and will get new weights. You also have to keep in mind that for a certain output to be reached, there are multiple solutions (due to non-linearity). So due to initialized random weights, one neuron might be close to a certain solution while another neuron is closer to the other. Additionally, as was stated in the comments, a network works as a whole. The output neuron is also non-linear, and for most test cases, the output should be non-linear and the output neuron most likely requires that the hidden neurons activate at different input values.
120059
1
120060
null
0
32
I'm working on a project that aims to classify JIRA issues into their relevant owner group. An issue has the following text features: - Summary - Description - Comments all of which are text based. During prediction time, however, no comments are available, as the goal of the tool is to predict the owner without the users having to assign the issue back and forth between them. I wonder if it's ok to use the comments only for the training dataset but completely ignore them on the test dataset. Is that considered data leakage? I'm having a bad feeling it may be so. I asked a very similar question a while back and I was told I could benefit from the comments data during training as it will enrich the vocabulary. Needless to say, I'm getting far better results when using them (in both datasets). I was thinking maybe it is ok to split the dataset into train and test and to drop the `comments` column from the train dataset. Is it ok to use it or am I just cheating myself? ADDING SOME CLARIFICATION: The 3rd feature (AKA comments) as well as the other ones are eventually united into 1 column called `text`, so it's pretty much having this "extra" text during training VS not having it during prediction time.
using a feature that is only available during training
CC BY-SA 4.0
null
2023-03-08T22:26:33.430
2023-03-08T22:54:12.237
2023-03-08T22:54:12.237
109113
109113
[ "scikit-learn", "dataset", "text-classification", "data-leakage" ]
Using the comments doesn't really make sense if they will not be available for prediction. If you add the feature you will end up having to impute the comment features in some way. How do you plan on doing this? Also, when evaluating your model, you should discard the comments and impute the missing values to get realistic evaluation metrics. I am pretty sure adding them will add no predictive power to your model; the only thing that could happen is overfitting.
relying on a feature during training that won't (necessarily) be available during prediction
3 points: - If the feature is certainly, or most of the time, not available during prediction, then no, you can't use it. - If it is sometimes available and sometimes not, you must include no-comment bugs in your training as well and choose a default value which means no-comment (e.g. a 'no-comment' string! or None). - In case they are available only for training, you can still benefit from them during EDA. Extracting keywords, topics, etc. from them will help you understand the situation of different labels and possibly help you validate your labels, score them and/or understand the relation between other features (by clustering-like analysis). - If you want to use them, be careful how you do it. You have a bunch of categorical and/or numerical features, and if you want to put a text feature next to them you need to think about feature representation. For example, if you want to use TF-IDF, you suddenly introduce potentially thousands of features, which may kill the information of your main features. So try to make it as sparse as possible, like extracting keywords or topics from those texts and using those as categories. If you use any embedding to model them, check if you need to normalize your feature set, as the scale of embedding values and other numerical features might be different. Hope it helped. Good Luck!
120073
1
120131
null
2
200
I have created a dataset with my geolocations from the last three months. The data set contains longitude, latitude, and timestamp, with a frequency of every 5 minutes. Based on this data, I want to predict my (geo)location for up to two weeks into the future. I would like to end up with several predictions with a degree of certainty. I see two options: - Translate it into a classification problem with discrete target locations such as home, workplace, gym. Furthermore, I would have to create features that describe when I was at some place, basically translating the temporal dimension into features. These features could include hour of day, day of the week, etc. I'm not sure if this method is able to capture a temporal trend, though. - Use forecasting models to forecast/predict the actual latitude and longitude coordinates. Then translate the predicted latitude/longitude into a location such as 'home' or 'work'. The biggest problem I see here is that this will give me only one location, instead of a list of locations with respective certainties. I am looking for more/better suggestions how I might do this. Thanks in advance!
How to forecast a timeseries with geolocation data?
CC BY-SA 4.0
null
2023-03-09T13:03:23.720
2023-03-16T09:31:40.600
2023-03-12T09:29:55.423
99260
99260
[ "machine-learning", "data-mining", "data-science-model", "forecasting", "geospatial" ]
I see multiple possibilities here: #### In General Some general remarks first: - When designing your model, you should take reoccurring patterns into account: There will probably be a 24h pattern (for example: being at work has every 24h a similar probability, while 12h after being at work, the probability of being at work will be quite low). There might also be a 7-day pattern (e.g. every Wednesday evening one might go for sports). This can be done by extracting different features from the timestamp (the hour, the weekday) or choosing a suitable kernel / distance function. - There are variants: you can either predict all locations you might visit in the next 2 weeks (independent of when) or you might predict for each time (e.g. each day / each hour or even every 5 minutes) where you might be. With many approaches, both variants might be possible. #### Finite set of locations This is basically your option 1. Just some more ideas: - You might consider applying some clustering on your recorded data to find reoccurring locations and use these as target locations. - You do not need to transform the temporal dimension into features. There are techniques that can handle time series as input (e.g. recurrent neural networks like LSTMs or transformer networks). #### Rasterization You could put a raster over your area of interest (e.g. your city). It is up to you to choose an appropriate cell size. Now you can predict for each cell the probability that you will visit it. This will create kind of a heat-map. Choosing the raster approach allows you to handle your data as a series of images, which allows for techniques such as CNNs. #### Gaussian Processes / Kriging Gaussian Processes (a.k.a. Kriging in the field of geostatistics) allow you to learn a probability distribution over the spatiotemporal space (which seems to be what you are looking for). Unfortunately, they come with some disadvantages: - Gaussian Processes are better at interpolation than at extrapolation. You might get some strong uncertainties. - Gaussian Processes are computationally expensive. You probably have too much data and might have to subsample or compress it. - Originally, they are used for unbounded regression with Gaussian distributions. You are looking more for classification (will you be there). This can also be done, but requires some extra steps. Note: These are just some approaches and directions to look into.
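A minimal sketch of the first two ideas (temporal features and clustering of recorded locations); the file name, column names and the DBSCAN parameters are assumptions about your data, not a definitive recipe:

```python
import pandas as pd
from sklearn.cluster import DBSCAN

# hypothetical file with "timestamp", "lat", "lon" columns, one row every 5 minutes
df = pd.read_csv("locations.csv", parse_dates=["timestamp"])

# temporal features that capture the 24h and 7-day patterns
df["hour"] = df["timestamp"].dt.hour
df["weekday"] = df["timestamp"].dt.dayofweek

# cluster recorded coordinates into reoccurring places (eps in degrees, a rough choice)
coords = df[["lat", "lon"]].to_numpy()
df["place"] = DBSCAN(eps=0.001, min_samples=10).fit_predict(coords)
```

The resulting `place` labels could then serve as the discrete targets of the classification variant described above.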
Time series forecasting using multiple time series as training data
First cluster the events that have the most similarities. Then use a comparable (or more than one of then ) to forecast the sales of the new events that you do not have historical data. Use all other information you have as regressor. Here is a code to do the forecast in R. You will be able to combine different forecasting models with this code: ``` choose_model<-function(x,h,reg,new_reg,end_train,start_test){ library(forecast) library(forecastHybrid) library(tidyverse) #train data x_train <- window(x, end = end_train ) x_test <- window(x, start = start_test) #train and test for regressors reg_train <- window(reg, end = end_train ) reg_test <- window(reg, start = start_test) h1=length(x_test) #model1 stlf(x_train , method="arima",s.window= nrow(x_train),xreg = reg_train, newxreg = reg_test, h=h1)-> fc_stlf_xreg #model2 auto.arima(x_train, stepwise = FALSE, approximation = FALSE,xreg=reg_train)%>%forecast(h=h1,xreg=reg_test) -> fc_arima_xreg #model3 set.seed(12345)#for nnetar model nnetar(x_train, MaxNWts=nrow(x), xreg=reg_train)%>%forecast(h=h1, xreg=reg_test) -> fc_nnetar_xreg #model4 stlf(x_train , method= "ets",s.window= 12, h=h1)-> fc_stlf_ets #Combination mod1 <- lm(x_test ~ 0 + fc_stlf_xreg$mean + fc_arima_xreg$mean + fc_nnetar_xreg$mean + fc_stlf_ets$mean) mod2 <- lm(x_test/I(sum(coef(mod1))) ~ 0 + fc_stlf_xreg$mean + fc_arima_xreg$mean + fc_nnetar_xreg$mean + fc_stlf_ets$mean) #model1 stlf(x, method="arima",s.window= 12,xreg=reg, newxreg=new_reg, h=h)-> fc_stlf #model2 auto.arima(x, stepwise = FALSE, approximation = FALSE,xreg=reg)%>%forecast(h=h,xreg=new_reg) -> fc_arima #model3 set.seed(12345)#for nnetar model nnetar(x, MaxNWts=nrow(x), xreg=reg)%>%forecast(h=h, xreg=new_reg) -> fc_nnetar #model4 stlf(x , method= "ets",s.window= 12, h=h)-> fc_stlf_e #Combination Combi <- (mod2$coefficients[[1]]*fc_stlf$mean + mod2$coefficients[[2]]*fc_arima$mean + mod2$coefficients[[3]]*fc_nnetar$mean + mod2$coefficients[[4]]*fc_stlf_e$mean) return(Combi) } ```
120077
1
120116
null
1
30
I'm quite new to RL and I'm currently following David Silver's course on RL. But at the same time, I also want to get hands-on, so I followed this tutorial from the Gymnasium documentation: [https://gymnasium.farama.org/tutorials/training_agents/reinforce_invpend_gym_v26/](https://gymnasium.farama.org/tutorials/training_agents/reinforce_invpend_gym_v26/) I understand the general concept and idea, but I'm curious about why we should model the policy as a distribution (a Normal distribution in this case) and then take a sample from that distribution as an action to be applied to the RL environment. Why don't we just use the mean value as an action instead of taking a sample from the distribution as an action? Here's the piece of code that I'm talking about: ``` def sample_action(self, state: np.ndarray) -> float: """Returns an action, conditioned on the policy and observation. Args: state: Observation from the environment Returns: action: Action to be performed """ state = torch.tensor(np.array([state])) action_means, action_stddevs = self.net(state) # create a normal distribution from the predicted # mean and standard deviation and sample an action distrib = Normal(action_means[0] + self.eps, action_stddevs[0] + self.eps) action = distrib.sample() prob = distrib.log_prob(action) action = action.numpy() self.probs.append(prob) return action ``` As an experiment, I have tried to change the action from `action = distrib.sample()` to `action = action_means[0]`. But it turns out that the model isn't learning. Does anyone have an idea?
Why use sampling instead of the mean value for policy in Reinforcement Learning?
CC BY-SA 4.0
null
2023-03-09T16:41:47.003
2023-03-11T13:24:53.663
2023-03-09T16:51:49.077
147761
147761
[ "machine-learning", "reinforcement-learning", "openai-gym" ]
If the mean is used, the value is approximately the same over time. Thus the actions will be very similar over time, providing less opportunity for the model to learn what other actions could be useful (aka, the explore in exploit-explore concept). If a sample is used, the value is random. The randomness is weighted by the value of previous actions. The current action exploits previously useful actions and explores also.
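A small standalone illustration of the exploration point above (not tied to the tutorial's network; the mean and standard deviation values are made up):

```python
import torch
from torch.distributions import Normal

mean, std = torch.tensor([0.5]), torch.tensor([0.2])
distrib = Normal(mean, std)

deterministic_action = mean                        # always 0.5: every rollout behaves the same
stochastic_actions = distrib.sample((5,))          # five different actions scattered around 0.5
log_probs = distrib.log_prob(stochastic_actions)   # log-probabilities REINFORCE uses for its gradient
```

With the deterministic choice there is no variation to learn from, so the policy-gradient update has nothing to contrast good actions against, which is consistent with the observation that the model stops learning.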
Confusion in Policy Iteration and Value iteration in Reinforcement learning in Dynamic Programming
In policy iteration, you define a starting policy and iterate towards the best one, by estimating the state value associated with the policy, and making changes to action choices. So the policy is explicitly stored and tracked on each major step. After each iteration of the policy, you re-calculate the value function for that policy to within a certain precision. That means you also work with value functions that measure actual policies. If you halted the iteration just after the value estimate, you would have a non-optimal policy and the value function for that policy. In value iteration, you implicitly solve for the state values under an ideal policy. There is no need to define an actual policy during the iterations, you can derive it at the end from the values that you calculate. You could if you wish, after any iteration, use the state values to determine what "current" policy is predicted. The values will likely not approximate the value function for that predicted policy, although towards the end they will probably be close.
120084
1
120086
null
0
49
My course has the following statement. ``` a = np.array([[3, 2, 5], [1, 4, 6]]) ``` > We select all columns of a matrix where the entry on row 1 is greater than the entry in row 0 ``` a[:, a[1, :] > a[0, :]] ``` I am confused about the colon being in the first position to mean "select all columns" and the colon also being in the second position for a[1, :] Is there a better way to say it for an SQL programmer?
How to read colon in python?
CC BY-SA 4.0
null
2023-03-09T21:07:42.653
2023-03-10T02:04:50.910
2023-03-09T21:12:00.557
117614
117614
[ "python", "self-study" ]
`x = a[1, :] > a[0, :]` takes the value `array([False, True, True])`, as it finds the columns for which the values in the second row are larger than the values in the first row. Since `x` holds boolean values, `a[:, x]` will extract, across all rows (because of the colon in the first dimension of the array), all the columns for which `x` takes the value `True`, in this case the last two columns.
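To make the two steps explicit (a small self-contained example; the SQL line in the comment is only a loose analogy):

```python
import numpy as np

a = np.array([[3, 2, 5], [1, 4, 6]])

mask = a[1, :] > a[0, :]   # compare row 1 with row 0, column by column -> [False, True, True]
result = a[:, mask]        # keep all rows (the first ":"), but only the columns where mask is True
# roughly: SELECT the columns WHERE row1_value > row0_value
print(result)              # [[2 5]
                           #  [4 6]]
```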
Problem reading python code
See sections [3.2 and 3.3 of the geojson standard](https://www.rfc-editor.org/rfc/rfc7946#section-3.2). It looks like `value_geojson` is a `FeatureCollection` object, which have a single member `"features"` which is a list of `Feature` objects. So ``` value_geojson["features"][0] ``` is now just the first `Feature` object. Those have a few members, including `"properties"`, which is an arbitrary JSON object. At that point we're outside of the geojson standard, so whatever `"title"` means is more context-specific.
120087
1
120095
null
1
44
I am relatively new to the data science sector. I started tampering with random forest models and some strange situations arose. To be precise, I have data from sensors that record pollutants and a column that acts as the labels, populated with data that shows whether a person has developed allergies or not, depicted with 0 or 1. The amount of data is 1300 people. So I have run some random forest classifier models and started to remove or add some columns. What I don't understand is that: 1. In a model with all pollutants as features and the column SYMPT as labels, when I remove a random column (e.g. styrene) the order of the feature importance changes significantly, 2. In a model with all pollutants as features and the column SYMPT as labels, when I remove a large number of columns or even leave just 1 column of the pollutants, the accuracy remains the same (about 71%), and lastly 3. when I added the column with AA (increasing serial number) data, which was originally excluded from the model along with the SBR column, the accuracy increased. What can you understand from these results? [](https://i.stack.imgur.com/UY8PO.png)
Random Forest on high correlated data
CC BY-SA 4.0
null
2023-03-09T23:47:18.103
2023-03-10T13:47:49.163
2023-03-10T11:42:12.620
147776
147776
[ "machine-learning", "random-forest" ]
Welcome to the data science sector. Your three points seem to relate to different aspects. I will try to address all three: #### 1. Feature Importance To explain the effect, I would go the other way and start with a model trained on few features that are mainly uncorrelated (just for the sake of explanation). One of our features will get the highest feature importance (let's call it A). To simplify things, this roughly means that the feature is used in a lot of important nodes of a lot of trees of our random forest. Now assume we add another feature (B) that is very strongly correlated with feature A. If we now train another random forest, it hardly makes a difference whether feature A or feature B is used in a node. So approximately half of the time A is used and half of the time B is used. This means A only appears in half of the nodes compared to the first model, which would give A half of the feature importance it got before (again, we simplify things. Depending on the definition of feature importance, the effect might differ, but the general concept stays the same) Following the above explanation, one can see that removing some features will affect the other features' importances differently, depending on how strongly they were correlated with the removed features. This might already explain your observation. Conclusion: Feature importance measures only the importance for one concrete model. Not the value of the information behind the feature. #### 2. Accuracy It looks like you have some sort of imbalanced data. I suggest looking at the frequency of occurrences of values in SYMPT. I assume there are just two values 0 and 1 and 71% of all values are labeled 1. In that case, a model that always says 1 would get an accuracy of 71%. So in that case, even if your models did not learn much, all of them would get 71% accuracy. In such a case of imbalanced data, it is better to use other metrics to evaluate the model, such as roc-auc, precision and recall (look at both), true- and false-positive rate. #### 3. Adding Column AA It looks like there is a trend in your data. If the ID column carries valuable information, then the data might be sorted in some way. If you are not sure how the data is sorted, it would be best to ignore the ID and to shuffle the data. Otherwise, you need to analyze the data to understand the ordering. Note: In scikit-learn you need to explicitly set `shuffle=True` for cross validation (see for example [https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html)). I hope this helps you a bit to understand what is going on with your models.
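A minimal sketch of the shuffling and metric advice above; the generated dataset is only a stand-in for the pollutant data, with roughly 71% of the labels in one class to mimic the imbalance:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, cross_val_score

# stand-in data: 1300 samples, ~71% in the majority class
X, y = make_classification(n_samples=1300, weights=[0.71], random_state=0)

cv = KFold(n_splits=5, shuffle=True, random_state=0)   # shuffle=True breaks any ordering hidden in an ID column
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=cv, scoring="roc_auc")
print(scores.mean())   # roc-auc is more informative than accuracy for imbalanced labels
```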
Random Forest Prediction
Seems like it predicts the first class. Sklearns Random Forest implementation generates probabilities for each class by averaging the probability predicted by each estimator into an array of shape (n_samples, n_classes), and then uses `np.take(np.argmax())` to select the highest probability, something akin to this: ``` # Pretend "a" is our averaged predictions for the forest. So the first sample is predicting 78% probability class 0, 22% class 1. The second has the probabilities reversed and the third is 50/50 split. a = np.array([[0.78, 0.22], [0.22, 0.78], [0.5, 0.5]]) np.argmax(a, axis=1) ``` The output is `array([0, 1, 0], dtype=int64)`. These are the indices of the highest value in each sample of the array, and for the sample with an even split you can see it's picking class 0. edit: If you want to see it yourself, the relevant piece of code is line 540 in forest.py of sklearn: [https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/forest.py](https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/ensemble/forest.py)
120089
1
120090
null
0
2288
The error message shows ``` RuntimeError: mat1 and mat2 shapes cannot be multiplied (32x32768 and 512x256) ``` I have built the following model: ``` def classifier_block(input, output, kernel_size, stride, last_layer=False): if not last_layer: x = nn.Sequential( nn.Conv2d(input, output, kernel_size, stride, padding=3), nn.BatchNorm2d(output), nn.LeakyReLU(0.2, inplace=True) ) else: x = nn.Sequential( nn.Conv2d(input, output, kernel_size, stride), nn.MaxPool2d(kernel_size=3, stride=2, padding=1) ) return x class Classifier(nn.Module): def __init__(self, input_dim, output): super(Classifier, self).__init__() self.classifier = nn.Sequential( classifier_block(input_dim, 64, 7, 2), classifier_block(64, 64, 3, 2), classifier_block(64, 128, 3, 2), classifier_block(128, 256, 3, 2), classifier_block(256, 512, 3, 2, True) ) print('CLF: ',self.classifier) self.linear = nn.Sequential( nn.Linear(512, 256), nn.ReLU(inplace=True), nn.Linear(256, 128), nn.ReLU(inplace=True), nn.Linear(128, 64), nn.ReLU(inplace=True), nn.Linear(64, output) ) print('Linear: ', self.linear) def forward(self, image): print('IMG: ', image.shape) x = self.classifier(image) print('X: ', x.shape) return self.linear(x.view(len(x), -1)) ``` The input images are of size `512x512`. Here is my training block: ``` loss_train = [] loss_val = [] for epoch in range(epochs): print('Epoch: {}/{}'.format(epoch, epochs)) total_train = 0 correct_train = 0 cumloss_train = 0 classifier.train() for batch, (x, y) in enumerate(train_loader): x = x.to(device) print(x.shape) print(y.shape) output = classifier(x) loss = criterion(output, y.to(device)) optimizer.zero_grad() loss.backward() optimizer.step() print('Loss: {}'.format(loss)) ``` Any advice would be much appreciated.
Pytorch mat1 and mat2 shapes cannot be multiplied
CC BY-SA 4.0
null
2023-03-10T03:08:47.403
2023-03-10T10:38:24.757
null
null
142618
[ "neural-network", "cnn", "convolutional-neural-network", "image-classification", "pytorch" ]
In `forward`, the image first passes through some convolutional layers (i.e. `self.classifier`), then it is flattened, then passed through some linear layers (i.e. `self.linear`). The problem is that the dimensionality of the flattened tensor does not match the expected input for `self.linear`. The last dimension of the flattened tensor is expected to be 512 (see the `in_features` parameter of [nn.Linear](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html)), but it actually is 32768, according to the error you get. You may replace `nn.Linear(512, 256)` with `nn.Linear(32768, 256)` to fix the mismatch.
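If you prefer not to hard-code the flattened size at all, a common pattern (sketched here, adapted to the names used in the question) is to push a dummy tensor through the convolutional part once and read off the resulting size:

```python
import torch
from torch import nn

def flattened_size(conv: nn.Module, in_channels: int, height: int, width: int) -> int:
    """Run a dummy tensor through the conv stack and return the flattened feature size."""
    with torch.no_grad():
        dummy = torch.zeros(1, in_channels, height, width)
        return conv(dummy).view(1, -1).shape[1]

# inside Classifier.__init__, after building self.classifier:
# n_flat = flattened_size(self.classifier, input_dim, 512, 512)
# self.linear = nn.Sequential(nn.Linear(n_flat, 256), ...)
```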
2D Pytorch tensor doesn't have independent random values
You don't use the parameters "loc" and "scale" the right way. They are not supposed to be tensors. Below is the right syntax: ``` dist = torch.distributions.StudentT(10, 0, 1) dist = torch.distributions.StudentT(10) # 0 and 1 are the default parameters ``` Then you can sample multiple values like this: ``` t = dist.rsample(torch.Size([n,m])) t = dist.rsample(torch.Size([n])) ```
120093
1
120097
null
0
70
I am trying to build a chess AI with a neural network, to learn about how neural networks work and refresh my programming experience. I have some experience with classifiers but not yet with neural networks, so feel free to correct wrong assumptions and mistakes I make. I plan to feed the neural net the current board position and all possible moves to choose from, and, after the game, the game result for the final position. While coding the chess game I came across 2 notations: a [FEN](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation) (board position) and a [PGN](https://en.wikipedia.org/wiki/Portable_Game_Notation) (move sequence that led to the current board position). This got me thinking. I initially chose FEN as the data format to feed the net because it looked easier and lighter to process. I am now wondering, however, whether there is added value for the neural net in knowing the sequence in which a position came to be (or does it know this due to the nature of reinforcement learning?). Take a checkmate, for example. We humans can look backward and realize that the previous position was a 'mate in one' move. But will the neural net be able to relate the two positions (and be able to 'look ahead' because that move led to success when it previously encountered it, so it is reinforced in its structure)? I can imagine that it doesn't matter because the net encountered them in sequence, so it will build up the association to that position (if this makes sense), but I am a bit unsure. So to summarize: do you think I should use a FEN to feed in the current position, or a PGN, or maybe even both?
Which chess notation to feed neural network: FEN or PGN?
CC BY-SA 4.0
null
2023-03-10T10:39:42.720
2023-03-10T15:16:26.627
2023-03-10T14:16:23.950
43508
43508
[ "neural-network", "feature-engineering" ]
Since PGN is longer and possibly harder to process, you should ask whether it carries more information than FEN. In this case, the question is: does it matter in which way you reached the current board position? Does it influence the next move? There are only two situations that come to my mind: - Castling (neither king nor rook may have moved previously) - Threefold repetition At least the first one could easily be covered by an additional flag. In general, existing notations might not be best suited for neural networks. If you just want to cover the current board position, I would go for a fixed-size input, e.g. store for each square what figure is placed on it. You might then, for example, encode the figure with one-hot encoding. That would give you 8x8x(#figures) as input. An approach like this will make life easier for you. If you aim to store history, then PGN (or something similar but in a numeric format) would be a suitable input. But then you might have to deal with input of dynamic length.
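A minimal sketch of that fixed-size encoding, built directly from the piece-placement field of a FEN string (8x8 board, 12 one-hot planes: 6 piece types per colour; extra flags such as castling rights could be appended separately):

```python
import numpy as np

PIECES = "PNBRQKpnbrqk"  # white pieces, then black pieces

def fen_to_tensor(fen: str) -> np.ndarray:
    board = np.zeros((8, 8, len(PIECES)), dtype=np.float32)
    placement = fen.split()[0]                  # first FEN field: the piece placement
    for row, rank in enumerate(placement.split("/")):
        col = 0
        for ch in rank:
            if ch.isdigit():
                col += int(ch)                  # digits encode runs of empty squares
            else:
                board[row, col, PIECES.index(ch)] = 1.0
                col += 1
    return board

start = fen_to_tensor("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1")
print(start.shape)  # (8, 8, 12)
```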
Layer notation for feed forward neural networks
Well, the image you sent is not nicely denominated. The first layer in the image is $x_0$ which is the input consisting of d dimensions, it is actually the first sample of the training set. Here its dimensions are $x_{01}, x_{02}, x_{03}, x_{04}$ (the green nodes in the left, hence, d equals $4$). Then the next layer which is called $x_1$ is the first hidden layer and subsequently $x_2$ is the second hidden layer and $x_3$ is the output of this `feed-forward network`. By this definition, $x_0$ is the input with d dimensions $x_{01}, x_{02}, x_{03}, x_{04}$ and for calculating each node in the proceding hidden layer which is called $x_1$ here, we should do: consider the most up node in hidden layer $x_1$ as the node we want to figure its value. We call it $x_{11}$, first we compute a linear computation of weights and inputs and then we apply some activation function $\sigma$ to it: $$x_{11} = \sigma(x_{01} \cdot w_{11} + x_{02} \cdot w_{12} + x_{03} \cdot w_{13} + x_{04} \cdot w_{14})$$ - also, an offset might be added to this expression. - consider that hidden layers can be of any size. > Each one of the $d$ features in every example in the training set. In this case, $n_0 = d$ and $x_0$ is $(d \times 1)$. $n_0 = d$ and $x_0$ is $(d \times 1)$ is right and in the first layer, yes each node is depicting a single one of d features of the input. but not for the hidden layers. > Each example of the training set, which is $d-$dimensional. In this case, $n_0$ is the number of examples and $x_0$ is $(d \times n_0)$. No as I mentioned, this is an architecture that depicts the process for a single training set. Hence, each node is not a sample of the training set. You set $n_0$ for the number of nodes in the first layer which is input. so $n_0$ here equals d and $x_0$ which is the input equals $(x_{00}, x_{01}, x_{02}, ...x_{0d})$, $0$ showing that this is the first sample of training set. In the `backpropagation` process, we have the same architecture. Then by calculating the gradient of each node, we update each weight. this process is done so many times to find the most optimal weights. there are various approaches for this weight updating thing like batch, etc updates.
120104
1
120166
null
0
41
I am able to get convolved values from an RGB image, let's say for each channel. So I have the red channel with values: `-100,8,96,1056,-632,2,3....` Now what I do is normalize (clamp) these values into the range `0-255`, because that is the range of RGB values, basically with the code `Math.Max(minValue, Math.Min(maxValue, currentValue))`. But I don't understand how ReLU works in a CNN. From my point of view I am already doing it in my normalization, right? Because the ReLU layer goes from 0 to x, where the max value of x is 255. So my question is: where should ReLU be applied in an RGB image after convolution? (After convolution and normalization the values are already in the range 0-255.)
ReLu layer in CNN (RGB Image)
CC BY-SA 4.0
null
2023-03-10T18:49:57.133
2023-03-13T20:58:37.503
null
null
147811
[ "machine-learning", "deep-learning", "cnn", "image-preprocessing" ]
You should put ReLU as the activation of the convolution layers. ReLU is not applied to the RGB values, but to the matrix obtained by convolving the image, also called a feature map. This is not the same as your clamping to 0-255: ReLU simply sets negative convolution outputs to zero and leaves positive values unchanged, with no upper bound.
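A tiny PyTorch sketch of where ReLU sits (the layer sizes are arbitrary): it follows the convolution and acts on the feature maps, which can contain negative values, not on the raw 0-255 pixels.

```python
import torch
from torch import nn

layer = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # 3 RGB channels in, 16 feature maps out
    nn.ReLU(),                                   # zeroes out the negative convolution outputs
)

image = torch.rand(1, 3, 32, 32)   # an RGB image scaled to [0, 1]
features = layer(image)            # shape: (1, 16, 32, 32)
```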
The mix of leaky Relu at the first layers of CNN along with conventional Relu for object detection
> how we end up with a model which uses a leak at the first layers and then a conventional RELU at the end? What matters is to add a non-linearity to the outputs of a neuron. Consequently, any function that adds non-linearity will make the network work, thanks to the derivative through which the error term can be backpropagated. If you use ReLU or leaky ReLU, the update terms will change, but your network will work fine. The reason the leaky version is used in the mentioned papers is to avoid the dying-ReLU problem, which can happen a lot in regression problems. About your second question, consider that it usually suffices to use leaky ReLU in the first layers; thanks to them, the chance of deeper neurons getting stuck at zero is not very high, as the results of those papers show. You can use the leaky version all over the network, but in my experience plain ReLU is very fast to train!
120105
1
120107
null
2
67
When dealing with classification for multiple classes present in the same sample, can the output layer have the form of one-hot encoding, but instead of only one hot, have multiple? That is, in the case of only a single class being present in the sample, the encoding could be [0,1,0]. For multiple classes in the same sample, can it be [0,1,1]? And if yes, what loss function should be used?
Multiple classes present in one-hot encoding
CC BY-SA 4.0
null
2023-03-10T19:29:42.053
2023-03-13T06:20:05.677
2023-03-13T06:20:05.677
38353
147813
[ "neural-network", "multiclass-classification", "multilabel-classification", "one-hot-encoding" ]
Yes, it is possible to do it exactly as you describe it. This is called multilabel-classification. If doing so, you would treat each element of the output as an independent prediction of a binary classification problem, i.e. the first element would predict, if the sample belongs to class 1, the second element would predict if the sample belongs to class 2 and so on. As a loss function, you could take the sum of the cross-entropy loss per output element. If you have $m$ classes, $y=(y_1,\ldots, y_m)$ would be the binary target and $\hat{y}=(\hat{y}_1,\ldots,\hat{y}_m)$ would be your prediction, then the loss could be $$\mathcal{L}=\sum_{j=1}^m -y_j\log \hat{y}_j - (1-y_j)\log (1-\hat{y}_j)$$
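For completeness, a hedged sketch of this loss in PyTorch: `BCEWithLogitsLoss` applies a sigmoid to each output element and sums (or averages) the per-class binary cross-entropy terms, which matches the formula above.

```python
import torch
from torch import nn

logits = torch.tensor([[2.0, -1.0, 0.5]])   # raw network outputs for one sample, m = 3 classes
target = torch.tensor([[0.0, 1.0, 1.0]])    # several classes present in the same sample

loss = nn.BCEWithLogitsLoss(reduction="sum")(logits, target)
print(loss)
```

In Keras, the equivalent setup would be a sigmoid output layer combined with `binary_crossentropy`.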
One-hot encoding
Yes, you turn it into three different variables; however, this is not called multivariate regression, since that term indicates multiple output variables, not inputs. (Thanks to Simon for correcting me)
120157
1
120158
null
0
20
Given an array `x = [1, 2, 3, ...] `, I want to split each sample `x[i]` into 2 neurons. My idea was to initialize a variable `split = np.random.rand()` with $0 < split < 1$ and then set the neurons `x1 = split * x[i], x2 = (1 - split) * x[i]`. Is my idea correct?
How to split a single feature vector into a layer of 2 neurons
CC BY-SA 4.0
null
2023-03-13T16:54:02.517
2023-03-13T17:09:23.613
null
null
147868
[ "machine-learning", "python", "neural-network" ]
Your idea sounds good to me. By initializing the split variable with a random number between 0 and 1, and then splitting each sample into two neurons using that split value, you'll be randomly allocating the feature of each sample between two neurons. This can help your network to learn more complex and diverse representations of the data. However, keep in mind that splitting the samples in this way might not always be the best approach depending on your specific problem and the type of data you're working with. Overall, I think it's a good idea and worth trying out to see how it works for your particular use case.
Combining 2 Neural Networks
The paper explicitly states the following lines: > Our work is one more point on a significant trendline started with Xception and MobileNets, that indicates that in any convolutional model, whether for 1D or 2D data, it is possible to replace convolutions with depthwise separable convolutions and obtain a model that is simultaneously cheaper to run, smaller, and performs a few percentage points better. A Convolutional neural network forms a chain of differentiable feature learning modules, structured as a discrete set of units, each trained to learn a particular feature. If trained and reused, these could be extended to be used for cosine similarity findings. Depthwise separable convolutions define individual feature paths and so it would be easy for this case where you have concatenated the outputs.
120163
1
120325
null
0
36
I want to create a 3-layer neural network from scratch to perform linear regression. The first and the second layer have 2 neurons, and the last layer has one neuron. Feature vector x is divided into $x_{1}, x_{2}$ where $x_{1} = ax, x_2 = (1-a)x, 0 < a < 1$ Hence, there are 6 weights: $w_{111}, w_{121}, w_{211}, w_{221}, w_{112}, w_{212}$. Note that I'm not including biases since they aren't the object of the question, and the activation function is linear, so I'm not including that as well. Reading [the answer to this question](https://datascience.stackexchange.com/questions/117281/does-derivative-of-an-activation-function-refer-to-process-of-back-propogation-i), I'm computing the gradient of each weight in this way: ($w_{111}$ and $w_{121}$ are taken as examples) $\frac{∂C}{∂w_{111}} = \frac{1}{m} \sum_{i=1}^m \frac{∂L_{i}}{∂w_{111}}$, $\frac{∂L_{i}}{∂w_{111}} = (y-y')\frac{∂y'}{∂w_{111}}$, $\frac{∂y'}{∂w_{111}} = \frac{∂z_{a1}}{∂w_{111}}$, $\frac{∂z_{a1}}{∂w_{111}} = \frac{∂}{∂w_{111}}(w_{111}x_{1} + w_{121}x_{2}) = x_{1}$ $\frac{∂C}{∂w_{111}} = \frac{1}{m} \sum_{i=1}^m (y-y')x_{i1}$ $\frac{∂C}{∂w_{112}} = \frac{1}{m} \sum_{i=1}^m (y-y')x_{i2}$ But, since $z_{a1} = w_{111}x_{1} + w_{121}x_{2}$, and $z_{a2} = w_{211}x_{1} + w_{221}x_{2}$, isn't $\frac{∂C}{∂w_{111}} = \frac{1}{m} \sum_{i=1}^m (y-y')x_{i1} = \frac{∂C}{∂w_{211}}$, and so, isn't $w_{111} = w_{211}$, and $w_{121} = w_{221}$?
Are some weight gradients equal?
CC BY-SA 4.0
null
2023-03-13T19:18:17.787
2023-03-18T20:46:56.903
2023-03-14T06:33:43.197
147868
147868
[ "machine-learning", "python", "backpropagation", "derivation" ]
The problem was that $\frac{∂y'}{∂w_{111}} \neq \frac{∂z_{a1}}{∂w_{111}}$, but $\frac{∂y'}{∂w_{111}} = \frac{∂z_{b}}{∂w_{111}}$, where $z_b$ is the sum of the products of all the z-vectors of the last layer with the weights. So, here is an example of a NN with 3 layers, where the first two have 2 neurons and the last one has one neuron: (with no activation function, layer n.1 z-vectors are just $x_i$) - layer n. 2 $z_{11} = w_{111}x_1 + w_{112}x_2$ $z_{12} = w_{121}x_1 + w_{122}x_2$ - layer n. 3 $z_{21} = z_b = w_{211}z_{11} + w_{212}z_{12} $, hence: $z_{21} = w_{211}(w_{111}x_1 + w_{112}x_2) + w_{212}(w_{121}x_1 + w_{122}x_2)$ Now, let's compute the derivatives: - weights of layer n. 2 $\frac{∂z_b}{∂w_{111}} = x_1w_{211}$ $\frac{∂z_b}{∂w_{121}} = x_1w_{212}$ $\frac{∂z_b}{∂w_{112}} = x_2w_{211}$ $\frac{∂z_b}{∂w_{122}} = x_2w_{212}$ - weights of layer n. 3 $\frac{∂z_b}{∂w_{211}} = z_{11}$ $\frac{∂z_b}{∂w_{212}} = z_{12}$ As you can see, there is no weight derivative equal to another one. I hope this can be useful to anyone interested in studying backpropagation.
Conflicting directions of weights gradients in gradient descent?
> That is, maybe Err reduces when w1 goes towards ∂Err/∂w1 alone, but it might well be the case that Err actually increases when w1 is updated along with w2, that is when we are taking steps together in direction of all these weights, we might actually not go down the Err. Isn't this the case? Not exactly. Using back propagation, the weight gradients may be calculated precisely (for the given training data). It doesn't matter that there are many of them updating at the same time. The gradients don't "conflict" as such, they are 100% compatible. But they are only valid locally to the current values of $w_i$ An update to the all the weights at once in the opposite direction to the gradient is guaranteed to reduce the error value for that training data, with an important caveat that it is only fully guaranteed to make an infinitesimal small improvement, when the step size is also infinitesimal. If you make an update step that is too large (for some value of "too large" which varies a lot depending on context), then the curve may not remain consistent over the step and your error value could increase. This problem is not directly related to updating multiple weights at once. It can occur even when you have a single weight. However, when there is a more complex function, with more weights all changing at once, there can be more places where the function does this. In practice, infinitesimal updates would take too long to train, so you need to find a larger value - but not so large it causes problems - as a compromise. In addition: - The training data you have will usually allow you at best to create a rough approximation to the "true" function you are searching for. So you don't really want to find an actual global minimum in the error function for your training data set, as it would overfit. - Typically, using mini-batches, the gradient is also only a rough approximation, so updates only go roughly in the right direction (this can sometimes be good as it can help escape from local minima and saddle points)
120202
1
120968
null
0
37
I'm taking a data analysis course and decided to work on a customer analysis project. In the data, I have three countries: - USA (539 unique users) - BRA (385 unique users) - TUR (129 unique users) I'm trying to analyze the country that brings in the most income, so I've decided to look at the mean revenue for each country. However, when I do that I get the following result: [](https://i.stack.imgur.com/F7b18.png) It's not possible for Turkey to generate the highest mean, because it has the lowest number of users and the lowest sum. I think it's showing the highest mean because the denominator when calculating the average is small. How would you go about solving this issue? Should I randomly sample data from each country and then calculate the mean? I'd really appreciate any pointers you could give or direction on how this would be solved in a real scenario. Thank you in advance! :)
Customer Analysis - How to handle unbalanced data?
CC BY-SA 4.0
null
2023-03-14T20:48:19.333
2023-04-16T22:32:32.177
null
null
83395
[ "python", "data-analysis", "market-basket-analysis" ]
You can try stratified sampling. You can try SMOTE analysis. You can try methods like 10-fold cross-validation if you want to train an ML model. You can also use bagging, boosting or random forest algorithms.
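A minimal sketch of the stratified/balanced-sampling idea, assuming a per-user DataFrame `users` with `country` and `revenue` columns (the variable and column names are assumptions):

```python
import pandas as pd

# users: one row per user, with "country" and "revenue" columns (assumed)
n = users["country"].value_counts().min()   # size of the smallest country group (129 here)
balanced = users.groupby("country", group_keys=False).sample(n=n, random_state=0)
print(balanced.groupby("country")["revenue"].mean())
```

Repeating the sampling several times (or bootstrapping) gives a sense of how stable the per-country means are.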
Handle Unbalanced data
You have not specified which neural network you are using, but as said in the comments, you should try to fit your data first. You have to try to find a model that learns your training data. For this purpose you don't have to increase the amount of data, at least not at this stage. You should try to find a good model which suits your data. For this purpose you have to change the hyper-parameters of your neural network, e.g. the number of layers or the number of neurons in the layers. You can take a look [here](https://datascience.stackexchange.com/a/26642/28175) and [here](https://datascience.stackexchange.com/a/26291/28175); the former can help you, and the latter helps you understand the features learned by `CNNs`, in case you are using them. I have not seen a built-in `F1` score in `Keras`, but you can implement it and pass it to the compile method; take a look [here](https://stackoverflow.com/q/45411902/5120235).
120210
1
120213
null
1
27
I have a basic sequential neural network built with TensorFlow. ``` model = tf.keras.Sequential([ Dense(16, activation='relu', input_shape=(X_train.shape[1],), kernel_regularizer=l1_l2(0.001, 0.001)), Dropout(0.3), BatchNormalization(), Dense(64, activation='relu', kernel_regularizer=l1_l2(0.001, 0.001)), Dropout(0.3), BatchNormalization(), Dense(64, activation='relu'), Dense(1, activation='sigmoid') ]) model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['binary_accuracy', 'AUC']) ``` I train on 12,000 samples which are split evenly: 6000 are category == 1 and 6000 are category == 0. Currently my network treats each category equally. It is equally likely to correctly/wrongly categorise both categories (about 92% and 93% accuracy). However, in my application I need category 1 to be correctly identified >99% of the time. Category 0 accuracy can be reduced to as low as 84% to achieve this. How could I go about doing this?
How to bias a neural network towards one category in binary classification?
CC BY-SA 4.0
null
2023-03-15T09:43:38.723
2023-03-15T10:32:30.023
null
null
147960
[ "neural-network", "tensorflow", "binary-classification" ]
Adjusting your classification threshold is probably the easiest solution to try first. By default, you're likely classifying probabilities above 0.5 as class 1. A lower threshold would mean more entries classified as 1, at the expense of more false positives. First of all, you should define what you mean by category accuracy. Is that recall (the share of true positives among actual positives) or precision (the share of true positives among predicted positives) or something else entirely? Then, you can use tools such as sklearn's precision/recall curve to select your optimal threshold.
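A sketch of the threshold-tuning idea with scikit-learn; here `model`, `X_val` and `y_val` are assumed to be the trained Keras model and a held-out validation set:

```python
from sklearn.metrics import precision_recall_curve

proba = model.predict(X_val).ravel()                        # sigmoid outputs in [0, 1]
precision, recall, thresholds = precision_recall_curve(y_val, proba)

# e.g. pick the largest threshold that still recovers >= 99% of category 1
ok = recall[:-1] >= 0.99
chosen = thresholds[ok].max() if ok.any() else 0.5
preds = (proba >= chosen).astype(int)
```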
How to update bias in CNN?
Like the update rule for bias terms in dense layers, in convolutional nets the bias gradient is calculated using the sum of derivatives of the Z terms: $$ dJ / db = \sum_h \sum_w dZ_{hw} $$ where J is the cost function, w is the width of the activation after convolution and h is the height of the activation after convolution. db is computed by summing the dZs. It means you are summing over all the gradients of the conv output (Z) with respect to the cost. Calculating the error of the net depends on the cost function that you have used. Depending on whether you use cross-entropy, mean squared error or another justified cost function, you may have different update rules for the other parameters of your net. But if you use cross-entropy, which is common for variants of classification tasks, the above update rule is used for updating the bias terms.
120220
1
120223
null
1
32
Here is the example : ``` from ignite.metrics.nlp import Bleu from nltk.translate.bleu_score import sentence_bleu from torchmetrics.text.bleu import BLEUScore references = [['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog']] candidate = ['the', 'quick', 'brown', 'fox', 'jumped', 'over', 'the', 'lazy', 'dog','and','the','cat'] #using nltk score = sentence_bleu(references, candidate) print(score) #using torch_metrics bleu = BLEUScore() print(float(bleu(candidate,reference))) #using ignite bleu = Bleu() bleu.reset() bleu.update((candidate,references)) print(float(bleu.compute())) # 0.7102992180127422 # 0.0 # 0.0 ``` with the tested version : ``` import ignite print(ignite.__version__) import torchmetrics print(torchmetrics.__version__) import nltk print(nltk.__version__) #0.4.11 #0.11.4 #3.8.1 ``` What am I missing? The dynamic of values on nltk seems better than those of the torchmetrics and ignite frameworks ? can we obtain similar values, with a tweak of the respective parameters of each function ? Thank you for your time.
Why does BLEU score for ignite, torchmetrics and nltk differs?
CC BY-SA 4.0
null
2023-03-15T14:13:17.600
2023-03-15T15:19:20.907
null
null
147968
[ "deep-learning", "pytorch", "metric", "nltk" ]
[Torch metrics expects](https://torchmetrics.readthedocs.io/en/stable/text/bleu_score.html) untokenized sentences: ``` #using torch_metrics references = [['the quick brown fox jumped over the lazy dog']] candidate = ['the quick brown fox jumped over the lazy dog and the cat'] torchmetrics_bleu = BLEUScore() print(float(torchmetrics_bleu(candidate, references))) # 0.7102992534637451 ``` For ignite you are not providing the expected types of inputs, because [update](https://pytorch.org/ignite/generated/ignite.metrics.Bleu.html#ignite.metrics.Bleu.update) expect one more list nesting level: ``` # using ignite ignite_bleu = Bleu() ignite_bleu.reset() ignite_bleu.update(([candidate], [references])) print(float(ignite_bleu.compute())) # 0.7102992255729442 ```
BLEU_SCORE gives bad scores - what am I doing wrong?
You must be getting a warning with this output. Warning pretty much tells you the reason why the scores are 0. Because there ARE NO 2-gram , 3 -grams in you example which are overlapping. [](https://i.stack.imgur.com/l8zoj.png) Here is the detailed explanation, I couldn't explain it better- [https://github.com/nltk/nltk/issues/1838](https://github.com/nltk/nltk/issues/1838) EDIT- Solution- Although the warning tells the reason, here is how you can fix this- > Notice the ref and pre, ``` from nltk.translate.bleu_score import corpus_bleu ref =[[['electron', 'and', 'a', 'proton']]] pre =[['electron', 'and', 'a', 'proton']] print("Individual n-gram") print("Individual 1-gram") print('BLEU-1: %f' % corpus_bleu(ref, pre, weights=(1.0, 0, 0, 0))) print("Individual 2-gram") print('BLEU-2: %f' % corpus_bleu(ref, pre, weights=(0, 1.0, 0, 0))) print("Individual 3-gram") print('BLEU-3: %f' % corpus_bleu(ref, pre, weights=(0, 0, 1.0, 0))) print("Individual 4-gram") print('BLEU-4: %f' % corpus_bleu(ref, pre, weights=(0, 0, 0, 1.0))) ``` [](https://i.stack.imgur.com/aTBmG.png) You can refer `help` of python- [](https://i.stack.imgur.com/BC4Sp.png)
120224
1
120324
null
1
118
I need a way to map strings to a numeric space, where the mapping moves similar strings to the same number. For example: in ``` str1 = 'some random string one' str2 = 'some rzndom string one' str3 = 'some rndom string one' str4 = 'a very different string' ``` we would want [`str1`, `str2`, `str3`] all to be mapped to the same number, while `str4` be mapped to a relatively distant number We can assume all the strings are lowercase and have no punctuation. ## What I've tried So far, I've come up with using the sum of the ascii values and rounding down by some factor: ``` class StringMapper(TestCase): def translate_string_to_numeric_space(self, strr: str, granularity: int) -> int: list_of_char_vals = [ord(c) for c in strr] sum_of_chars = sum(list_of_char_vals) granularity_reduced_sum = self.round_down_to_nearest_multiple_of(sum_of_chars, granularity) return granularity_reduced_sum @staticmethod def round_down_to_nearest_multiple_of(num: int, multiple: int) -> int: return num - (num % multiple) ``` Then we get: ``` vals = [self.translate_string_to_numeric_space(s, 100) for s in [str1, str2, str3, str4]] ``` > [2100, 2100, 2000, 2200] #### What works in this solution: - it allowed a single typo ('a' -> 'z' in str1, str2) #### What didn't work in this solution: - it did not allow for omissions (str3 was not grouped with str1 and str2) - it did not create sufficient distance between str4 and the others Welcoming any ideas, thank you!! --- Clarification String-to-string comparison such as jaro-winkler or levenstein is not an option, as we have a very large number of strings, and pair-wise comparison squares the number of operations
mapping similar strings to same number values
CC BY-SA 4.0
null
2023-03-15T15:25:25.080
2023-03-28T15:37:06.387
2023-03-17T21:10:20.240
119043
119043
[ "machine-learning" ]
It might be useful to frame this problem using common terminology. Hashing is mapping an object, a string in this case, to an integer. What you want are collisions (i.e., similar objects are mapped to the same integer bucket). The goal is to pick a hashing function that does that based on the number of shared letters in the string. A scalable implementation of this idea is MinHash and locality-sensitive hashing (LSH). Here is a rough version using Python's datasketch library: ``` from datasketch import MinHash, MinHashLSH str1 = 'some random string one' str2 = 'some rzndom string one' str3 = 'some rndom string one' str4 = 'a very different string' strings = [str1, str2, str3, str4] # Hash each string, letter-by-letter hashes = [] for s in strings: m = MinHash(num_perm=128) for c in s: m.update(c.encode('utf8')) hashes.append(m) # Create LSH storage lsh = MinHashLSH(threshold=0.8, num_perm=128) for n, hash_value in enumerate(hashes, 1): lsh.insert(f"str{n}", hash_value) # Test that the queries for the hash values return expected neighbors hash_str1 = hashes[0] assert set(lsh.query(hash_str1)) == set(['str1', 'str2', 'str3']) hash_str4 = hashes[3] assert set(lsh.query(hash_str4)) == set(['str4']) ```
How to encode a column containing both string and numbers
Encoding is a way to transform categories into numerical variables, and there are a lot of techniques. The best technique depends on what information you want to encode and what model you are going to use. Some models benefit more from one technique than others. You should ask yourself the following questions to try to find the best solution: - Why do you have numerical and categorical values in a column? - Does it make sense to have them? - Which model am I using? - What is the best way to feed them to this model? From the bit that I get, I would recommend that you either do target encoding with everything or split that feature into two, and then do target encoding on the categorical part and even on the numerical one. Just check which works better.
120262
1
120273
null
0
55
I am developing binary classification models to predict a medical condition in my dataset. My results show that both Logistic Regression and Linear SVM consistently outperformed other ML algorithms (SVM, NB, MLP and DT), as can be seen in the following screenshot: [](https://i.stack.imgur.com/8VB9T.png) Observing recent research, I found multiple studies and reviews that talk about the phenomenon of machine learning not being superior to logistic regression for clinical prediction models, such as this systematic review of 71 studies: [https://pubmed.ncbi.nlm.nih.gov/30763612/](https://pubmed.ncbi.nlm.nih.gov/30763612/). I would like to understand what it means for LR to outperform other, more complex ML algorithms. Does it just indicate that my classes are linearly separable?
Why does Logistic Regression perform better than machine learning models in clinical prediction studies
CC BY-SA 4.0
null
2023-03-16T12:22:48.740
2023-03-22T15:43:08.040
2023-03-22T15:43:08.040
139065
99648
[ "machine-learning", "logistic-regression", "binary-classification", "linearly-separable" ]
Clinical trial data is typically collected from a sample population and often has a limited size and number of features. Complex models applied to such data are more likely to overfit, whereas simpler models like LR are less prone to overfitting and can generalize better. Furthermore, the features and the outcome variable may exhibit a predominantly linear relationship, making LR a suitable choice for these types of datasets. The underlying linearity assumption between the features and the outcome variable also contributes to LR's robustness to noise and outliers, which is another factor behind its better performance. On the other hand, SVM is sensitive to hyperparameter selection. Tuning SVM can be particularly challenging when dealing with small datasets, such as clinical data. It is worth noting that SVM can model non-linear relationships using kernel functions (non-linear functions); however, this capability does not guarantee improved performance if the data's underlying relationship is inherently linear. In some cases, a linear kernel might perform well, but this is not a certainty. It is important to consider various aspects before drawing any concrete conclusions. Performance may vary depending on the specific problem, the nature of the datasets, and of course the choice of hyperparameters.
Algorithm selection rationale (Random Forest vs Logistic Regression vs SVM)
As a starting point, I will expand on what you suggested by just adding the following: - Knowing the type of data you are working with and its characteristics (categorical, supervised/unsupervised, data size, etc.). - Knowing what accuracy requirements you have, the timeframe and computational power you have at your disposal vs accuracy, and really answering "why am I trying to solve this problem?" After answering these questions you can at least narrow down slightly what you may use (and eliminate those you clearly don't believe fit). After that, I suppose it's trial and error, experience, and comparing with others who dealt with similar datasets and problems. I have this crude flow chart I found in my favourites from the scikit-learn website. Not sure where I found it, to be honest. Take it for what you will; hopefully it helps somewhat: [https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html](https://scikit-learn.org/stable/tutorial/machine_learning_map/index.html)
120263
1
120380
null
1
52
I am training a MLP on a tabular dataset, the pendigits dataset. Problem is that training loss and accuracy are more or less stable, while validation and test loss and accuracy are completely constant. The pendigits dataset contains 10 classes. My code is exactly the same with other experiments that I did for example on MNIST or CIFAR10 that work correctly. The only things that change are the dataset from MNIST/CIFAR10 to pendigits and the NN, from a ResNet-18 to a simple MLP. Below the training function and the network: ``` def train(net, loaders, optimizer, criterion, epochs=100, dev=dev, save_param = True, model_name="only-pendigits"): torch.manual_seed(myseed) try: net = net.to(dev) print(net) # Initialize history history_loss = {"train": [], "val": [], "test": []} history_accuracy = {"train": [], "val": [], "test": []} # Process each epoch for epoch in range(epochs): # Initialize epoch variables sum_loss = {"train": 0, "val": 0, "test": 0} sum_accuracy = {"train": 0, "val": 0, "test": 0} # Process each split for split in ["train", "val", "test"]: # Process each batch for (input, labels) in loaders[split]: # Move to CUDA input = input.to(dev) labels = labels.to(dev) # Reset gradients optimizer.zero_grad() # Compute output pred = net(input) #labels = labels.long() loss = criterion(pred, labels) # Update loss sum_loss[split] += loss.item() # Check parameter update if split == "train": # Compute gradients loss.backward() # Optimize optimizer.step() # Compute accuracy _,pred_labels = pred.max(1) batch_accuracy = (pred_labels == labels).sum().item()/input.size(0) # Update accuracy sum_accuracy[split] += batch_accuracy scheduler.step() # Compute epoch loss/accuracy epoch_loss = {split: sum_loss[split]/len(loaders[split]) for split in ["train", "val", "test"]} epoch_accuracy = {split: sum_accuracy[split]/len(loaders[split]) for split in ["train", "val", "test"]} # Update history for split in ["train", "val", "test"]: history_loss[split].append(epoch_loss[split]) history_accuracy[split].append(epoch_accuracy[split]) # Print info print(f"Epoch {epoch+1}:", f"TrL={epoch_loss['train']:.4f},", f"TrA={epoch_accuracy['train']:.4f},", f"VL={epoch_loss['val']:.4f},", f"VA={epoch_accuracy['val']:.4f},", f"TeL={epoch_loss['test']:.4f},", f"TeA={epoch_accuracy['test']:.4f},", f"LR={optimizer.param_groups[0]['lr']:.5f},") except KeyboardInterrupt: print("Interrupted") finally: # Plot loss plt.title("Loss") for split in ["train", "val", "test"]: plt.plot(history_loss[split], label=split) plt.legend() plt.show() # Plot accuracy plt.title("Accuracy") for split in ["train", "val", "test"]: plt.plot(history_accuracy[split], label=split) plt.legend() plt.show() ``` Network: ``` #TEXT NETWORK class TextNN(nn.Module): #Constructor def __init__(self): # Call parent contructor super().__init__() torch.manual_seed(myseed) self.relu = nn.ReLU() self.linear1 = nn.Linear(16, 128) #16 are the columns in input self.linear2 = nn.Linear(128, 128) self.linear3 = nn.Linear(128, 32) self.linear4 = nn.Linear(32, 10) def forward(self, tab): tab = self.linear1(tab) tab = self.relu(tab) tab = self.linear2(tab) tab = self.relu(tab) tab = self.linear3(tab) tab = self.relu(tab) tab = self.linear4(tab) return tab model = TextNN() print(model) ``` Is it possible that the model is too simple that it does not learn anything? I do not think so. I think that there is some error in training (but the function is exactly the same with the function I use for MNIST or CIFAR10 that works correctly), or in the data loading. 
Below is how I load the dataset: ``` pentrain = pd.read_csv("pendigits.tr.csv") pentest = pd.read_csv("pendigits.te.csv") class TextDataset(Dataset): """Tabular and Image dataset.""" def __init__(self, excel_file, transform=None): self.excel_file = excel_file #self.tabular = pd.read_csv(excel_file) self.tabular = excel_file self.transform = transform def __len__(self): return len(self.tabular) def __getitem__(self, idx): if torch.is_tensor(idx): idx = idx.tolist() tabular = self.tabular.iloc[idx, 0:] y = tabular["class"] tabular = tabular[['input1', 'input2', 'input3', 'input4', 'input5', 'input6', 'input7', 'input8', 'input9', 'input10', 'input11', 'input12', 'input13', 'input14', 'input15', 'input16']] tabular = tabular.tolist() tabular = torch.FloatTensor(tabular) if self.transform: tabular = self.transform(tabular) return tabular, y penditrain = TextDataset(excel_file=pentrain, transform=None) train_size = int(0.80 * len(penditrain)) val_size = int((len(penditrain) - train_size)) pentrain, penval = random_split(penditrain, (train_size, val_size)) pentest = TextDataset(excel_file=pentest, transform=None) ``` All is loaded correctly, indeed if I print an example: ``` text_x, label_x = pentrain[0] print(text_x.shape, label_x) text_x torch.Size([16]) 1 tensor([ 48., 74., 88., 95., 100., 100., 78., 75., 66., 49., 64., 23., 32., 0., 0., 1.]) ``` And these are my dataloaders: ``` #Define generators generator=torch.Generator() generator.manual_seed(myseed) # Define loaders from torch.utils.data import DataLoader train_loader = DataLoader(pentrain, batch_size=128, num_workers=2, drop_last=True, shuffle=True, generator=generator) val_loader = DataLoader(penval, batch_size=128, num_workers=2, drop_last=False, shuffle=False, generator=generator) test_loader = DataLoader(pentest, batch_size=128, num_workers=2, drop_last=False, shuffle=False, generator=generator) ``` I have been stuck with this problem for 2 days, and I do not know what the problem is... EDIT: Basically, if I write `print(list(net.parameters()))` at the beginning of each epoch, I see that weights does never change, and for this reason loss and accuracy remain constant. Why weights are not changing??
Neural network not learning at all
CC BY-SA 4.0
null
2023-03-16T13:53:33.503
2023-03-20T21:02:10.667
2023-03-17T14:47:39.203
109836
109836
[ "pytorch", "loss-function", "accuracy", "mlp", "csv" ]
I solved it... The mistake was that I was calling `model = TextNN()` again after instantiating the optimizer, so the weights were never updated. Everything else was fine; the optimizer was simply working with another (unused) model.
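For reference, a short sketch of the intended order, reusing the names from the question (the optimizer choice and learning rate here are only placeholders):

```python
model = TextNN().to(dev)                                   # create the model exactly once
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # built from *this* model's parameters
criterion = nn.CrossEntropyLoss()
# (the scheduler referenced inside train() would be created here as well, from this optimizer)

# any later `model = TextNN()` would leave the optimizer updating the old, unused parameters
train(model, loaders, optimizer, criterion)
```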
Why is my neural network not learning?
A neural network is the wrong approach for a problem with a small training set. Even if you only have 2 features that are very representative of your function, 16 examples are not sufficient. As a very general rule of thumb I use 100 examples for each feature in my dataset. This then increases exponentially with every single different class you expect. 16 instances are not enough to train a neural network. You will always have huge error margins when applying your model on a testing set. Even more problematic is the fact that you are using a very deep neural network. This will require even more training instances to properly learn the function. I suggest you use a general machine learning technique such as SVM. This will likely give better results. Try these techniques instead and see what results you get: k-NN, kernel SVM, k-means clustering. But be warned: 16 training instances is still very little.
120267
1
120268
null
0
15
# Background I've created a binary classification model that predicts the probability of fraud for a given sample. The choice of threshold allows me to set how many frauds are captured in the training dataset. However, when I test the model on a similar dataset that was collected a year later, the threshold must be lowered to capture the same number of samples. # Issue This implies that my model is under-predicting the probability of fraud in the new dataset. This implies that the model's predictions are drifting towards 0 and that the distribution is decreasing and narrowing. # Question Is this a normal occurrence in models that are built on complex social systems?
Do models of social systems suffer from prediction drift?
CC BY-SA 4.0
null
2023-03-16T14:46:45.117
2023-03-16T15:47:08.140
2023-03-16T14:52:29.600
138508
138508
[ "machine-learning", "classification", "predictive-modeling", "prediction", "probability" ]
I wouldn't call it desirable, but it surely is possible and quite common. There are several reasons: - Changes in fraud patterns: if the model was trained on historical data that is no longer representative of current fraud patterns, it may not be able to detect the latest types of fraud. - Drift: data drift occurs when the statistical properties of the data being analyzed change over time. For example, if the demographics of the population being analyzed change, or if new types of transactions are introduced, the model may not be able to detect fraud as effectively. - Model decay: the model can lose performance over time due to changes in the environment. It is similar to drift; you have to retrain or refit the model using new data periodically to avoid this. - The fraudsters have hacked you: they now know how the model works and are avoiding detection :) In your case it is most likely one of these common causes, so I suggest refitting.
Is it possible to forecast the evolution of cars?
This can be treated as Multivariate Time Series Forecasting, for this you can look into Vector AutoRegression(VAR). Because there might be some association between attributes which we need to take care of hence we can't treat them as separate time series entities. If you have found some hierarchical relationship between variable you can also look into hierarchical time series forecasting.
120272
1
120280
null
4
289
I've created a model that has recently started suffering from drift. I believe the drift is due to changes in the dataset but I don't know how to show that quantitatively. What techniques are typically used to analyze and explain model (data) drift? Extra: The data is Tabular.
What techniques are used to analyze data drift?
CC BY-SA 4.0
null
2023-03-16T23:38:04.340
2023-05-05T07:32:06.693
2023-03-17T07:47:14.513
138508
138508
[ "dataset", "machine-learning-model", "data-science-model", "concept-drift", "data-drift" ]
It depends on what type of data we are talking about: tabular, image, text... This is part of my PhD, so I am completely biased, but I will suggest Explanation Shift (I would love some feedback). It works well on tabular data. - Package: skshift https://skshift.readthedocs.io/ - Paper: https://arxiv.org/pdf/2303.08081.pdf In the related work section one can find other approaches. The main idea behind "Explanation Shift" is seeing how distribution shift impacts the model behaviour. We compare how the explanations (Shapley values) look on the test set versus on the supposed out-of-distribution data. The issue is that, in the absence of labels for the OOD data (y_ood), one cannot estimate the performance of the model. You either need to provide some samples of y_ood, or characterize the type of shift. Since you can't calculate performance metrics, the second-best option is to understand how the model's behaviour has changed. There is also a well-known library, Alibi Detect [https://github.com/SeldonIO/alibi-detect](https://github.com/SeldonIO/alibi-detect), that has other methods :)
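As a rough illustration of the idea of comparing explanations on reference vs. new data, here is a sketch using plain shap and scipy rather than the skshift package (so nothing here is that package's API; the data, model and shift are made up):

```python
import numpy as np
import shap
from scipy.stats import ks_2samp
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X_ref = rng.normal(size=(500, 5))                      # reference (training-time) data
y_ref = X_ref[:, 0] + X_ref[:, 1] + rng.normal(scale=0.1, size=500)
X_new = X_ref + np.array([1.5, 0, 0, 0, 0])            # "drifted" data: feature 0 shifted

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_ref, y_ref)

explainer = shap.TreeExplainer(model)
shap_ref = explainer.shap_values(X_ref)                # shape (n_samples, n_features)
shap_new = explainer.shap_values(X_new)

# Per-feature two-sample KS test on the SHAP value distributions:
# a small p-value suggests the model is being *used* differently on the new data.
for j in range(X_ref.shape[1]):
    stat, p = ks_2samp(shap_ref[:, j], shap_new[:, j])
    print(f"feature {j}: KS={stat:.3f}, p={p:.4f}")
```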
How to Combat Data Drift
As you suggest, that situation could end up with your monitoring system indicating data drift. To evaluate this scenario, let's classify some types of data drift we could have: - feature drift: when the distribution of the input features (comparing training datasets vs prediction datasets) changes enough (beyond a defined threshold) to raise an alert - target drift: the distribution of the label values changes when comparing training vs prediction distributions - concept drift: when the relation between the input features and target values changes; it can arise when the label is redefined (for instance, the business rules for deciding which clients are active or inactive with some products; if the labeling rules change, the same input feature values could be assigned to different target values before vs after the redefinition). Monitoring these drifts can be carried out via hypothesis testing, as you say (e.g. the [Kolmogorov-Smirnov test](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html), the [Population Stability Index](https://www.listendata.com/2015/05/population-stability-index.html), etc.), where you define the warning thresholds. The point is: what is the goal of this drift monitoring? In general, the advice is to retrain your models when these types of drift occur: it might or might not improve your model performance, but you make sure to have a model updated with fresh data. Another goal is, of course, to keep track of the statistics of your updated data. Nevertheless, in this scenario of a subset of client ages, although the model was trained with more "complete" datasets, you are making inference on a subset of the whole population used for training the model, so your model could still be valid enough (unless this new scenario becomes the usual one, in which case a more custom model could be trained with this new, more specific kind of data).
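As a concrete illustration of the hypothesis-testing approach mentioned above, here is a small sketch (feature names, thresholds and the synthetic shift are made up) that runs a per-feature Kolmogorov-Smirnov test and a simple PSI calculation between a training sample and a recent scoring sample:

```python
import numpy as np
import pandas as pd
from scipy.stats import ks_2samp

def psi(expected, actual, bins=10):
    """Population Stability Index between two 1-D samples, using quantile bins
    of the expected (training) distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
train = pd.DataFrame({"amount": rng.lognormal(3, 1, 5000),
                      "age": rng.normal(40, 10, 5000)})
recent = pd.DataFrame({"amount": rng.lognormal(3.3, 1, 5000),   # deliberately shifted feature
                       "age": rng.normal(40, 10, 5000)})

for col in train.columns:
    ks_stat, p_value = ks_2samp(train[col], recent[col])
    drift_psi = psi(train[col].to_numpy(), recent[col].to_numpy())
    print(f"{col}: KS p-value={p_value:.4f}, PSI={drift_psi:.3f}")
    # Common (arbitrary) rules of thumb: p-value < 0.05 or PSI > 0.2 -> investigate / retrain
```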
120289
1
120297
null
0
52
I am currently working on an NLP model that compares two comments and determines which one would be more popular. I have already come up with an architecture - it will be based on GPT-2. But now I am struggling to understand the general format of its output. I inspected [this](https://github.com/graykode/gpt-2-Pytorch/blob/master/GPT2/model.py) PyTorch implementation of GPT-2 and here is what I understood: - GPT2Model is the main transformer block, which uses a stack of decoders (class Block). - Block is just one decoder block with attention and convolution layers - GPT2LMHead is just some number of fully-connected layers. A simple classification head. What I don't understand so far is: - What is the presents variable for? I looked inside and it is just a list of tensors, but I can't really figure out what they are. - If I want to get an embedding of my input sentence, which class do I need to use? I thought it was GPT2Model that returns some hidden states, but it returns a matrix with dimensions (batch_size, sentence_length + smth, 768). Why is it a matrix, and how do I get a vector then? - What is the purpose of the set_embedding_weights method? To be honest, I don't even understand what embedding weights really are. - If I want my output to be of fixed shape, what placeholders do I need to use when an input sentence is shorter than the max input size of the GPT-2 model? Please, can you help me understand this? I would appreciate any help. Thank you in advance!
GPT-2 architecture question
CC BY-SA 4.0
null
2023-03-17T13:29:09.533
2023-03-17T15:41:23.483
null
null
148029
[ "machine-learning", "neural-network", "nlp", "pytorch", "gpt" ]
- presents are the model hidden states. This return value is used during inference, to avoid recomputing the previous steps' hidden states at every inference step (see sample.py). - Autoregressive language models do not generate sentence representations; they just predict the next token. Typically, it has been masked language models (like BERT) that generate a sentence-level representation, usually training it on some sentence-level task (e.g. next sentence prediction in BERT). Therefore, GPT-2 is simply not meant for what you want. - set_embedding_weights is a method of the GPT2LMHead that sets the fully connected matrix to be equal to the embedding matrix. This way, you reuse the same parameters for the embedding matrix and for the logit projection matrix. This is a frequent approach to regularize the model (i.e. avoid overfitting by reducing the total number of parameters). I believe this is the piece of research that introduced such an approach. - As I mentioned in point (2), GPT-2 is not meant for this. One option would be to average the output representations over all tokens. Another option would be to use a model that is specifically designed to obtain sentence-level representations, like BERT.
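If you do go the "average the token representations" route, a minimal sketch with the Hugging Face transformers library (not the graykode repository from the question, so the class names differ) could look like this:

```python
import torch
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2")
model.eval()

sentence = "This comment is going to be very popular."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state: (batch_size, sequence_length, 768) for the base GPT-2
token_states = outputs.last_hidden_state
sentence_embedding = token_states.mean(dim=1)   # crude pooling into a single 768-d vector
print(sentence_embedding.shape)                 # torch.Size([1, 768])
```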
Can I fine tune GPT-3?
The weights of GPT-3 are not public. You can fine-tune it, but only through the interface provided by OpenAI. In any case, GPT-3 is too large to be trained on CPU. As for other similar models, like GPT-J, they would not fit on an RTX 3080, because it has 10/12 GB of memory and GPT-J takes 22+ GB for float32 parameters. It should be possible to fine-tune some special versions that use int8 precision, like [this one](https://github.com/huggingface/transformers/issues/14839).
120301
1
120305
null
1
228
In natural language processing, the cosine similarity is often used to compute the similarity between two words. It is bounded between [-1, 1]. Supposedly, 1 means complete similarity, -1 means something like antonyms, and 0 means no relationship between the words, although I am unsure whether that fully holds true in praxis. For another application, I need to convert the cosine similarity to a probability between 0 and 1. A straightforward solution would be to take the absolute value of the cosine similarity, but does this make sense? My goal is simply to assign higher scores to words that occur in similar contexts (i.e. could be swapped out and still leave the sentence plausible).
Convert cosine similarity to probability
CC BY-SA 4.0
null
2023-03-17T16:33:17.997
2023-03-17T21:25:03.720
2023-03-17T19:01:07.117
141192
141192
[ "nlp", "cosine-distance" ]
You can try to normalize the similarity: `norm_sim = (sim + 1) / 2`, where sim is the cosine. In this case, 0 means opposite similarity, 0.5 means no relationship between words and 1 means complete similarity. In my opinion, taking the absolute value of the cosine wouldn't make much sense, because a word that has complete similarity with another one would get the same value as its antonym. [source](https://stackoverflow.com/questions/56316903/cosine-similarity-between-0-and-1) If you want to assign a higher score to antonyms than to unrelated words, you could try assigning weights to the cosine values < 0 that reduce the value more as it approaches 0. For example: ``` def weights(x): if x < 0: return ((0.75*x)+1)/2 else: return (x+1)/2 ``` I know this may be an awful choice of weights, but I think the idea is valid.
Threshold determination / prediction for cosine similarity scores
As far as I know there is no satisfactory answer: - One uses a threshold in order to avoid having to choose a specific K in a top K approach. The threshold is often selected manually to eliminate the sentences which are really not relevant. This makes this method more suitable for favouring recall, if you ask me. - Conversely, one uses a "top K" approach in order not to select a threshold. I think K is often selected quite low in order to keep mostly relevant sentences, i.e. it's an approach more suitable for high precision tasks. The choice depends on the task: - First, the approach could be chosen based on what is done with the selected sentences: if it's something like question answering, one wants high precision usually. If it's information retrieval, one wants high recall. If it's a search engine, just rank the sentences by decreasing similarity. - Then for the value itself (K or threshold), the ideal case is to do some hyper-parameter tuning. i.e. testing multiple values and evaluate the results. If this is convenient or doable for the task, then look at a few examples and manually select a value which looks reasonable.
120313
1
120323
null
1
38
I am very very new to the world of data science as I only started using it in my new job so I would really appreciate help from the community experts (maybe also in simple words :)). I am trying to build a dataset comprising data extracted from a NetCDF data file. The data extract would contain n number of images each of 25x25 size in 17 channels. The idea is to save them as a new data file or object (it could be NetCDF but there is no restriction as long as it is readable by xarray). I am unable to find a way to achieve this because, in xarray, you have N-dimensional data, and to each point in this N-dimensional data there is a label attached. So how do I save 25x25 images with 17 variables (channels) in one dimension (axis of length n, the number of images) so that when I pass the index of the axis (nth image), it returns as dataArray of 17x25x25. Thanks in advance.
Making a netcdf data using xarray
CC BY-SA 4.0
null
2023-03-18T10:20:21.193
2023-03-18T15:37:07.103
null
null
148064
[ "dataset", "data-mining", "data", "data-science-model", "feature-extraction" ]
``` import numpy as np !pip install netCDF4 !pip install xarray import netCDF4 as nc import xarray as xr image0 = np.array([[[np.zeros(25)+1] for i in range(25)] for i in range(17)]).squeeze() image1 = np.array([[[np.zeros(25)+2] for i in range(25)] for i in range(17)]).squeeze() image2 = np.array([[[np.zeros(25)+3] for i in range(25)] for i in range(17)]).squeeze() # creating some arrays with the same size as the images you asked for array = np.array([image0, image1, image2]) # numpy array of shape (3, 17, 25, 25) df = xr.DataArray(array) # xarray DataArray # df[0] = image0, df[1] = image1, df[2] = image2 ``` Let me know if you meant something else.
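Going slightly beyond the snippet above (the dimension names and file name here are just illustrative), you could also give the DataArray named dimensions and write it to a NetCDF file that xarray can read back, which is what the question ultimately asks for:

```python
import numpy as np
import xarray as xr

n, channels, height, width = 3, 17, 25, 25
array = np.random.rand(n, channels, height, width)   # stand-in for the real images

da = xr.DataArray(
    array,
    dims=("image", "channel", "y", "x"),
    name="images",
)

da.to_netcdf("images.nc")                 # save as NetCDF

loaded = xr.open_dataarray("images.nc")   # read it back with xarray
print(loaded.isel(image=0).shape)         # (17, 25, 25): the nth image as 17x25x25
```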
Creating neural net for xor function
Yes, there is a reason. It has to do with how you initialize your weights. There are 16 local minimums that have the highest probability of converging between 0.5 - 1. [](https://i.stack.imgur.com/1icYN.png) Here is a paper that analyses the xor problem: [Learning XOR: exploring the space of a classic problem](https://www.maths.stir.ac.uk/%7Ekjt/techreps/pdf/TR148.pdf), Bland 1998.
120346
1
120348
null
0
15
When performing backpropagation with the Adam algorithm, are the first and second moments of the weight vectors also calculated for the weights in the hidden layers?
moments of weight vectors in Adam
CC BY-SA 4.0
null
2023-03-19T21:06:10.837
2023-03-19T21:55:10.003
null
null
147868
[ "machine-learning", "gradient-descent", "backpropagation" ]
It looks like it. The equations in the description of the algorithm in [Hands-on Machine Learning](https://www.oreilly.com/library/view/hands-on-machine-learning/9781492032632/) as well as the [original paper](https://arxiv.org/abs/1412.6980) do not differentiate between parameters (weights) in different layers. Further, the [scikit-learn implementation](https://github.com/scikit-learn/scikit-learn/blob/main/sklearn/neural_network/_stochastic_optimizers.py#L198) of Adam has `ms` and `vs` (first and second moment) vectors whose length equals that of the parameters, and updates these alongside updates to the weights themselves. [Typically](https://github.com/scikit-learn/scikit-learn/blob/9aaed4987/sklearn/neural_network/_multilayer_perceptron.py#L761), the parameters of a multilayer neural network are [unpacked or unrolled into a single vector](https://www.youtube.com/watch?v=dlEoLfA4MSQ&list=PLLssT5z_DsK-h9vYZkQkYNWcItqhlRJLN&index=53), which is what gets passed into the call to the optimizer function. Since the first and second moments are calculated for all of this vector in the scikit-learn implementation, you are right that the moments are calculated for the weights in the hidden layers as well. I hope that helps.
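For concreteness, here is a minimal NumPy sketch of the Adam update in which the moment estimates m and v have exactly the same shape as the full (unrolled) parameter vector, hidden-layer weights included; the toy objective is made up:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. theta, grad, m and v all share the same shape:
    the unrolled vector of ALL network parameters (hidden layers included)."""
    m = beta1 * m + (1 - beta1) * grad          # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2     # second moment (uncentered variance)
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# toy usage: 10 parameters standing in for all weights of a small network
theta = np.random.randn(10)
m = np.zeros_like(theta)
v = np.zeros_like(theta)
for t in range(1, 101):
    grad = 2 * theta                            # gradient of f(theta) = ||theta||^2
    theta, m, v = adam_step(theta, grad, m, v, t)
print(np.round(theta, 4))                       # moves toward the minimum at 0
```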
When does Adam update its weights?
Adam works in the same way as SGD does in this regard, it updates the weights at the end of each iteration, so at the end of an epoch multiple weight updates have been applied. Inherently neither Adam nor SGD do anything to counteract the noisy labels, they just try to find the best parameters that minimize a loss function. I don't think anyone can answer apriori if it will be better to use Adam or SGD for your problem.
120352
1
120933
null
1
29
I'm new to machine learning and I want to use random forest for the problem I have. What I have done so far is an 80/20 split of the original data set. I need to understand what will happen next when building the random forest model. I understand that the next step is taking a random sample (with replacement) from the 80% portion used for training, and using this bootstrapped data set to build a decision tree. Let's assume I have 5 features/columns, and to select a root node, a random subset of the 5 features is selected, and the variable that has the highest `Gini index` is the root. Let us assume feature 2 is the root. Next, a random subset of the remaining 4 features is selected to create child nodes for the root node. Let's assume features 1 and 5 are selected. My questions are: 1- after obtaining the `Gini index` for features 1 and 5, and assuming node 1 has the higher index value, how do I know if node 1 should be the left or right node? 2- does node 5 become the left/right node now? Or do we select a random subset of the remaining 3 features (3, 4, 5), find their `Gini index` values, and the right child becomes the node with the highest `Gini index`?
how each tree in random forest structured/built?
CC BY-SA 4.0
null
2023-03-20T01:24:43.420
2023-04-15T11:37:15.537
null
null
148111
[ "machine-learning", "random-forest", "supervised-learning" ]
Great questions! I'll do my best to answer them: - Once you have selected feature 2 as the root and features 1 and 5 as the candidates for the first split, you need to determine whether node 1 should be the left or right node. To make this decision, you will split the data based on the values of feature 2. Any data points with a feature 2 value less than or equal to the threshold value for node 1 will be assigned to the left node, and any data points with a feature 2 value greater than the threshold value will be assigned to the right node. Once the data is split, you can calculate the Gini index for each child node using the remaining features. - Node 5 is not automatically assigned to the left or right node after node 1 is determined. Instead, you will select a new random subset of the remaining 3 features (3, 4, 5) to create child nodes for node 1. You will repeat the same process as before: calculate the Gini index for each candidate feature, select the feature with the best Gini score (highest Gini gain, equivalently lowest weighted Gini impurity) as the next node, and split the data based on the threshold value of that node. Then, you can calculate the Gini index for each child node using the remaining features. You will continue this process recursively until you reach a stopping criterion, such as a `maximum depth of the tree` or a `minimum number of samples per leaf node`. Each decision tree in the random forest will be trained on a bootstrapped sample of the training data. The final prediction will be the mode (for classification) or mean (for regression) of the predictions from all the trees in the forest.
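To make the recursion concrete, here is a compact sketch of how one tree in a random forest can be grown with a random feature subset considered at every split (simplified: numeric features only, Gini impurity, majority-vote leaves, no pruning; all names are made up):

```python
import numpy as np

def gini(y):
    """Gini impurity of a label array."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(X, y, feature_subset):
    """Return (feature, threshold) minimizing the weighted child impurity."""
    best = (None, None, np.inf)
    for f in feature_subset:
        for t in np.unique(X[:, f]):
            left = X[:, f] <= t               # convention: value <= threshold goes left
            right = ~left
            if left.sum() == 0 or right.sum() == 0:
                continue
            score = (left.sum() * gini(y[left]) + right.sum() * gini(y[right])) / len(y)
            if score < best[2]:
                best = (f, t, score)
    return best[0], best[1]

def grow_tree(X, y, max_depth=5, min_samples=2, n_sub_features=2, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    if max_depth == 0 or len(y) < min_samples or gini(y) == 0.0:
        values, counts = np.unique(y, return_counts=True)
        return {"leaf": values[np.argmax(counts)]}          # majority-class leaf
    subset = rng.choice(X.shape[1], size=n_sub_features, replace=False)  # random feature subset
    f, t = best_split(X, y, subset)
    if f is None:
        values, counts = np.unique(y, return_counts=True)
        return {"leaf": values[np.argmax(counts)]}
    left = X[:, f] <= t
    return {"feature": f, "threshold": t,
            "left": grow_tree(X[left], y[left], max_depth - 1, min_samples, n_sub_features, rng),
            "right": grow_tree(X[~left], y[~left], max_depth - 1, min_samples, n_sub_features, rng)}

# toy usage with 5 features: bootstrap sample, then grow one tree of the forest
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = (X[:, 1] + X[:, 4] > 1).astype(int)
boot = rng.integers(0, len(y), len(y))          # bootstrap (sampling with replacement)
tree = grow_tree(X[boot], y[boot], rng=rng)
```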
What kind of decision tree are used in random forest?
In short: it can be any type of tree inside the forest :) A random forest is an ensemble of many decision trees. The success of a random forest highly depends on using uncorrelated decision trees. If we use the same or very similar trees, the overall result will not be much different from the result of a single decision tree. Random forests achieve uncorrelated decision trees through bootstrapping and feature randomness. So what type of trees are inside? This depends on the implementation. Generally, any bootstrap-aggregated, attribute-bagged learner based on trees (any of them) is called a random forest. You get different flavors using different trees. For example, the randomForest() function in R uses the CART algorithm; CART, C4.5 or C5.0 - any of these can be used to grow a forest.
120358
1
120363
null
0
168
The original Transformer paper ([Vaswani et al; 2017 NeurIPS](https://arxiv.org/pdf/1706.03762.pdf)) describes the model architecture and the hyperparameters in quite some detail, but it does not provide the exact (or even rough) model size in terms of parameters (model weights). I could not find a source with a definite answer on this. Table 3 also mentions a `base` and a `big` model, but the model size is not given for either. How many parameters does a `base` or a `big` Transformer model, according to the original implementation by Vaswani et al., have?
How many parameters does the vanilla Transformer have?
CC BY-SA 4.0
null
2023-03-20T09:33:01.243
2023-03-20T12:58:23.317
null
null
43061
[ "machine-learning", "nlp", "transformer", "attention-mechanism" ]
Table 3 has all the values of the hyper-parameters of the models. See the image below: green values are for the base model and blue for the big model. [](https://i.stack.imgur.com/ozUPr.png) You can use these to get the matrix sizes. For example, for the multi-headed attention in section 3.2.2 [](https://i.stack.imgur.com/P54uj.png) the matrix $W^{Q}_{i}\in\mathbb{R}^{d_{model}\times d_{k}}$ will have dimension $d_{model}\times d_{k} = 512\times 64$ for the base model, which is $32,768$ parameters. For a single $head_{i}$, that value is $\times 3 = 98,304$ (counting $W^{Q}_{i}$, $W^{K}_{i}$ and $W^{V}_{i}$). For the multi-head attention, that value is $\times h = \times 8 = 786,432$ parameters for the base model. You can use the other values from Table 3 to figure out the rest of the model matrices. For example, $d_{ff}$ is used in section 3.3. Table 3 says that the total of all parts of the model should be $65\times 10^6$ for the base model and $213\times 10^6$ for the big model.
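As a rough back-of-the-envelope check (ignoring biases and layer norms, and assuming the paper's ~37k shared BPE vocabulary), a few lines of Python reproduce the order of magnitude of the 65M figure for the base model:

```python
# Approximate parameter count of the base Transformer (biases/LayerNorm omitted).
d_model, d_ff, h, n_layers, vocab = 512, 2048, 8, 6, 37000  # vocab ~37k shared BPE (assumption)

attn = 4 * d_model * d_model          # W_Q, W_K, W_V, W_O taken together per attention block
ffn = 2 * d_model * d_ff              # the two linear layers of the feed-forward block

encoder_layer = attn + ffn            # self-attention + FFN
decoder_layer = 2 * attn + ffn        # masked self-attention + cross-attention + FFN

embeddings = vocab * d_model          # shared input/output embeddings and softmax projection

total = n_layers * (encoder_layer + decoder_layer) + embeddings
print(f"~{total/1e6:.1f}M parameters")   # roughly 63M, close to the 65M reported in Table 3
```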
Why does vanilla transformer has fixed-length input?
The restriction in the maximum length of the transformer input is due to the needed amount of memory to compute the self-attention over it. The amount of memory needed by the self-attention in the Transformer is quadratic on the length of the input. This means that increasing the maximum length of the input, increases drastically the needed memory for self-attention. The maximum length is that which makes the model use up the whole memory of the GPU for at least one sentence (once the other elements of the model are also taken into account, like the embeddings which take a lot of memory). [Transformer-XL](https://openreview.net/forum?id=HJePno0cYm) is certainly a way to take into account as much context as possible in language modeling (its role is analogous to truncated back-propagation through time in LSTM language models). However, the gradients are not propagated through the attention over the memory segment, only through the current segment. There have been several architectural attempts to reduce the amount of memory needed by transformers, like using [locality-constraints in the attention](https://openreview.net/forum?id=SkVhlh09tX) (Dynamic Convolutions model) or using [locality-sensitive hashing](https://openreview.net/forum?id=rkgNKkHtvB) (Reformer model). There have been other implementation attempts, like gradient checkpointing(e.g. [this](https://qywu.github.io/2019/05/22/explore-gradient-checkpointing.html)), which is a general technique to run computations that don't fit at once in the GPU memory
120374
1
120379
null
1
218
Suppose GPT-2 or GPT-3 is trying to generate the next token, and it has a probability distribution (after applying softmax to some output logits) for the different possible next tokens. How does it choose what token to use in its textual output? The [GPT-2 paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) mentions top-k random sampling (citing "[Hierarchical Neural Story Generation](https://arxiv.org/pdf/1805.04833.pdf)") and never mentions beam search. The [GPT-3 paper](https://arxiv.org/pdf/2005.14165.pdf) mentions nucleus sampling (citing "[The Curious Case of Neural Text Degeneration](https://arxiv.org/pdf/1904.09751.pdf)") and mentions beam search (citing "[Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf)").
How do GPT models go from token probabilities to textual outputs?
CC BY-SA 4.0
null
2023-03-20T16:44:36.223
2023-03-22T09:50:21.613
null
null
126844
[ "machine-learning", "nlp", "gpt" ]
Any language model can generate text with different approaches: - Greedy decoding: you take the highest-probability token at each time step. - Sampling: the generated token is sampled from the probability distribution at each time step. - Temperature sampling: the generated token is sampled from the probability distribution after applying a temperature factor $\alpha$, which can either flatten the distribution or sharpen it. - Beam search: you keep the k most probable subsequences (i.e. the "beam"); when you finish decoding them, you output the most probable one. - Top-k sampling: you sample from the probability distribution, but only considering the top k most probable tokens. - Nucleus sampling: you sample from the probability distribution, but only considering the top-probability tokens that add up to a specific cumulative probability p. The OpenAI API allows selecting the following approaches for both the [completions](https://platform.openai.com/docs/api-reference/completions) and the [chat](https://platform.openai.com/docs/api-reference/chat) endpoints: - Temperature sampling, with the temperature parameter. - Nucleus sampling, with the top_p parameter. You can specify both of them, but OpenAI suggests using only one of them at a time. If you want to know the specific details of the implementation of these approaches, you can check [this post](https://huggingface.co/blog/how-to-generate) from the Huggingface blog.
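For illustration, here is a small NumPy sketch (the logits are made up) of how top-k and nucleus (top-p) sampling pick a token from a probability distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def top_k_sample(logits, k=3):
    probs = softmax(logits)
    top = np.argsort(probs)[-k:]            # indices of the k most probable tokens
    p = probs[top] / probs[top].sum()       # renormalize over the kept tokens
    return rng.choice(top, p=p)

def nucleus_sample(logits, p=0.9):
    probs = softmax(logits)
    order = np.argsort(probs)[::-1]         # tokens sorted by decreasing probability
    cumulative = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cumulative, p)) + 1]  # smallest set with mass >= p
    q = probs[keep] / probs[keep].sum()
    return rng.choice(keep, p=q)

logits = np.array([2.0, 1.0, 0.5, 0.1, -1.0])   # fake logits over a 5-token vocabulary
print(top_k_sample(logits, k=3), nucleus_sample(logits, p=0.9))
```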
Training Objective of language model for GPT3
This may be best understood with a bit more of context from the article: > A more fundamental limitation of the general approach described in this paper – scaling up any LM-like model, whether autoregressive or bidirectional – is that it may eventually run into (or could already be running into) the limits of the pretraining objective. Our current objective weights every token equally and lacks a notion of what is most important to predict and what is less important. [RRS20] demonstrate benefits of customizing prediction to entities of interest. I think that the relevant part of the reference [[RRS20]](https://www.aclweb.org/anthology/2020.emnlp-main.437.pdf) is this paragraph: > Recently, Guu et al.(2020) found that a “salient span masking” (SSM) pre-training objective produced substantially better results in open-domain question answering. This approach first uses BERT (Devlin et al., 2018) to mine sentences that contain salient spans (named entities and dates) from Wikipedia. The question answering model is then pre-trained to reconstruct masked-out spans from these sentences, which Guu et al. (2020) hypothesize helps the model “focus on problems that require world knowledge”. We experimented with using the same SSM data and objective to continue pretraining the T5 checkpoints for 100,000 additional steps before fine-tuning for question answering. With that context in mind, I understand that the sentence in the GPT-3 papers means that in normal language models, the predictions of every token has the same importance weight toward the computation of the loss, as the individual token losses are added together in an unweighted manner. This as opposed to the salient span masking approach, which finds tokens that are important to predict by means of a BERT-based preprocessing.
120394
1
120404
null
1
31
Given different long documents of the same type, e.g. a certain type of report, I need to identify certain items within the report, such as a certain item's amount, the name of a certain person, etc. How should I frame this problem in NLP? And what are the general approaches? I think the key challenge here is that the same type of information will appear in different parts of the document across documents. And the documents are 30-40 pages long.
How to identify certain term in a long document with NLP?
CC BY-SA 4.0
null
2023-03-22T00:35:29.613
2023-03-22T11:15:03.447
null
null
65053
[ "nlp" ]
To me this looks like Named Entity Recognition (NER), or more generally a sequence labeling problem. The typical approach is to train a custom NER model using a large sample of annotated data. - Pro: this is a very well-known problem, and there are multiple libraries which implement it. - Con: based on the description of your problem, it's not certain that annotating a large sample of documents is doable, especially if there are very few target terms.
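Before committing to a large annotation effort, a quick check like the following sketch (using a pretrained spaCy model; the sample text is made up) can show how far off-the-shelf NER already gets you on amounts, names and dates:

```python
# Requires `pip install spacy` and `python -m spacy download en_core_web_sm`.
import spacy

nlp = spacy.load("en_core_web_sm")

text = (
    "The total contract amount of $1,250,000 was approved by Jane Smith, "
    "acting director, on 12 March 2022."
)

doc = nlp(text)
for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. MONEY, PERSON and DATE entities

# For report-specific fields that the pretrained labels don't cover, you would
# annotate examples and train a custom NER component instead.
```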
What is considered short and long text in NLP (document similarity)
As Erwan said in the comments, it depends. In my experience, it depends specifically on two things: Tokenization method: The length of a document in number of tokens will vary considerably depending on how you split it up. Splitting your text into individual characters will result in a longer document than splitting it into sub-word units (e.g. WordPiece), which will still be longer than splitting on white space. Model: Vanishing gradients aside, an RNN doesn't care how long the input text is, it will just keep chugging along. Transformers, however, are limited. BERT can realistically handle sequences of up to 512 WordPiece units, while the LongFormer claims to handle sequences of up to 32k units (given sufficient compute resources). Thus your documents of 10 - 600 tokens would be long for BERT but short for the LongFormer. Whether you should treat documents of length 10 differently from those of length 600 is not something I can answer without knowing the details of your specific task. Intuitively, I doubt a very short document would ever be very similar to a much longer one, simply because it likely contains less content.
120459
1
120460
null
0
27
I want to make a TensorFlow model that, given features $x$ and labels $y$ such that $y_i = ax_i^2+bx_i+c$, fits the relationship reasonably well. ``` x = np.arange(-1000, 1000, 0.74) y = 1.3*x**2 + 5.3*x + 4 ``` Now, here is the model: ``` model = tf.keras.Sequential([ tf.keras.layers.Dense(64), tf.keras.layers.Dense(16), tf.keras.layers.Dense(1, activation="relu"), ]) model.compile(loss="mae", optimizer=tf.keras.optimizers.Adam(learning_rate = 0.01)) history = model.fit(tf.expand_dims(x, axis=-1), y, epochs = 100) ``` However, the model does not predict a parabola, but a straight line, as you see in the picture: [](https://i.stack.imgur.com/v1NnP.png) I've tried adding more layers and increasing or decreasing the learning rate, but nothing has any effect. How can I fix it? Thanks.
Quadratic regression with TensorFlow is not working
CC BY-SA 4.0
null
2023-03-24T14:29:13.350
2023-03-24T15:12:06.443
null
null
147868
[ "machine-learning", "tensorflow", "regression" ]
You did not specify any activation function in your dense layers. When you stack multiple linear layers without any activation function, the end result is equivalent to a single linear layer (see [this other answer](https://datascience.stackexchange.com/a/89427/14675) for the mathematical proof). Therefore, the functions your network can learn are basically linear, with a final rectification from the ReLU. You may add ReLU activations (or other activation, like tanh or sigmoid) to each of the intermediate layers to enable your network to model non-linear functions. I would like to point out that when someone says "quadratic regression", it usually means that you are doing linear regression with extra variables computed by squaring the other variables. This is not what you are doing. You are modelling the quadratic function as if it was a black box, without using your knowledge of the underlying process.
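Here is a minimal sketch of the fix described above, i.e. adding non-linear activations to the hidden layers and keeping a linear output for regression; the added input/target scaling is an extra practical tweak, not something required by the answer:

```python
import numpy as np
import tensorflow as tf

x = np.arange(-1000, 1000, 0.74)
y = 1.3 * x**2 + 5.3 * x + 4

# Scaling helps a lot with such large raw values (illustrative choice).
x_scaled = x / 1000.0
y_scaled = y / y.max()

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),   # non-linear hidden layers
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                       # linear output for regression
])
model.compile(loss="mae", optimizer=tf.keras.optimizers.Adam(learning_rate=0.01))
model.fit(tf.expand_dims(x_scaled, axis=-1), y_scaled, epochs=100, verbose=0)
```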
How to perform regression on image data using Tensorflow?
I think the issue is mostly with your network architecture. You are using only one convolutional layers and you are using all sigmoid activiations. Adding more convolutional layers, changing the activations from sigmoid to relu, and changing the optimizer to Adam gives me a loss below 5 after 30 epochs: ``` model = tf.keras.Sequential([ tf.keras.layers.Conv2D(3, 3, activation='relu'), tf.keras.layers.Conv2D(3, 3, activation='relu'), tf.keras.layers.MaxPooling2D(2), tf.keras.layers.Conv2D(3, 3, activation='relu'), tf.keras.layers.Conv2D(3, 3, activation='relu'), tf.keras.layers.MaxPooling2D(2), tf.keras.layers.Conv2D(3, 3, activation='relu'), tf.keras.layers.Conv2D(3, 3, activation='relu'), tf.keras.layers.Flatten(), tf.keras.layers.Dense(units=512, activation='relu'), tf.keras.layers.Dense(units=256, activation='relu'), tf.keras.layers.Dense(units=64, activation='relu'), tf.keras.layers.Dense(units=1) ]) model.compile(loss='mean_squared_error', optimizer="adam") history = model.fit(questions, solutions, epochs=30, batch_size=200, verbose=1) ``` Which gives the following training output for the last 5 epochs: ``` Epoch 25/30 50/50 [==============================] - 3s 54ms/step - loss: 4.8650 Epoch 26/30 50/50 [==============================] - 3s 57ms/step - loss: 5.5044 Epoch 27/30 50/50 [==============================] - 3s 56ms/step - loss: 6.0381 Epoch 28/30 50/50 [==============================] - 3s 55ms/step - loss: 4.7235 Epoch 29/30 50/50 [==============================] - 3s 54ms/step - loss: 4.4355 Epoch 30/30 50/50 [==============================] - 3s 55ms/step - loss: 4.1494 ```
120483
1
120489
null
0
52
I'm trying to follow a tutorial about TensorFlow in Python and computer vision. In this exercise I'm using a pre-trained model (InceptionV3) and some image augmentation on a dataset of humans vs horses. I have the `training` and `validation` folders with the datasets, and a `test` folder that I wanted to use to manually test the model on my images. When I call the `fit` method on the model, everything seems to work fine; as you can see from the prints on the terminal, it has a pretty high accuracy even on the validation dataset. However, when I try to manually predict some test images on my machine, something is off. I'm always getting 1 as the prediction for whatever image I ask the model to predict. I have also tried predicting the same images from the validation dataset, on which it should have 93% accuracy, but I'm always getting 1 (which is the class human). Can you help me understand what I'm doing wrong? This is my code and the output from the terminal (the list of the files is cropped but the result is 1 for every one of them) ``` import numpy as np import os import const import stopper from keras import Model from keras.applications.inception_v3 import InceptionV3, layers from keras_preprocessing.image import ImageDataGenerator from keras_preprocessing import image from keras.optimizers import RMSprop TRAINING_DIR: str = const.PROJECT_PATH + '/datasets/horse-or-human/training' TEST_DIR: str = const.PROJECT_PATH + '/datasets/horse-or-human/test' VALIDATION_DIR: str = const.PROJECT_PATH + '/datasets/horse-or-human/validation' IMG_DIM: int = 300 if __name__ == '__main__': training_generator = ImageDataGenerator( rescale=1. / 255, rotation_range=40, width_shift_range=0.2, height_shift_range=0.2, shear_range=0.2, zoom_range=0.2, horizontal_flip=True, fill_mode='nearest' ).flow_from_directory(TRAINING_DIR, target_size=(IMG_DIM, IMG_DIM), class_mode='binary') validation_generator = ImageDataGenerator(rescale=1 / 255) \ .flow_from_directory(VALIDATION_DIR, target_size=(IMG_DIM, IMG_DIM), class_mode='binary') # configure the pre-trained model pre_trained_model = InceptionV3(input_shape=(IMG_DIM, IMG_DIM, 3), include_top=False, weights=None) pre_trained_model.load_weights(f'{const.PROJECT_PATH}/pre-trained/inception_v3_weights.h5') for layer in pre_trained_model.layers: layer.trainable = False last_output = pre_trained_model.get_layer('mixed7').output x = layers.Flatten()(last_output) x = layers.Dense(1024, activation='relu')(x) x = layers.Dense(1, activation='sigmoid')(x) # attach the pre-trained model to your network model = Model(pre_trained_model.input, x) model.compile(optimizer=RMSprop(learning_rate=0.0001), loss='binary_crossentropy', metrics=['accuracy']) model.fit(training_generator, epochs=50, callbacks=[stopper.CheckStopTraining()], validation_data=validation_generator) # try to predict my test images for file_name in os.listdir(TEST_DIR): if not file_name.startswith("."): # Prevent to load hidden files that break the code img = os.path.join(TEST_DIR, file_name) img = image.load_img(img, target_size=(IMG_DIM, IMG_DIM)) img = image.img_to_array(img) img = np.expand_dims(img, axis=0) img = np.vstack([img]) prediction = model.predict(img) print(f'{file_name} is {prediction[0]}') ``` [](https://i.stack.imgur.com/dzwcn.png)
Python Tensorflow - Predict human vs horses images always same value
CC BY-SA 4.0
null
2023-03-25T22:21:43.717
2023-03-26T11:07:44.817
2023-03-25T22:23:35.997
148317
148317
[ "python", "tensorflow", "convolutional-neural-network" ]
I solved the issue. I was rescaling (dividing by 255) both the training and validation generators, but not the images that I was loading from the computer. I changed this line `img = image.img_to_array(img)` to this: `img = image.img_to_array(img) / 255` And now it is working. I hope it can help someone else.
predict gives the same output value for every image (Keras)
When all the predictions give exactly the same value, you know that your model is not learning and thus something is wrong! In your case the problem is having the last dense layer with both the softmax AND the sigmoid activation. ``` model.add(keras.layers.Dense(1, activation=tf.nn.softmax)) model.add(keras.layers.Activation('sigmoid')) ``` This creates a conflict where the softmax outputs a 1 (since there is only one node) and the sigmoid takes this 1; computing its sigmoid value gives: > 1/(1+exp(-1)) = 0.731058 And there is our friend! To solve this, you just need to remove the last activation layer and change the softmax for a sigmoid, since your output is binary: ``` model = keras.Sequential() model.add(keras.layers.Conv2D(16, [3,3], activation='relu', padding='same')) model.add(keras.layers.Conv2D(32, [3,3], activation='relu', padding='same')) model.add(keras.layers.Conv2D(64, [3,3], activation='relu', padding='same')) model.add(keras.layers.BatchNormalization()) model.add(keras.layers.Dropout(0.15)) model.add(keras.layers.Activation('relu')) model.add(keras.layers.Flatten()) model.add(keras.layers.Dense(50)) model.add(keras.layers.Dense(1, activation=tf.nn.sigmoid)) #model.add(keras.layers.Activation('sigmoid')) ``` This should work!
120488
1
120575
null
0
27
I've been coding YOLOv1 from scratch, are there any good papers which explain and/or give code (or pseudocode) for mean average precision? I searched but couldn't find good ones
research papers on mean average precision
CC BY-SA 4.0
null
2023-03-26T07:01:54.963
2023-04-02T13:18:20.100
null
null
142424
[ "deep-learning", "tensorflow", "object-detection" ]
Here is a paper which answers your question: [Beinan Wang - A PARALLEL IMPLEMENTATION OF COMPUTING MEAN AVERAGE PRECISION](https://arxiv.org/pdf/2206.09504v1.pdf). It has the implementation in pseudocode.
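For reference (this is not taken from that paper), a minimal sketch of single-class average precision via all-point interpolation, given detections already matched to ground truth, could look like this; mAP is then the mean of AP over classes:

```python
import numpy as np

def average_precision(scores, is_true_positive, n_ground_truth):
    """AP for one class. `scores` are detection confidences, `is_true_positive`
    flags whether each detection matched a ground-truth box (e.g. IoU >= 0.5),
    `n_ground_truth` is the total number of ground-truth boxes for the class."""
    order = np.argsort(scores)[::-1]                 # rank detections by confidence
    tp = np.asarray(is_true_positive, dtype=float)[order]
    fp = 1.0 - tp
    cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
    recall = cum_tp / max(n_ground_truth, 1)
    precision = cum_tp / (cum_tp + cum_fp)

    # all-point interpolation: make precision monotonically decreasing,
    # then integrate precision over recall
    recall = np.concatenate(([0.0], recall, [1.0]))
    precision = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(precision) - 2, -1, -1):
        precision[i] = max(precision[i], precision[i + 1])
    idx = np.where(recall[1:] != recall[:-1])[0]
    return float(np.sum((recall[idx + 1] - recall[idx]) * precision[idx + 1]))

# toy example: 5 detections for one class, 4 ground-truth boxes
scores = [0.9, 0.8, 0.7, 0.6, 0.5]
matched = [1, 1, 0, 1, 0]
print(average_precision(scores, matched, n_ground_truth=4))  # mAP = mean of AP over classes
```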
Why does my mean average precision metric show as 0.000e+00?
I would look into whether your loss function is correct. Mean squared error is a regression metric (and precision is a classification metric). Something like [categorical cross entropy](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/CategoricalCrossentropy) is probably better suited. Either way, as a sanity check, you can always run a model for, say, 10 epochs, then run predictions and calculate the precision manually (or with [sklearn's built-in method](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html)).