| Column | Dtype | Lengths / classes |
| --- | --- | --- |
| Id | stringlengths | 2–6 |
| PostTypeId | stringclasses | 1 value |
| AcceptedAnswerId | stringlengths | 2–6 |
| ParentId | stringclasses | 0 values |
| Score | stringlengths | 1–3 |
| ViewCount | stringlengths | 1–6 |
| Body | stringlengths | 34–27.1k |
| Title | stringlengths | 15–150 |
| ContentLicense | stringclasses | 2 values |
| FavoriteCount | stringclasses | 1 value |
| CreationDate | stringlengths | 23–23 |
| LastActivityDate | stringlengths | 23–23 |
| LastEditDate | stringlengths | 23–23 |
| LastEditorUserId | stringlengths | 2–6 |
| OwnerUserId | stringlengths | 2–6 |
| Tags | sequencelengths | 1–5 |
| Answer | stringlengths | 32–27.2k |
| SimilarQuestion | stringlengths | 15–150 |
| SimilarQuestionAnswer | stringlengths | 44–22.3k |
120500
1
120501
null
0
17
I currently have a dataset that consists of survey data with several columns whose answers depend on the previous question. For example, I may have a question that says "Did you take medication in the previous year for heart disease", coded as a binary variable, while the next column holds the answer to "If so, what kind?" as a multilevel categorical variable. My question is: is there a best practice for handling this kind of information? I have considered adding a new level to the categorical data that consists of "Not applicable", but currently the uncleaned data is coded as null. Leaving it as null seems incorrect, however, since the values would then look like data that is missing at random (MAR) when in fact they are structurally not applicable.
Best practice for variables that only have an answer if "yes" in the previous column
CC BY-SA 4.0
null
2023-03-26T19:50:45.493
2023-03-26T21:07:53.720
2023-03-26T20:01:01.460
116854
116854
[ "data-cleaning", "categorical-data", "missing-data" ]
Those two questions should be encoded together to capture the conditional relationship. For example, the single feature could have the following categorical levels: `no_medication`, `yes_medication_drug_a`, `yes_medication_drug_b`, `yes_medication_drug_other`.
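As a minimal pandas sketch of the combined encoding described in the answer above (the column names `took_medication` and `medication_type` are hypothetical stand-ins for the survey columns):

```
import numpy as np
import pandas as pd

# Toy survey data: the follow-up question is null whenever the first answer is "no".
df = pd.DataFrame({
    "took_medication": [0, 1, 1, 0, 1],
    "medication_type": [np.nan, "drug_a", "drug_b", np.nan, "other"],
})

# Collapse the two dependent questions into a single categorical feature,
# so "not applicable" becomes its own level instead of a null.
df["medication"] = np.where(
    df["took_medication"] == 0,
    "no_medication",
    "yes_medication_" + df["medication_type"].fillna("unknown"),
)
print(df["medication"].tolist())
```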
How to handle values that only appear once in a column?
Probably the best thing to do is use domain knowledge to relabel those into the larger categories. You may be able to replace domain knowledge with an imputation method: remove the rare labels, then fill the newly missing data using the other columns. Finally, the quickest sound idea, which you and Brian have both mentioned: just lump them into an "other" category. I wouldn't just drop them, or predicting on future examples outside the surviving categories will be harmed (both in that the model won't understand them, and that your package will have to know how to even pass them to the model).
120504
1
120510
null
1
587
I am a Data Engineer, and I am currently assigned a task to refactor an outdated codebase and rectify any bugs present. However, I am unable to comprehend the code in the existing codebase, and the developers who worked on it did not provide any documentation. Consequently, I am asking whether there is a feasible way to convert the entire codebase into an extensive text document. I would then like to use ChatGPT to translate the codebase into a comprehensive document (very long text, with the folder structure tree and the code inside src) that I can use for embedding. I do not require an in-depth explanation of the code; rather, I am seeking a more abstract-level understanding, such as the purpose of specific files, the functionality of particular folders, etc.
Can I use LLM to explain codebase?
CC BY-SA 4.0
null
2023-03-27T00:39:56.700
2023-03-27T08:51:54.313
null
null
139391
[ "nlp", "data-mining", "word-embeddings", "language-model", "gpt" ]
Sure, many people have done that. You can also ask it to add comments or try to find bugs. Just take into account that LLMs are known for generating bullshit, so the explanations could be mere fabrications and the generated code may not work (in evident or subtle ways). I myself have tried chatGPT for generating code, but I had to iterate a few times until I got it working. I suggest you prepare some unit tests and integration tests to ensure that everything is working as before chatGPT's suggested changes. Take into account that the amount of text/code an LLM can use as context is not that large, so you may need to ask multiple times regarding different parts of the code base. There may also be privacy concerns regarding the fact that you are basically sending the source code of your company to a third party, which is something many employers would frown upon.
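A rough sketch of the "codebase to text document" step described above, assuming plain text source files under a `./src` folder; the file extensions and the 8,000-character chunk budget are arbitrary placeholders you would tune to your model's context window:

```
import os

def dump_codebase(root, extensions=(".py", ".sql", ".md"), max_chars=8000):
    """Walk `root` and yield text chunks (relative path headers plus file contents)
    small enough to paste into an LLM prompt one at a time."""
    chunk, size = [], 0
    for dirpath, _, filenames in os.walk(root):
        for name in sorted(filenames):
            if not name.endswith(extensions):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                block = f"\n### File: {os.path.relpath(path, root)}\n{f.read()}\n"
            if size + len(block) > max_chars and chunk:
                yield "".join(chunk)   # emit the current chunk and start a new one
                chunk, size = [], 0
            chunk.append(block)
            size += len(block)
    if chunk:
        yield "".join(chunk)

# Each chunk can then be sent with a prompt such as
# "Summarise the purpose of each file at a high, abstract level."
for i, chunk in enumerate(dump_codebase("./src")):
    print(f"--- chunk {i}: {len(chunk)} characters ---")
```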
Flow of machine learning model including code
The purpose of a machine learning model is to make predictions on real-world data that isn't known at model training time. As such, it's best practice to always do a train-test split at the very beginning of any project, and only use the training data for training the model. The test data should not be used at all until your model is fully trained. To add to this, when tuning the model's hyperparameters there is an additional subset of the training data used for validation, which is not used for training but for evaluating performance during training. You create train-test splits of your input data, run through all of your models, and use your aggregate cross-validation score to choose one or two models to concentrate on improving. Based on your results, it looks like logistic regression is getting the highest score, and is probably a good fit for this type of problem – predicting whether an instance of the data is a member of the target or not ("stroke" or "not stroke"). Once this is done, you can tune your model's hyperparameters (using GridSearch like you're doing, for example) to determine the best parameters for things like regularization (the "C" parameter). Then, and only then, when you have selected your model, tuned the hyperparameters, and trained on your training data only, do you evaluate performance on your test data. For the evaluation, it's good to understand the performance of your model and what it represents; that's what your metrics at the end are for. Precision is the percentage of true positives over true positives plus false positives, and recall is true positives over true positives plus false negatives. The F1 score is the harmonic mean of these two values, and the ROC curve shows the performance of the model at different classification thresholds. If the purpose of the model is to predict strokes, do you want a higher recall, which would mean you detect more potential strokes at the risk of more false positives? Or a higher precision, which would mean the instances classified as high risk of stroke are more likely to truly be high risk, but at the cost of potentially missing some? Hth,
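A compact scikit-learn sketch of the workflow described above (split first, cross-validate and tune on the training data only, evaluate on the test set last); the synthetic dataset stands in for the stroke data, and the parameter grid is illustrative:

```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic, imbalanced stand-in for the stroke data.
X, y = make_classification(n_samples=1000, n_features=10, weights=[0.85], random_state=0)

# 1. Split first; the test set stays untouched until the very end.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

# 2. Tune hyperparameters (e.g. the regularization strength C) with
#    cross-validation on the training data only.
grid = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.01, 0.1, 1, 10]}, cv=5, scoring="f1")
grid.fit(X_train, y_train)

# 3. Only now evaluate the selected, tuned model on the held-out test data.
print(grid.best_params_)
print(classification_report(y_test, grid.predict(X_test)))
```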
120543
1
120544
null
0
29
my training code: ``` import torch from torch.utils.data import DataLoader from torch.utils.tensorboard import SummaryWriter from torchvision import datasets, transforms from CNN import CNNmodel SEED = 5 device = "cuda" if torch.cuda.is_available() else "cpu" BATCH_SIZE = 16 torch.manual_seed(SEED) torch.cuda.manual_seed(SEED) train_transform = transforms.Compose([ transforms.TrivialAugmentWide(num_magnitude_bins=8), transforms.ToTensor() ]) test_transform = transforms.Compose([ transforms.ToTensor() ]) train_data = datasets.MNIST( root="data", train=True, download=True, transform=train_transform ) test_data = datasets.MNIST( root="data", train=False, download=True, transform=test_transform ) train_dataloader = DataLoader( train_data, batch_size=BATCH_SIZE, shuffle=True ) test_dataloader = DataLoader( test_data, batch_size=BATCH_SIZE, shuffle=False ) channel_num = train_data[0][0].shape[0] model = CNNmodel(in_shape=channel_num, hidden_shape=16, out_shape=len(train_data.classes)).to(device) optimizer = torch.optim.SGD(params=model.parameters(), lr=0.01) loss_fn = torch.nn.CrossEntropyLoss() epochs = 20 writer = SummaryWriter(log_dir="runs\\CNN_MNIST") for epoch in range(epochs): train_loss = 0 train_acc = 0 model.train() for batch, (X, y) in enumerate(train_dataloader): X, y = X.to(device), y.to(device) X = torch.reshape(X, (BATCH_SIZE, channel_num, 28, 28)) y_pred = model(X) loss = loss_fn(y_pred, y) train_loss += loss.item() optimizer.zero_grad() loss.backward() optimizer.step() y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1) train_acc += (y_pred_class == y).sum().item()/len(y_pred) train_loss /= len(train_dataloader) train_acc /= len(train_dataloader) test_loss = 0 test_acc = 0 model.eval() with torch.inference_mode(): for batch, (X, y) in enumerate(test_dataloader): X, y = X.to(device), y.to(device) y_pred = model(X) loss = loss_fn(y_pred, y) test_loss += loss.item() y_pred_class = torch.argmax(torch.softmax(y_pred, dim=1), dim=1) test_acc += (y_pred_class == y).sum().item()/len(y_pred) test_loss /= len(test_dataloader) test_acc /= len(test_dataloader) writer.add_scalars( main_tag="Loss", tag_scalar_dict={"train_loss": train_loss, "test_loss": test_loss }, global_step=epoch ) writer.add_scalars( main_tag="Accuracy", tag_scalar_dict={"train_acc": train_acc, "test_acc": test_acc }, global_step=epoch ) writer.close() torch.cuda.empty_cache() print(f"epoch={epoch}, train loss={train_loss}, train acc={train_acc}, test loss={test_loss}, test acc={test_acc}\n") torch.save(model.state_dict(), f="CNN.pth") ``` my results: epoch=0, train loss=1.2992823556999364, train acc=0.5814166666666667, test loss=0.1535775218948722, test acc=0.9617 epoch=1, train loss=0.7351227536817392, train acc=0.7735333333333333, test loss=0.0957084314838983, test acc=0.9711 epoch=2, train loss=0.6108905077829957, train acc=0.8069666666666667, test loss=0.10527049974631518, test acc=0.968 epoch=3, train loss=0.5531635082634787, train acc=0.8209333333333333, test loss=0.09478655792670325, test acc=0.9719 epoch=4, train loss=0.5146081379964947, train acc=0.8315666666666667, test loss=0.10086005784235895, test acc=0.9717 epoch=5, train loss=0.48089857985948525, train acc=0.8415166666666667, test loss=0.07805026951334439, test acc=0.9755 epoch=6, train loss=0.46410337663746126, train acc=0.8458, test loss=0.06370123700092081, test acc=0.979 epoch=7, train loss=0.45169676643597584, train acc=0.8508333333333333, test loss=0.06549387282291427, test acc=0.9784 epoch=8, train loss=0.4308121643635134, train 
acc=0.8575, test loss=0.07395816469893325, test acc=0.9764 epoch=9, train loss=0.42585810295939447, train acc=0.8576166666666667, test loss=0.060803520213114096, test acc=0.9809 epoch=10, train loss=0.412179026115189, train acc=0.8625, test loss=0.05902050706697628, test acc=0.9811 epoch=11, train loss=0.4062708326317991, train acc=0.8628666666666667, test loss=0.05916510981819592, test acc=0.982 epoch=12, train loss=0.3950844133876264, train acc=0.8676666666666667, test loss=0.051657470285263844, test acc=0.9839 epoch=13, train loss=0.3960405339717865, train acc=0.8668666666666667, test loss=0.05090424774668645, test acc=0.9838 epoch=14, train loss=0.3826637831449664, train acc=0.8697333333333334, test loss=0.049632979356194845, test acc=0.9839 epoch=15, train loss=0.38186972920044016, train acc=0.87205, test loss=0.05163152083947789, test acc=0.9828 epoch=16, train loss=0.37976737998841953, train acc=0.8736166666666667, test loss=0.054158556177618444, test acc=0.9823 epoch=17, train loss=0.3711047379902874, train acc=0.8751333333333333, test loss=0.055461415114835835, test acc=0.9816 epoch=18, train loss=0.369529847216544, train acc=0.87475, test loss=0.046305917761620366, test acc=0.9861 epoch=19, train loss=0.3628049560392275, train acc=0.8773833333333333, test loss=0.05091290192245506, test acc=0.9846
Is it good if during training my test accuracy is much higher than my train accuracy? How can I prevent this?
CC BY-SA 4.0
null
2023-03-28T15:22:25.120
2023-03-28T15:43:32.943
null
null
148358
[ "machine-learning", "pytorch" ]
Such a large gap suggests a problem with your test data. Maybe there is a data leak (i.e. part of your test data is leaked directly or indirectly into the training data).
Testing accuracy is higher than training accuracy
Your testing dataset is strongly imbalanced. You have 82 samples in the positive class and only 3 samples in the negative class. By simply guessing "everything positive" your model would achieve 96.5% accuracy. This is a common problem with unbalanced datasets. I don't know what your data is exactly, so it is difficult to make a precise suggestion as to what you should change, but calculating the [Balanced Accuracy](https://en.wikipedia.org/wiki/Confusion_matrix#Table_of_confusion), which is the accuracy of the individual classes weighted equally instead of by their contribution, might be a good start. Evaluating your model's performance based on precision and recall might be a good option, too. I might add, however, that just 3 samples in the negative class are probably too few to draw a reliable conclusion about the performance of your model anyway.
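A small sketch of the point about accuracy versus balanced accuracy, using the 82/3 class counts from the answer above:

```
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

# 82 positive and only 3 negative test samples, as in the example above.
y_true = np.array([1] * 82 + [0] * 3)
# A "model" that simply guesses everything positive.
y_pred = np.ones_like(y_true)

print(accuracy_score(y_true, y_pred))           # ~0.965 despite learning nothing
print(balanced_accuracy_score(y_true, y_pred))  # 0.5, i.e. no better than chance
```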
120547
1
120548
null
0
45
I am building a model for the purpose of forcasting when someone is going into a stressful state. I am using the [WESAD](https://archive.ics.uci.edu/ml/datasets/WESAD+%28Wearable+Stress+and+Affect+Detection%29) dataset which has electrodermal activity (EDA) data on 11 subjects. I take this and use Neurokit2 to clean and extract features from the raw EDA data. The end result is that I have a list that stores each subject in the original dataset with 3 features and 1 label. The label is binary [0,1] and the features are normalized. I only have experience running a timeseries model using a single factor and single subject. How would I correctly do the train-test split for multiple features on multiple subjects? Below is my code to create data generators for neural networks on one feature and one subject. Should I loop through each subject and do the same process as below? If I do as I suggest, how would I put this into a LSTM model? ``` from keras.preprocessing.sequence import TimeseriesGenerator # Define the batch size batch_size = 64 # Define the number of features and targets num_features = 1 num_targets = 1 # Random State random_state = 42 # Train Test Split from sklearn.model_selection import train_test_split # Validation split X_dat, X_val, y_dat, y_val = train_test_split(subsampled_data, delayed_labels, test_size = 0.2, random_state=random_state) # Train test split X_train, X_test, y_train, y_test = train_test_split(X_dat, y_dat, test_size = 0.2, random_state = random_state) # Normalize the data from sklearn.preprocessing import StandardScaler # create the StandardScaler object scaler = StandardScaler() # fit the scaler on the training data X_train_scaled = scaler.fit_transform(X_train.values.reshape(-1,1)) # transform the validation data X_val_scaled = scaler.transform(X_val.values.reshape(-1,1)) # transform the test data X_test_scaled = scaler.transform(X_test.values.reshape(-1,1)) # TimeSeriesGenerator parameters shuffle = True # Data Generator train_data_gen = TimeseriesGenerator(X_train_scaled, y_train, length=sequence_length, batch_size=batch_size) val_data_gen = TimeseriesGenerator(X_val_scaled, y_val, length=sequence_length, batch_size=batch_size) test_data_gen = TimeseriesGenerator(X_test_scaled, y_test, length=sequence_length, batch_size=batch_size) ```
How to make Train-Test split on multivariate timeseries data
CC BY-SA 4.0
null
2023-03-28T17:26:12.867
2023-03-28T18:44:54.530
2023-03-28T18:08:26.147
2727
2727
[ "tensorflow", "time-series" ]
Scikit-learn's [model_selection.TimeSeriesSplit](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.TimeSeriesSplit.html) is designed to appropriately split time series data. The result will include indices that can be used to reference the features, no matter how many features there are.
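A minimal sketch of how `TimeSeriesSplit` handles any number of feature columns; the array shapes are arbitrary:

```
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# 20 time steps, 3 features (any number of feature columns works the same way).
X = np.random.randn(20, 3)
y = np.random.randn(20)

tscv = TimeSeriesSplit(n_splits=4)
for fold, (train_idx, test_idx) in enumerate(tscv.split(X)):
    # Training indices always precede test indices, so the model never sees the future.
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    print(f"fold {fold}: train up to t={train_idx[-1]}, test t={test_idx[0]}..{test_idx[-1]}")
```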
Train-Test split for Time Series Data to be used for LSTM
the syntax `arr[:,:-1]` selects all rows and every column except the last one. Python can use negative indexing, but it's inclusive-exclusive such as `[a,b)`: inclusive of `a`, exclusive of `b`. If you don't use the `:` operator, such as `arr[:,-1]`, then it simply selects the entire last column. So in the context of your example, the last column is the value to be regressed/classified/etc according to the previous columns training data. ``` >>> import numpy as np >>> arr = np.random.randn(5,5) >>> print(arr) [[-0.86690967 -0.63959234 0.99754053 -0.24828822 0.5346927 ] [ 0.6174709 2.16558841 -1.28983554 1.15387215 0.64630439] [ 0.35104248 -0.54240157 0.80377977 -0.9447689 -0.08145433] [ 0.61195442 0.09407687 0.39065215 -0.8887228 -1.63845254] [-1.58212796 -0.46017275 -0.2065184 0.44879872 -0.95037541]] >>> print(arr[:,:-1]) [[-0.86690967 -0.63959234 0.99754053 -0.24828822] [ 0.6174709 2.16558841 -1.28983554 1.15387215] [ 0.35104248 -0.54240157 0.80377977 -0.9447689 ] [ 0.61195442 0.09407687 0.39065215 -0.8887228 ] [-1.58212796 -0.46017275 -0.2065184 0.44879872]] >>> print(arr[:,-1]) [ 0.5346927 0.64630439 -0.08145433 -1.63845254 -0.95037541] ``` Notice that the final print is actually the last column of `arr`, but because it is a 1D array, it appears as a row-vector rather than column-vector.
120553
1
120558
null
1
86
I've recently read that StandardScaler works best in situations where the distribution of the features is approximately normal, while MinMaxScaler preserves the features' original shape. Both of them are sensitive to outliers, as sklearn itself states. But I can't seem to get RobustScaler. I've read people saying that it reduces the effect of outliers in the distribution, so if one considers that outliers shouldn't have an effect on the data, one should use RobustScaler. But I don't think that makes much sense, because if one thinks outliers shouldn't impact the data, then it would make more sense to remove them before performing the scaling. I've also read people saying that it doesn't reduce the effect of the outlier, but rather doesn't let the distribution get distorted the way MinMaxScaler and StandardScaler do. Therefore, I'm having a hard time understanding the situations in which it makes sense to use the different types of scalers, especially when it comes to RobustScaler: should I use it when I have outliers, or when I want to disregard the effect of those outliers on the data?
StandardScaler and MinMaxScaler vs RobustScaler
CC BY-SA 4.0
null
2023-03-29T03:08:01.717
2023-03-29T06:32:08.350
null
null
148415
[ "scikit-learn", "feature-scaling" ]
Extreme values will usually cause problems with many methods/models, but that does not mean you should remove them. If your model does not fit your data, change your model, not your data. Extreme values will impact the mean/SD or min/max of the data, which will then have an effect on data normalization. An extremely high value will cause your variable to be in [0,1] after min/max normalization, but most of your data will be close to 0, so it will have an asymmetric distribution compared to other similar variables without extremes. Likewise, an extremely high value will cause the SD to be inflated, and this variable will be much more narrowly distributed around 0 after standardization, even though the total SD will be 1. Robust normalization, which uses the median and a robust measure of spread (scikit-learn's RobustScaler scales by the interquartile range; the median absolute deviation is another option), will make sure that your variables are much more comparable with each other, that is, that most of the data will have its mass at roughly the same place, while the extreme values will still be extreme. Ideally these extreme values won't affect the normalization parameters and distort the distribution of all the data, and this is what robust normalization tries to achieve.
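A quick sketch of the effect described above, comparing the three scikit-learn scalers on a toy feature with one extreme value:

```
import numpy as np
from sklearn.preprocessing import MinMaxScaler, RobustScaler, StandardScaler

# One feature whose last value is an extreme outlier.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 100.0]).reshape(-1, 1)

for scaler in (MinMaxScaler(), StandardScaler(), RobustScaler()):
    scaled = scaler.fit_transform(x).ravel()
    # MinMax/Standard scaling squash the bulk of the data near 0 because the outlier
    # inflates the range/SD; RobustScaler keeps the bulk spread out and the outlier extreme.
    print(type(scaler).__name__, np.round(scaled, 2))
```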
What is "data scaling" regarding StandardScaler()?
I will use the k-Nearest Neighbors algorithm to explain why we must do scaling as a preprocessing step for most machine learning algorithms. Let's say you are trying to predict whether a transaction is fraudulent or not, that is, you have a classification problem and only two features: value of the transaction and time of the day. The two variables have different magnitudes: transactions can vary from 0 to 100000000 (it is just an example), while time of the day goes from 0 to 24 (let's use only hours). So, while computing the nearest neighbor using Euclidean distance, we will do ``` distance_class = sqrt( (new_value_transaction - old_value_transaction)**2 + (new_time_of_day - old_time_of_day)**2 ) ``` where "old" refers to our train data and "new" is a new transaction whose class we want to predict. Now you can see that the transaction value will have a huge impact, for example, ``` new_value_transaction = $100 new_time_of_day = 10 old_value_transaction = $150 old_time_of_day = 11 class_distance = sqrt(($50)**2 + (1)**2) ``` Now, you have no indication that transaction value is more important than time of the day; that is why we scale our data. Among the alternatives we have many options, such as MinMaxScaler, StandardScaler, RobustScaler, etc. Each of them will treat the problem differently. To be honest? Always try at least two of them to compare results. You can read more in the sklearn documentation: [https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing](https://scikit-learn.org/stable/modules/preprocessing.html#preprocessing) I hope you got a feeling for why we should use standardization techniques. Let me know if you have any further questions. To complement, here is a visual explanation of what I explained above. [](https://i.stack.imgur.com/myMkK.png) Credit: [https://www.youtube.com/watch?v=d80UD99d4-M&list=PLpQWTe-45nxL3bhyAJMEs90KF_gZmuqtm&index=10](https://www.youtube.com/watch?v=d80UD99d4-M&list=PLpQWTe-45nxL3bhyAJMEs90KF_gZmuqtm&index=10) In the video they give a better explanation. Also, I highly recommend this course, the guys are amazing.
120574
1
120580
null
0
152
I've noticed that UMAP is often used in combination with other clustering algorithms, such as K-means, DBSCAN, or HDBSCAN. However, from what I've understood, UMAP can itself be used for clustering tasks. So why do people use it primarily as a dimensionality reduction technique? Here is an example of what I'm talking about: [https://medium.com/grabngoinfo/topic-modeling-with-deep-learning-using-python-bertopic-cf91f5676504](https://medium.com/grabngoinfo/topic-modeling-with-deep-learning-using-python-bertopic-cf91f5676504) Am I getting something wrong? Can UMAP be used for clustering tasks alone? What are the benefits of using it in combination with other clustering algorithms?
Why is UMAP used in combination with other clustering algorithms?
CC BY-SA 4.0
null
2023-03-29T20:36:47.753
2023-03-30T07:42:51.290
null
null
148448
[ "clustering", "unsupervised-learning", "dbscan", "umap" ]
As mentioned by Noe, UMAP is purely for dimensionality reduction: it ONLY groups similar data and separates different data; THEN we apply a clustering algorithm such as DBSCAN to identify those groups in the projected space. The dimensionality reduction is the most difficult part, because you need to detect similarities and differences among many variables and then project them into a lower-dimensional space, generally 2D. PCA was one of the first dimensionality reduction algorithms (even if, strictly speaking, it is a projection method rather than a dedicated dimensionality reduction one), but it has a problem: it is linear. UMAP is a non-linear reduction algorithm, meaning it can detect complex correlations between features. To do so, it uses [Riemannian manifolds](https://en.wikipedia.org/wiki/Riemannian_manifold), which are useful for representing complex, non-linear geometries that are difficult to capture with simple Euclidean distance measures. UMAP constructs a weighted graph representation of the data, where the weights between each pair of data points are determined by a distance measure that considers the geometry of the Riemannian manifold. This graph is then optimized using a gradient-based algorithm to produce a low-dimensional embedding that preserves the high-dimensional geometric structure of the data. More info: [https://pair-code.github.io/understanding-umap/](https://pair-code.github.io/understanding-umap/)
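A minimal sketch of the reduce-then-cluster pattern described above, assuming the `umap-learn` package is installed; the digits dataset and the DBSCAN parameters are only illustrative:

```
import numpy as np
import umap  # from the umap-learn package
from sklearn.cluster import DBSCAN
from sklearn.datasets import load_digits

X = load_digits().data  # 64-dimensional data

# Step 1: non-linear dimensionality reduction with UMAP.
# min_dist=0.0 packs similar points tightly, which tends to help density-based clustering.
embedding = umap.UMAP(n_components=2, n_neighbors=15, min_dist=0.0, random_state=42).fit_transform(X)

# Step 2: identify the groups in the projected space with a clustering algorithm.
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(embedding)
print(np.unique(labels))  # cluster ids, with -1 marking points treated as noise
```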
What is clustering used for?
Labeling data is not always an easy task. There are occasions where the data at hand does not have labels and you need to build a model using it anyway. You have to find the similarities and differences in your input data, and clustering approaches try to find these similarities and differences to group similar data. They are also used as a pre-processing step before doing supervised classification. In cases where the input data does not have any labels, employing clustering approaches can be a way to label the data and use it for training supervised models.
120588
1
120624
null
1
24
I have an "image" of NxN dimensions in m channels (for reference, m is less than 17) in my training set and validation set. I would like to compare images in the training set with those in the validation set and get a similarity index for each image (which I can then try to turn into some sort of statistical measure for the similarity for the whole set). I saw that there is imagehash tool for comparing RGB images but I was wondering if there is a package for comparing images in m channels? Thanks in advance!
Comparing images in N channels
CC BY-SA 4.0
null
2023-03-30T13:58:16.387
2023-04-03T20:12:25.903
2023-03-30T15:52:50.447
148064
148064
[ "machine-learning", "dataset", "data", "similarity", "image-size" ]
If the image can be loaded as a PIL instance, then Python's [imagehash](https://github.com/JohannesBuchner/imagehash/blob/master/examples/hashimages.py) library will be able to perform similarity-based hashing.
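One possible workaround for m-channel arrays, sketched below, is to hash each channel separately as a grayscale PIL image and average the per-channel Hamming distances; this is not a built-in feature of imagehash, just an adaptation of it under that assumption:

```
import numpy as np
from PIL import Image
import imagehash

def multichannel_distance(img_a, img_b):
    """Average per-channel Hamming distance between two (N, N, m) uint8 arrays."""
    dists = []
    for c in range(img_a.shape[-1]):
        h_a = imagehash.average_hash(Image.fromarray(img_a[..., c]))  # channel as grayscale
        h_b = imagehash.average_hash(Image.fromarray(img_b[..., c]))
        dists.append(h_a - h_b)  # hash subtraction gives the Hamming distance
    return float(np.mean(dists))

# Two random 32x32 "images" with 16 channels.
a = np.random.randint(0, 256, (32, 32, 16), dtype=np.uint8)
b = np.random.randint(0, 256, (32, 32, 16), dtype=np.uint8)
print(multichannel_distance(a, a))  # 0.0 for identical images
print(multichannel_distance(a, b))  # larger for dissimilar images
```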
Training images with multiple channels
I don't know what kind of image you have, 16 channels?! oh boy :) Anyway, if they are images, the first one is better. The reason is that in the second approach you are essentially unrolling the input signal, and by doing so you remove the locality information, the information carried by near, adjacent inputs. Convolutional neural nets attempt to find exactly these kinds of features. As an example, consider the `MNIST` dataset. You can learn it with a CNN or an MLP, but the former is used because CNNs care about patterns that are replicated in different parts of the input. If they are not images but you know that adjacent pixels or inputs are related, you should again exploit CNNs. Keep in mind that the convolutional layers in CNNs are for extracting appropriate features, while the classification task is done by the dense layers. About efficiency, consider two points. Graphics cards are SIMD computers, which stands for single instruction, multiple data. Matrix operations are done very efficiently on GPUs, as the name of graphics cards implies; consequently, dense layers are very fast on GPUs compared to CPUs. The other point is about parallel programming: each filter in a convolutional layer is independent, so they can be applied using parallelized instructions, and appropriate GPUs are very good at this too. I know I've said two things, but there is actually a third thing to keep in mind. Forget all of the above-mentioned points! The most important thing to consider is the memory and the bus. There are situations where you don't have a graphics card with more than 6 GB of memory. In those situations, I really prefer the latest generations of CPUs over GPUs, because you have to deal with the memory limitations. In your case, consider that if you use dense layers right after the inputs, the number of parameters will be astronomical, even though you are supposed to use CNNs.
120589
1
120590
null
0
25
Suppose I have a feature vector composed of 784 numbers, and I want to use it as the input of a neural network implemented from scratch whose first layer has 64 neurons. How can I put 784 numbers into 64 neurons?
How to fit n features in a number of neurons smaller than n
CC BY-SA 4.0
null
2023-03-30T15:31:14.303
2023-03-30T15:38:18.830
null
null
147868
[ "neural-network", "classification", "features" ]
To reduce the number of features from the raw data to 64 input neurons, you can perform [dimensionality reduction](https://en.wikipedia.org/wiki/Dimensionality_reduction). There are various methods to do this; here are some common ones that are easy to implement. - Principal Component Analysis - Autoencoders (basically another neural network that you can train unsupervised) - Random projections (make a 784x64 random matrix and multiply; a minimal sketch is shown below)
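A minimal sketch of the random-projection option (the shapes follow the 784-to-64 example in the question; scaling by sqrt(784) is a common but optional convention):

```
import numpy as np

rng = np.random.default_rng(0)

# 100 samples with 784 raw features each.
X = rng.normal(size=(100, 784))

# Random projection: a fixed 784x64 matrix maps each sample to 64 values,
# which can then be fed to the 64-neuron input layer.
W = rng.normal(size=(784, 64)) / np.sqrt(784)
X_reduced = X @ W
print(X_reduced.shape)  # (100, 64)
```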
Is there a maximum limit to the number of features in a Neural Network?
It highly depends on your data. If it's image data, I guess it is somewhat logical, but if not, I recommend constructing the covariance matrix and tracking whether features are correlated or not. If you see that many features are correlated, it is better to discard the correlated features; you can also employ `PCA` to do this. Correlated features cause a larger number of parameters for the neural network. I also have to say that, if your inputs are images, you may be able to reduce the number of parameters by resizing them. In popular nets the width and height of input images are usually less than three hundred, which makes the number of input features about `90000`. You can also employ max-pooling after some convolution layers, if you are using convolutional nets, to reduce the number of parameters. Refer [here](https://datascience.stackexchange.com/a/26413/28175), which may be helpful.
120601
1
120602
null
3
231
There are various sources on the internet that claim that BERT has a fixed input size of 512 tokens (e.g. [this](https://datascience.stackexchange.com/q/89684/141432), [this](https://stackoverflow.com/q/58636587/9352077), [this](https://www.saltdatalabs.com/blog/bert-how-to-handle-long-documents), [this](https://datascience.stackexchange.com/q/113489/141432) ...). This magical number also appears in the BERT paper ([Devlin et al. 2019](https://arxiv.org/pdf/1810.04805.pdf)), the RoBERTa paper ([Liu et al. 2019](https://arxiv.org/pdf/1907.11692.pdf)) and the SpanBERT paper ([Joshi et al. 2020](https://www.cs.princeton.edu/%7Edanqic/papers/tacl2020.pdf)). The going wisdom has always seemed to me that when NLP transitioned from recurrent models (RNN/LSTM Seq2Seq, Bahdanau ...) to transformers, we traded variable-length input for fixed-length input that required padding for shorter sequences and could not extend beyond 512 tokens (or whatever other magical number you want to assign your model). However, come to think of it, all the parameters in a transformer (Vaswani et al. 2017) work on a token-by-token basis: the weight matrices in the attention heads and the FFNNs are applied tokenwise, and hence their parameters are independent of the input size. Am I correct that a transformer (encoder-decoder, BERT, GPT ...) can take in an arbitrary amount of tokens even with fixed parameters, i.e., the amount of parameters it needs to train is independent of the input size? I understand that memory and/or time will become an issue for large input lengths since attention is O(n²). This is, however, a limitation of our machines and not of our models. Compare this to an LSTM, which can be run on any sequence but compresses its information into a fixed hidden state and hence blurs all information eventually. If the above claim is correct, then I wonder: What role does input length play during pre-training of a transformer, given infinite time/memory? Intuitively, the learnt embedding matrix and weights must somehow be different if you were to train with extremely large contexts, and I wonder if this would have a positive or a negative impact. In an LSTM, it has negative impact, but a transformer doesn't have its information bottleneck.
Do transformers (e.g. BERT) have an unlimited input size?
CC BY-SA 4.0
null
2023-03-31T09:47:49.863
2023-03-31T11:17:17.930
null
null
141432
[ "machine-learning", "nlp", "transformer", "bert", "hyperparameter" ]
You are right that a transformer can take in an arbitrary amount of tokens even with fixed parameters, excluding the positional embedding matrix, whose size directly grows with the maximum allowed input length. Apart from memory requirements (O(n²)), the problem transformers have regarding input length is that they don't have any notion of token ordering. This is why positional encodings are used. They introduce ordering information into the model. This, however, implies that the model needs to learn to interpret such information (precomputed positional encodings) and also learn such information (trainable positional encodings). The consequence of this is that, during training, the model should see sequences that are as long as those at inference time because for precomputed positional encodings it may not correctly handle the unseen positional information and for learned positional encodings the model simply hasn't learned to represent them. In summary, the restriction in the input length is driven by: - Restrictions in memory: the longer the allowed input, the more memory is needed (quadratically), which doesn't play well with limited-memory devices. - Need to train with sequences of the same length as the inference input due to the positional embeddings. If we eliminate those two factors (i.e. infinite memory and infinite-length training data), you could set the size of the positional embeddings to an arbitrarily large number, hence allowing arbitrarily long input sequences. Note, however, that due to the presence of the positional embeddings, there will always be a limit in the sequence length (however large it may be) that needs to be defined in advance to determine the size of the embedding matrix.
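A stripped-down PyTorch-style sketch of the point about positional embeddings fixing the maximum input length (BERT-like sizes assumed; segment embeddings, layer norm, etc. omitted):

```
import torch
import torch.nn as nn

d_model, vocab_size, max_len = 768, 30522, 512

token_emb = nn.Embedding(vocab_size, d_model)  # size independent of sequence length
pos_emb = nn.Embedding(max_len, d_model)       # this table is what fixes the 512-token limit

def embed(token_ids):
    """token_ids: (batch, seq_len); fails if seq_len exceeds max_len."""
    positions = torch.arange(token_ids.size(1), device=token_ids.device)
    return token_emb(token_ids) + pos_emb(positions)

x = torch.randint(0, vocab_size, (2, 128))
print(embed(x).shape)  # torch.Size([2, 128, 768])
```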
Variable input/output length for Transformer
Your understanding is not correct: in the encoder-decoder attention, the Keys and Values come from the encoder (i.e. source sequence length) while the Query comes from the decoder itself (i.e. target sequence length). The Query is what determines the output sequence length, therefore we obtain a sequence of the correct length (i.e. target sequence length). --- In order to understand how the attention block works, maybe this analogy helps: think of the attention block as a Python dictionary, e.g. ``` keys = ['a', 'b', 'c'] values = [2, 7, 1] attention = {keys[0]: values[0], keys[1]: values[1], keys[2]: values[2]} queries = ['c', 'a'] result = [attention[queries[0]], attention[queries[1]]] ``` In the code above, `result` should have value `[1, 2]`. The attention from the transformer works in a similar way, but instead of having hard matches, it has soft matches: it gives you a combination of the values, weighting them according to how similar their associated key is to the query. While the number of values and keys has to match, the number of queries is independent.
120618
1
120644
null
0
42
I want to do regression on a time series where my output variable is not a value in the time series. I have measurements of a time series $(x_1, x_2, \cdots, x_n)$ and want to predict a variable $y$ which is not a measurement of the time series. The obvious first option would be some simple linear regression, but that doesn't take into account the time series nature of the data. The time series methods that I'm looking at seem to be related to forecasting the next value in the series. What methods take the temporal nature of the data into account but aren't forecasting? EDIT: I have a collection of time series and I want to predict a value for each time series. The value I want to predict is a slightly strange one: it is the time to an event happening. So it is actually two values: did the event happen, and when did it happen? Thanks
Regression with time series data that isn't forecasting
CC BY-SA 4.0
null
2023-03-31T23:10:10.073
2023-04-02T09:45:09.640
2023-04-02T06:37:53.073
105267
105267
[ "time-series", "regression" ]
One way to approach the problem would be to first classify the time series according to whether the event did or did not happen. Then for those where the event did happen, run a regression to predict when it happened. While you can use standard classification and regression techniques for this, as you point out, these ignore the temporal dimension of the time series. There are many classifiers designed for classifying time series. The classic one is a one nearest neighbour classifier that uses dynamic time warping (DTW) as the distance measure (1-NN DTW). If you're using python, both the [sktime](https://github.com/sktime/sktime) and [tslearn](https://github.com/tslearn-team/tslearn) packages include implementations of 1-NN DTW (as well as other time series classifiers). Some more recent, state-of-the-art classifiers include [HIVE-COTE 2.0](http://arxiv.org/abs/2104.07551) and [MultiRocket](https://link.springer.com/10.1007/s10618-022-00844-1). There's been less research into non-forecasting regression (sometimes called extrinsic regression) for time series, however many time series classifiers can be fairly easily adapted for extrinsic regression. For instance, using 1-NN DTW, you can simply replace the 1-NN classifier with a 1-NN regressor. Tan et al.'s paper [Time series extrinsic regression](https://link.springer.com/10.1007/s10618-021-00745-9) benchmarks a number of time series extrinsic regression methods. If you are interested in deep learning methods, you might find Foumani et al.'s survey paper [Deep Learning for Time Series Classification and Extrinsic Regression: A Current Survey](http://arxiv.org/abs/2302.02515) useful (disclaimer: I'm a co-author of this paper).
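A small tslearn sketch of the two-stage idea above, using 1-NN with DTW for the classification stage; the data is random placeholder data, and the regression stage is only indicated in a comment (swap in a DTW-based nearest-neighbour regressor, if your tslearn version provides one, or another extrinsic regression method):

```
import numpy as np
from tslearn.neighbors import KNeighborsTimeSeriesClassifier

# Toy data: 30 univariate series of length 50, binary label "did the event happen".
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 50, 1))
y = rng.integers(0, 2, size=30)

# Stage 1: 1-NN with DTW distance to classify whether the event happened at all.
clf = KNeighborsTimeSeriesClassifier(n_neighbors=1, metric="dtw")
clf.fit(X[:20], y[:20])
print(clf.predict(X[20:]))

# Stage 2: for the series predicted positive, fit a nearest-neighbour *regressor*
# with the same DTW metric (or another time series extrinsic regression method)
# to predict *when* the event happened.
```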
Time series regression
Counting of events is probably best done as a separate software component that takes the output of your event classifier. The event classifier model would output for each time-step two probabilities: one for started and one for stopped. The easiest way to count the events is to have a state machine that compares the probabilities against a fixed threshold to create discrete events. And once you have discrete events, counting is trivial. Pseudo-code below: ``` # returns a tuple with (new_state, event_name) # event_name can be None if no event occurred def next_event(state, start_prob, stop_prob): start_threshold = 0.5 stop_threshold = 0.5 if state == 'running': if stop_prob > stop_threshold: return ("stopped", "stop") if state == 'stopped' or state == 'unknown': if start_prob > start_threshold: return ("running", "start") return (state, None) # no change ... # initialization start_counts = 0 state = 'unknown' ... # for every new time window X event_prob = model.predict(X) state, event = next_event(state, event_prob[0], event_prob[1]) if event is not None and event == 'start': start_counts += 1 .... ```
120653
1
120655
null
1
29
I have a simple linear function y = w0 + w1 * x, where w0 and w1 are weights, And I'm trying to implement a gradient descent for it. I wrote the function and tested in on the data(a dataset of two columns X and Y, Y is dependent on X). For the first few iterations it goes as intended, my gradient vector decreases and I expect it to reach preselected treshhold and stop there. But at some point vector starts to increase, I don't know what mistake I did in my code, please help me to find it. Thanks. Here's the code: ``` def squared_error(prediction, y): error = np.sum((prediction-y)**2)/len(y) return error def gradient_step(x, y, prediction): w0_derivative = np.sum(prediction-y) w1_derivative = np.sum((prediction-y)*x) return [w0_derivative, w1_derivative] def gradient_descent(x, y): errors = [] eps = 1e-6 step = 0.00001 current_error = 0 weights = np.random.rand(1,2)[0] L = len(x) t = 0 while np.linalg.norm(weights) > eps: print(t, weights, np.linalg.norm(weights), current_error) prediction = weights[0]+weights[1]*x current_error = squared_error(prediction, y) errors.append(current_error) gradient = gradient_step(x, y, prediction) weights[0] = weights[0] - step * 2/L * gradient[0] weights[1] = weights[1] - step * 2/L * gradient[1] t+=1 return t, weights ``` (If I can somehow hide the below output please clarify) And here's the sample output, you can see that on iteration 38 vector norm starts to increase thus it can't reach the stopping threshold. ``` iteration: 0 weights: [0.31865964 0.70571233] vector norm: 0.7743215427455836 current error: 0 iteration: 1 weights: [0.3182195 0.64808332] vector norm: 0.7219942084596928 current error: 539.9063798449935 iteration: 2 weights: [0.31792583 0.60922537] vector norm: 0.6871916691263193 current error: 261.86786860245604 iteration: 3 weights: [0.31773094 0.58302432] vector norm: 0.6639806537564518 current error: 135.4577412755645 iteration: 4 weights: [0.31760264 0.56535753] vector norm: 0.6484601525092629 current error: 77.98541136268015 iteration: 5 weights: [0.31751924 0.55344519] vector norm: 0.6380595931367428 current error: 51.85562952403169 iteration: 6 weights: [0.31746612 0.54541294] vector norm: 0.6310784575355897 current error: 39.97572738325473 iteration: 7 weights: [0.31743342 0.53999696] vector norm: 0.6263870177268759 current error: 34.57452859878733 iteration: 8 weights: [0.31741449 0.53634506] vector norm: 0.62323188578263 current error: 32.118870272836105 iteration: 9 weights: [0.31740483 0.53388265] vector norm: 0.6211090962595095 current error: 31.002400996852963 iteration: 10 weights: [0.31740144 0.53222227] vector norm: 0.6196807404591702 current error: 30.494793604390054 iteration: 11 weights: [0.31740226 0.5311027 ] vector norm: 0.6187198631788634 current error: 30.264005077205262 iteration: 12 weights: [0.31740593 0.53034777] vector norm: 0.6180738454010467 current error: 30.159072160707378 iteration: 13 weights: [0.31741152 0.52983871] vector norm: 0.6176399692755613 current error: 30.11135945832206 iteration: 14 weights: [0.3174184 0.52949544] vector norm: 0.6173490616027659 current error: 30.08966190842252 iteration: 15 weights: [0.31742615 0.52926396] vector norm: 0.6171545204075297 current error: 30.079792139481196 iteration: 16 weights: [0.3174345 0.52910785] vector norm: 0.6170249413446866 current error: 30.075299867461972 iteration: 17 weights: [0.31744324 0.52900257] vector norm: 0.6169391575773583 current error: 30.073252472741405 iteration: 18 weights: [0.31745224 0.52893155] vector norm: 0.6168829006509288 current 
error: 30.072316640723134 iteration: 19 weights: [0.31746143 0.52888364] vector norm: 0.6168465514539311 current error: 30.0718861803421 iteration: 20 weights: [0.31747074 0.52885132] vector norm: 0.6168236248664436 current error: 30.071685487019835 iteration: 21 weights: [0.31748013 0.52882949] vector norm: 0.6168097485077322 current error: 30.071589257220854 iteration: 22 weights: [0.31748957 0.52881476] vector norm: 0.616801974365135 current error: 30.071540521731656 iteration: 23 weights: [0.31749906 0.52880479] vector norm: 0.6167983147497292 current error: 30.07151337951737 iteration: 24 weights: [0.31750857 0.52879805] vector norm: 0.6167974294520366 current error: 30.07149605468038 iteration: 25 weights: [0.31751809 0.52879348] vector norm: 0.6167984148216327 current error: 30.07148319331257 iteration: 26 weights: [0.31752763 0.52879038] vector norm: 0.6168006615593393 current error: 30.071472361261364 iteration: 27 weights: [0.31753717 0.52878826] vector norm: 0.616803758834994 current error: 30.071462451839917 iteration: 28 weights: [0.31754672 0.52878681] vector norm: 0.6168074296388016 current error: 30.07145296189346 iteration: 29 weights: [0.31755627 0.5287858 ] vector norm: 0.6168114871914999 current error: 30.07144366266287 iteration: 30 weights: [0.31756583 0.5287851 ] vector norm: 0.6168158055533827 current error: 30.07143445014285 iteration: 31 weights: [0.31757539 0.5287846 ] vector norm: 0.6168202998069839 current error: 30.07142527704745 iteration: 32 weights: [0.31758494 0.52878424] vector norm: 0.6168249126949067 current error: 30.071416121878023 iteration: 33 weights: [0.3175945 0.52878398] vector norm: 0.6168296056101267 current error: 30.071406974860405 iteration: 34 weights: [0.31760406 0.52878377] vector norm: 0.6168343525210258 current error: 30.071397831550584 iteration: 35 weights: [0.31761362 0.52878361] vector norm: 0.6168391358752219 current error: 30.071388689928252 iteration: 36 weights: [0.31762318 0.52878348] vector norm: 0.6168439438376392 current error: 30.071379549074727 iteration: 37 weights: [0.31763274 0.52878336] vector norm: 0.61684876842822 current error: 30.0713704085725 iteration: 38 weights: [0.3176423 0.52878326] vector norm: 0.6168536042662351 current error: 30.071361268231524 iteration: 39 weights: [0.31765186 0.52878317] vector norm: 0.6168584477236104 current error: 30.071352127965643 iteration: 40 weights: [0.31766142 0.52878308] vector norm: 0.6168632963540365 current error: 30.071342987735534 iteration: 41 weights: [0.31767098 0.528783 ] vector norm: 0.6168681485080346 current error: 30.07133384752333 iteration: 42 weights: [0.31768054 0.52878292] vector norm: 0.6168730030734078 current error: 30.07132470732093 iteration: 43 weights: [0.3176901 0.52878284] vector norm: 0.6168778593002311 current error: 30.071315567124664 iteration: 44 weights: [0.31769966 0.52878276] vector norm: 0.6168827166828511 current error: 30.07130642693288 iteration: 45 weights: [0.31770922 0.52878269] vector norm: 0.6168875748803186 current error: 30.071297286744738 iteration: 46 weights: [0.31771878 0.52878261] vector norm: 0.6168924336627406 current error: 30.071288146559965 iteration: 47 weights: [0.31772834 0.52878254] vector norm: 0.6168972928751059 current error: 30.07127900637834 iteration: 48 weights: [0.3177379 0.52878246] vector norm: 0.6169021524128935 current error: 30.071269866199884 iteration: 49 weights: [0.31774746 0.52878239] vector norm: 0.6169070122056274 current error: 30.071260726024477 iteration: 50 weights: [0.31775702 0.52878231] 
vector norm: 0.6169118722057866 current error: 30.071251585852142 iteration: 51 weights: [0.31776658 0.52878224] vector norm: 0.616916732381328 current error: 30.071242445682856 iteration: 52 weights: [0.31777614 0.52878216] vector norm: 0.6169215927106454 current error: 30.07123330551661 iteration: 53 weights: [0.3177857 0.52878209] vector norm: 0.6169264531791692 current error: 30.071224165353485 iteration: 54 weights: [0.31779526 0.52878201] vector norm: 0.6169313137770746 current error: 30.071215025193343 iteration: 55 weights: [0.31780482 0.52878194] vector norm: 0.6169361744977363 current error: 30.0712058850363 iteration: 56 weights: [0.31781438 0.52878186] vector norm: 0.6169410353366862 current error: 30.071196744882265 iteration: 57 weights: [0.31782394 0.52878179] vector norm: 0.6169458962909106 current error: 30.0711876047313 iteration: 58 weights: [0.3178335 0.52878171] vector norm: 0.6169507573583767 current error: 30.07117846458336 iteration: 59 weights: [0.31784306 0.52878164] vector norm: 0.6169556185377126 current error: 30.07116932443854 iteration: 60 weights: [0.31785262 0.52878157] vector norm: 0.6169604798279933 current error: 30.071160184296716 iteration: 61 weights: [0.31786218 0.52878149] vector norm: 0.6169653412285931 current error: 30.071151044158 iteration: 62 weights: [0.31787174 0.52878142] vector norm: 0.6169702027390902 current error: 30.071141904022262 iteration: 63 weights: [0.3178813 0.52878134] vector norm: 0.6169750643591987 current error: 30.071132763889636 iteration: 64 weights: [0.31789086 0.52878127] vector norm: 0.6169799260887253 current error: 30.071123623760045 iteration: 65 weights: [0.31790042 0.52878119] vector norm: 0.6169847879275386 current error: 30.071114483633497 iteration: 66 weights: [0.31790998 0.52878112] vector norm: 0.6169896498755493 current error: 30.071105343510006 iteration: 67 weights: [0.31791954 0.52878104] vector norm: 0.6169945119326963 current error: 30.07109620338955 iteration: 68 weights: [0.3179291 0.52878097] vector norm: 0.6169993740989372 current error: 30.071087063272184 iteration: 69 weights: [0.31793866 0.52878089] vector norm: 0.6170042363742433 current error: 30.071077923157834 iteration: 70 weights: [0.31794822 0.52878082] vector norm: 0.6170090987585936 current error: 30.07106878304657 ```
Gradient vector starts to increase at some point, gradient descent from scratch
CC BY-SA 4.0
null
2023-04-02T16:34:56.057
2023-04-02T19:34:39.410
null
null
148568
[ "python", "linear-regression", "gradient-descent" ]
Keep track of the delta weight, not the weight itself. ``` def gradient_descent(x, y): errors = [] eps = 1e-7 step = 0.001 current_error = 0 weights = np.random.rand(1,2)[0] new_weights = np.zeros(2) delta_weights = weights.copy() L = len(x) t = 0 while np.linalg.norm(delta_weights) > eps: # if t > 40: # break print(t, weights, np.linalg.norm(delta_weights), current_error) prediction = weights[0]+weights[1]*x current_error = squared_error(prediction, y) errors.append(current_error) gradient = gradient_step(x, y, prediction) new_weights[0] = weights[0] - step * 2/L * gradient[0] new_weights[1] = weights[1] - step * 2/L * gradient[1] delta_weights[0] = new_weights[0] - weights[0] delta_weights[1] = new_weights[1] - weights[1] weights[0] = new_weights[0] weights[1] = new_weights[1] t+=1 return t, weights x = np.random.randn(100) * 5 y = 2 + 3 * x + np.random.randn(100) import matplotlib.pyplot as plt plt.scatter(x, y) plt.show() gradient_descent(x, y) ```
Gradient descent with infinite gradient value
> Is the method replacing infinitely large values to some large but finite values correct? Yes. For example, the same problem happens for the logarithm in cross-entropy loss function, i.e. $p_i \text{log}(p'_i)$ when $p'_i \rightarrow 0$. This is avoided by replacing $\text{log}(x)$ with $\hat{\text{log}}(x) = \text{log}(x+\epsilon)$ for some small $\epsilon$. Similarly, you are changing $f(x)$ in the denominator to $\hat{f}(x) = max(\epsilon, f(x))$. However, I would suggest $\hat{f}(x) = f(x) + \epsilon$ instead of a cut-off threshold. This way, the difference in $f(x_1) < f(x_2) < \epsilon$ would not be ignored unlike the max cut-off.
120723
1
120760
null
0
37
I'm working with a colleague concurrently between R and MS Excel looking at credit risk scorecard modelling. In Excel he has calculated what he says is the gini coefficient for certain variables, which he has calculated by ranking the variable from lowest to highest, calculating the cumulative number of insolvencies, cumulative population, and using these to calculate a "width of the ranking" and ultimately the area explained by the variable. The model is a simple logistic regression where I can add more variables or different variables depending on what people ask about. `mylogit <- glm(insolvency ~ LogPnL, data=my_data, family-"binomial")` However, in the Excel document the output from the model isn't used in the above calculations. I researched how to calculate the gini coefficient in R and ended up calculating the AUC of a ROC curve like so: ``` # Full Model predicted <- predict(mylogit, my_datafs, type="response") #calculate AUC aucc <- auc(my_datafs$Insolve,predicted) gin <- 2*aucc-1 giin <- gin/(1-0.006059979) #where 0.006059979 is the insolvency rate print(giin) ``` And this gives an entirely different number to what my colleague gets (for instance, I may get 0.6% whilst he gets 30%). I also tried a few other approaches: ``` library(WVPlots) WVPlots::GainCurvePlot(my_datafs,"LROC","Insolve",title="Test Plot") ``` and ``` roc(my_datafs$Insolve ~ mylogit$fitted.values, plot=TRUE, legacy.axes = TRUE) ``` I seem to always get the same values using these approaches, but this is entirely different to what my colleague has calculated. So I asked him if this "gini coefficient" calculation has another name as when research it I only got the ROC and AUC stuff, and things about the Lorenz curve and economics. He suggested looking into gains tables/lift charts. I also looked into this and followed this site [here](https://www.r-bloggers.com/2021/09/measuring-model-performance-using-a-gains-table/) but this does not work for me at all and just gives constant level values. So my question is, does anyone know what my colleague is calculating and how I can do this in R and verify what has been done? The data looks something like this (where 1s represent insolvency in column a): ``` Insolvency LogPnL LogAssets LogReturnoncapital 0 13.45244524 17.26029721 -4.555781778 0 -13.16158409 17.26053342 -0.610391211 0 15.33151653 17.26059723 -4.62544939 0 15.24483998 17.26060402 -1.08183692 0 -12.40954396 17.26068645 -3.763048412 0 15.17672144 17.26070709 -1.438018097 0 15.16098292 17.26075672 -1.438018097 0 15.21341303 17.26084054 -4.852438172 0 15.62576461 17.26085241 -1.911767818 0 15.13992952 17.26094809 -2.296309704 0 15.1798149 17.26094809 -0.742112526 0 15.94790027 17.26094809 -1.719503458 0 15.44470345 17.26105944 -0.890755178 1 -15.53863423 17.26107564 -0.779659645 1 14.64142528 17.26116973 -2.536352638 0 -14.06471164 17.2611713 -4.707113261 0 15.37648401 17.26119409 -1.812813986 0 15.43226742 17.26123242 -1.245680522 0 14.11857373 17.26123506 -3.67956894 1 14.25847374 17.26129203 -22.89380415 0 -14.48845503 17.26129882 -0.3949376 0 13.635187 17.26129882 -4.97512426 0 14.88228812 17.26129882 -1.299654895 0 13.46595308 17.26136258 -4.948858859 0 15.6823775 17.26142633 -0.976068273 0 12.80490915 17.26145821 -2.103263152 0 14.80132735 17.26149008 -6.06110278 0 14.94400522 17.26152196 -2.778127905 0 15.07907215 17.26152196 -6.098750561 ```
Model/variable gini?
CC BY-SA 4.0
null
2023-04-05T09:09:15.160
2023-04-06T18:28:59.653
null
null
148668
[ "regression", "r", "finance", "gini-index" ]
Unfortunately, there are multiple measures called gini coefficient or gini index, and they are used for different things in different domains. So you are for sure not the first person to face this problem. Luckily, for your domain it is clear which coefficient to use. ##### Gini coefficient for (credit risk) scorecards The gini coefficient used to evaluate the predictive power of a credit risk scorecard is given by $$\text{gini} = 2 \cdot \text{rocauc} - 1$$ This is (one of) the standard measures for evaluating credit risk scorecards, so this should be the one your colleague is calculating. So for your code, just remove the line `giin <- gin/(1-0.006059979)` and use `gin`, and you should be fine. Disclaimer: I am no expert in R, but if the code does what it implies, then the change should be enough. If your colleague's values differ that strongly from yours, he probably does not compute the gini coefficient that is common for credit risk scorecards. ##### Some Background Both roc-auc and gini are measures that evaluate the order of the scores, not their actual values. So there should be no difference whether you use the linear term of the logistic regression as the score or the logistic mapping into probabilities. The roc-auc can be interpreted as the probability that a random insolvency case gets a higher risk score than a random non-insolvency case. This means that - a perfect model, where all insolvency cases get a higher score, gets a roc-auc of 100%. - a model that assigns random scores independent of any feature or information gets a roc-auc of 50%. - a model that often estimates the risk of insolvency cases below that of non-insolvency cases might be below 50%. Similarly, the gini assumes that all models are better than random scores. Hence the baseline of a random score is set to 0% gini and a perfect model gets a gini of 100%. If it happens that the gini is below 0, swapping the scores (e.g. `new_score = 1 - old_score`) will remove the negative sign of the gini.
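For illustration, the same calculation in Python with scikit-learn (the R code in the question only needs the extra rescaling line removed, as explained above); the toy scores are random:

```
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=1000)          # 1 = insolvency
score = y * 0.5 + rng.normal(size=1000)    # toy risk score correlated with the label

auc = roc_auc_score(y, score)
gini = 2 * auc - 1   # the scorecard gini; no further rescaling by the insolvency rate
print(round(auc, 3), round(gini, 3))
```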
What exactly is a Gini Index
A class is simply a label you use to categorize a bunch of objects. For example, if you were trying to create an email filter, you might have a `spam` class and `non-spam` class. A Gini index is used in decision trees. A single decision in a decision tree is called a node, and the Gini index is a way to measure how "impure" a single node is. Suppose you have a data set that lists several attributes for a bunch of animals and you're trying to predict if each animal is a mammal or not. You would have two classes, `mammal`, and `not-mammal`. You start making your decision tree by asking if an animal is warm blooded or not and split your data set into two groups based on this splitting criteria. If an animal is cold blooded, it belongs to the `not-mammal` class, however, if an animal is warm-blooded, it may or may not belong to the `mammal` class. This new node (e.g., decision) might contain a mix, or group, of animals that may or may not be mammals (i.e., the group could contain birds and mammals). A 50/50 split between `mammal`s and `non-mammal`s at this node would mean the node is impure (with a Gini index of 0.5). A completely pure node would have a Gini index of 0 and would indicate a node is made up of only 1 class.
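A tiny sketch of the node impurity calculation behind the numbers above (Gini index = 1 minus the sum of squared class proportions):

```
import numpy as np

def gini_impurity(class_counts):
    """Gini index of a decision tree node: 1 - sum_i p_i^2."""
    p = np.asarray(class_counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

print(gini_impurity([50, 50]))   # 0.5 -> maximally impure 50/50 node
print(gini_impurity([100, 0]))   # 0.0 -> pure node, a single class
```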
120726
1
120979
null
1
22
I'm trying to build GPT2 from scratch. I understand how to convert each word in a sentence to its respective token index, and each token is then converted to its respective word embedding vector. I also understand there needs to be a fixed length for each input sequence, e.g. the max length of all sentences input into the transformer is 50 tokens, and for all sentences shorter than that, padding token vectors consisting of nothing but zeroes fill the space where the additional word vectors would be. I get that each input sequence needs to have a start token at the beginning, as well as a stop token after the last word and before the padding vectors. The integer values corresponding to the start and stop token indexes are somewhat arbitrary, but I still don't understand what the actual values of the start and stop token embeddings should be. Should they just also be vectors of zeroes? Are these values also arbitrary?
What should the numerical values of the <startofsentence> and <endofsentence> token vectors be?
CC BY-SA 4.0
null
2023-04-05T11:53:38.800
2023-04-17T14:29:40.360
2023-04-17T14:08:09.357
13165
13165
[ "nlp", "word-embeddings", "transformer", "gpt", "tokenization" ]
As commented by @Erwan, the start/end of sequence tokens are like any other token: they are part of the embedding table and they are identified by their index to that table. The vector values of the start/end tokens in the embedding table are learned during the training of the network, like the other tokens.
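A minimal sketch of how this looks in practice (my addition, assuming PyTorch; the vocabulary size, width and token indices are hypothetical): the start/end tokens simply get their own rows in the embedding table, initialised like any other token and updated by backpropagation:

```python
import torch
import torch.nn as nn

vocab_size = 50000                # hypothetical vocabulary size
d_model = 768                     # hypothetical embedding width
PAD_ID, BOS_ID, EOS_ID = 0, 1, 2  # arbitrary reserved indices for pad / start / end tokens

# One embedding table covers ordinary and special tokens alike; padding_idx keeps the
# pad row at zeros and excludes it from gradient updates.
embedding = nn.Embedding(vocab_size, d_model, padding_idx=PAD_ID)

# "<bos> some tokens <eos> <pad> <pad>" expressed as token ids
token_ids = torch.tensor([[BOS_ID, 37, 512, 90, EOS_ID, PAD_ID, PAD_ID]])
vectors = embedding(token_ids)    # shape (1, 7, 768); BOS/EOS rows are learned, the PAD row stays zero
print(vectors.shape)
```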
What is purpose of the [CLS] token and why is its encoding output important?
CLS stands for classification and it's there to represent sentence-level classification. In short, this token was introduced to make BERT's pooling scheme work. I suggest reading up on this [blog](https://datasciencetoday.net/index.php/en-us/nlp/211-paper-dissected-bert-pre-training-of-deep-bidirectional-transformers-for-language-understanding-explained) where this is also covered in detail.
120727
1
120742
null
0
50
I have data that look like this: ``` Time Rain1Hour Rain6Hour 0 0 NaN 1 1 NaN 2 1 NaN 3 1 NaN 4 1 NaN 5 1 NaN 6 1 NaN 7 0 NaN ``` Where Rain1Hour is the rain in the last hour and Rain6Hour is the accumulated rain in the last 6 hours, which means I want the sum of the rain in the last 6 hours using the data from the Rain1Hour column. How do I fill the column Rain6Hour with the data from Rain1Hour. I want it to be like: ``` Rain6Hour 0 1 2 3 4 5 6 5 ``` For example, the fourth row is 3 because it has been raining a quantity of 1 each hour the previous 3 hours and a quantity of 0 in the hour 0. I am using Python and the data is in a Pandas dataframe. EDIT: After solving the question using the rolling function that lcrmorin mentioned, I have now another one closely related to this. Is it possible to only sum over some specific rows? For example, if I am currently in time 6, I want to sum the value of the rows time=6, time=6-2, and time = 6-4, of the column Rain1Hour and assign it to another column.
Fill NaN values with values from another column
CC BY-SA 4.0
null
2023-04-05T12:09:25.550
2023-04-05T23:14:39.110
2023-04-05T15:43:29.847
148672
148672
[ "python", "pandas", "data-cleaning" ]
Use the [rolling](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.rolling.html) function together with the [sum](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.sum.html) function, as such: ``` df['Rain6Hour'] = df['Rain1Hour'].rolling(min_periods=1, window=6).sum() ```
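For illustration (my addition, not part of the original answer), running this on the data from the question reproduces the expected column:

```python
import pandas as pd

df = pd.DataFrame({"Time": range(8), "Rain1Hour": [0, 1, 1, 1, 1, 1, 1, 0]})
df["Rain6Hour"] = df["Rain1Hour"].rolling(min_periods=1, window=6).sum()
print(df["Rain6Hour"].tolist())  # [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 5.0]
```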
Fill missing values(NaN) based on the previous row that contains a specific value?
You can `groupby` by `'stock'`, forward fill missing values with `ffill` and use its result in `fillna`. For example: ``` date stock price 22/12/20 MSFT 87 22/12/20 AAPL 99 22/12/20 FCA 81 23/12/20 MSFT NaN 23/12/20 AAPL 100 23/12/20 FCA 80 24/12/20 MSFT 91 24/12/20 AAPL NaN 24/12/20 FCA NaN df.fillna(df.groupby('stock').ffill()) ``` Result: ``` date stock price 0 22/12/20 MSFT 87.0 1 22/12/20 AAPL 99.0 2 22/12/20 FCA 81.0 3 23/12/20 MSFT 87.0 4 23/12/20 AAPL 100.0 5 23/12/20 FCA 80.0 6 24/12/20 MSFT 91.0 7 24/12/20 AAPL 100.0 8 24/12/20 FCA 80.0 ```
120759
1
120762
null
0
24
I have a year's worth of electricity power data on 15-minute intervals, joined with weather data and time-of-week one-hot dummy variables. Is using a train/test split an okay approach for validating the model? I am attempting to predict electricity with explanatory variables like weather and time-of-week dummies. For starters, I weeded out a bunch of dummy variables with OLS regression in statsmodels and then attempted to fit the model with XGBoost. Would anyone have some tips for a better approach to fitting time series data, validating the ML model, and then using regression to predict electricity? Some of my Python code for the ML training process: ``` # shuffle the DataFrame rows df2 = df2.sample(frac=1) train, test = train_test_split(df2, test_size=0.2) regressor = XGBRegressor() X_train = np.array(train.drop(['total_main_kw'], axis=1)) y_train = np.array(train['total_main_kw']) X_test = np.array(test.drop(['total_main_kw'], axis=1)) y_test = np.array(test['total_main_kw']) regressor.fit(X_train, y_train) predicted_kw_xgboost = regressor.predict(X_test) y_test_df = pd.DataFrame({'test_power':y_test}) y_test_df['predicted_kw_xgboost'] = predicted_kw_xgboost y_test_df.plot(figsize=(25,8)) ``` This will plot the trained model predicting the test dataset, but I have not done any verification of whether the data is stationary or not: [](https://i.stack.imgur.com/NgVeb.png) ``` mse = mean_squared_error(y_test_df['test_power'], y_test_df['predicted_kw_xgboost']) print("MEAN SQUARED ERROR: ",mse) print("ROOT MEAN SQUARED ERROR: ",round(np.sqrt(mse),3)," IN kW") MEAN SQUARED ERROR: 4.188126272978789 ROOT MEAN SQUARED ERROR: 2.046 IN kW ``` Thanks for any tips; still learning in this area.
Validating ML regression model and predictions
CC BY-SA 4.0
null
2023-04-06T17:26:23.517
2023-04-06T19:48:28.783
null
null
66386
[ "machine-learning", "python", "time-series", "regression", "xgboost" ]
- Ideally, your approach for validating the model should be a train-validation-test approach. i.e. the model is trained on the training data, the results of the model are then validated on a set of validation data that is segregated from the training data (e.g. a 70/30 split), and then the model is tested on previously unseen data. This last point is important – the test data must be completely separate from that used to train and validate the model. Otherwise, data leakage occurs and overfitting results – whereby the model gives a falsely high gauge of accuracy and will go on to perform poorly on real-world data. - The time series gives the preliminary appearance of stationarity, but a more formal test such as KPSS could be used to verify this. - The root mean squared error of 2.046 kw does not mean anything by itself. Rather, one should obtain the mean value in the test set, and then compare this to the root mean squared error. For instance, if the mean of the test set is 200 kw, then the RMSE is quite low in comparison indicating strong predictive performance. However, if the mean of the test set were to be 1 kw, then the RMSE is conversely quite high in comparison – indicating weak predictive performance.
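As a rough sketch of the first two points above (my addition; the arrays are synthetic placeholders for the 15-minute rows), a chronological split plus a KPSS stationarity check might look like this, assuming scikit-learn, statsmodels and xgboost:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.metrics import mean_squared_error
from statsmodels.tsa.stattools import kpss
from xgboost import XGBRegressor

# Placeholder features and target, ordered by time
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = rng.normal(loc=10, size=1000)

# KPSS: the null hypothesis is stationarity, so a small p-value suggests non-stationarity
stat, p_value, _, _ = kpss(y, regression="c", nlags="auto")
print("KPSS p-value:", p_value)

# Chronological folds: each fold trains on the past and validates on the future
for train_idx, val_idx in TimeSeriesSplit(n_splits=5).split(X):
    model = XGBRegressor().fit(X[train_idx], y[train_idx])
    pred = model.predict(X[val_idx])
    rmse = np.sqrt(mean_squared_error(y[val_idx], pred))
    print("fold RMSE:", round(rmse, 3), "vs. mean of fold target:", round(y[val_idx].mean(), 3))
```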
Predicting the likelihood that a prediction from a linear regression model is accurate
In case you are talking about providing a certain interval for your predictions, what you might need is to add a confidence interval to your linear regression predictor, something you can do via a resampling method like bootstrapping, which is a robust way to find prediction intervals. One key advantage is that it does not assume any kind of distribution, being a distribution-free method for your predictions and, if needed, for your regression coefficient estimates. The steps would be (a sketch of these steps follows below): - Draw n random samples (with replacement) from your dataset, where n is the bootstrap sample size - Fit a linear regression on the bootstrap sample from step 1 and predict a value - Take a single residual at random from the original regression fit, add it to the predicted value and save the result. - Repeat steps 1 to 3 several times (1k times, for instance) - Find the desired percentiles of your interval (2.5th to 97.5th, for instance) Source of info in this book On the other hand, if you mean providing a generic confidence metric value for your model, you should compute for instance the [MSE](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_squared_error.html#sklearn.metrics.mean_squared_error) or [MAE](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.mean_absolute_error.html#sklearn.metrics.mean_absolute_error) on a test set.
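A rough sketch of those steps (my addition; the data and variable names are hypothetical), assuming NumPy and scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))                             # placeholder training features
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(size=200)  # placeholder target
x_new = np.array([[0.2, -0.1, 0.4]])                       # point we want an interval for

# Residuals of the original fit (used in step 3)
base_resid = y - LinearRegression().fit(X, y).predict(X)

boot_preds = []
for _ in range(1000):                                      # steps 1-4
    idx = rng.integers(0, len(y), size=len(y))             # resample with replacement
    model = LinearRegression().fit(X[idx], y[idx])         # refit on the bootstrap sample
    boot_preds.append(model.predict(x_new)[0] + rng.choice(base_resid))  # predict + random residual

lower, upper = np.percentile(boot_preds, [2.5, 97.5])      # step 5
print(f"95% prediction interval: [{lower:.2f}, {upper:.2f}]")
```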
120764
1
120766
null
5
2076
I keep reading about how the latest and greatest LLMs have billions of parameters. As someone who is more familiar with standard neural nets but is trying to better understand LLMs, I'm curious whether an LLM parameter is the same as a NN weight, i.e. is it basically a number that starts as a random coefficient and is adjusted in a way that reduces loss as the model learns? If so, why do so many researchers working in the LLM space refer to these as parameters instead of just calling them weights?
How does an LLM "parameter" relate to a "weight" in a neural network?
CC BY-SA 4.0
null
2023-04-06T21:53:54.100
2023-05-21T03:04:09.467
2023-05-20T02:03:06.380
2474
148720
[ "machine-learning", "nlp", "terminology", "gpt" ]
Yes, the parameters in a large language model (LLM) are similar to the weights in a standard neural network. In both LLMs and neural networks, these parameters are numerical values that start as random coefficients and are adjusted during training to minimize loss. These parameters include not only the weights that determine the strength of connections between neurons but also the biases, which affect the output of neurons. In a large language model (LLM) like GPT-4 or other transformer-based models, the term "parameters" refers to the numerical values that determine the behavior of the model. These parameters include weights and biases, which together define the connections and activations of neurons within the model. Here's a more detailed explanation: - Weights: Weights are numerical values that define the strength of connections between neurons across different layers in the model. In the context of LLMs, weights are primarily used in the attention mechanism and the feedforward neural networks that make up the model's architecture. They are adjusted during the training process to optimize the model's ability to generate relevant and coherent text. - Biases: Biases are additional numerical values that are added to the weighted sum of inputs before being passed through an activation function. They help to control the output of neurons and provide flexibility in the model's learning process. Biases can be thought of as a way to shift the activation function to the left or right, allowing the model to learn more complex patterns and relationships in the input data. The training process involves adjusting these parameters (weights and biases) iteratively to minimize the loss function. This is typically done using gradient descent or a variant thereof, such as stochastic gradient descent or Adam optimizer. The loss function measures the difference between the model's predictions and the true values (e.g., the correct next word in a sentence). By minimizing the loss, the model learns to generate text that closely resembles the patterns in its training data. Researchers often use the term "parameters" instead of "weights" to emphasize that both weights and biases play a crucial role in the model's learning process. Additionally, using "parameters" as a more general term helps communicate that the model is learning a complex set of relationships across various elements within the architecture, such as layers, neurons, connections, and biases.
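To make the terminology concrete, here is a small sketch (my addition) showing that the reported parameter counts are simply the sizes of all trainable tensors, weights and biases alike, assuming PyTorch:

```python
import torch.nn as nn

# A toy feed-forward block, loosely in the spirit of a transformer's MLP sub-layer
model = nn.Sequential(
    nn.Linear(512, 2048),  # weight: 512*2048 values, bias: 2048 values
    nn.ReLU(),
    nn.Linear(2048, 512),  # weight: 2048*512 values, bias: 512 values
)

total = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(total)  # 2,099,712 "parameters" = both weight matrices plus both bias vectors
```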
How is weight matrix calculated in a neural network?
So I think there are a few concepts being mixed up in your question, I will do my best to address them one by one. The "weight matrix" you refer to (if it's the same one as Aneesh Joshi's post) is related to "attention", a concept in Deep Learning that, in simple terms, changes the priority given to different parts of the input (quite literally like human attention) depending on how it is configured. This is not necessarily a the weight of a single neuron in a neural network, so I will come back to this once I have cleared up the other concepts. To understand Transformers and NLP models, we should go back to just basic neural networks, and to understand basic neural networks we should understand what a neuron is, how it works, and how it is trained. History break: basically for a long time (in particular popularity since the 1800s) a lot of modelling was done using linear models and linear regression (you see the effects of this in classical statistics/econometrics). However, as the name implies, this only captures linear relationships. There were two questions that led to resolving this: (1) how do we capture non-linear relationships? (2) how do we do so automatically in an algorithmic manner? For the first one, many approaches existed, such as Logistic regression. You might notice that in logistic regression, there is still a linear equation that is being estimated, just not as the final output. This disconnect between the input linear equation and output non-linear equation is now known as having an input Linear cell with an Activation function. In logistic regression, the Activation function is the logistic function, whereas in linear regression it is just the Identity function. In case it is still not clear, imagine this: Input -> Linear Function -> Activation Function -> Output. Putting that entire sequence together gives you the Perceptron (first introduced in the 1940s). Methods of optimizing this were done via gradient descent and various algorithms. Gradient descent is probably the most important to keep in mind and helps us answer the second question. Essentially what you are trying to do in any form of automated model development, is you have some linear equation (i.e. $y = \beta\times w + \beta_{0}$), which you feed an input, pass through an activation function (i.e. Identity, sigmoid, ReLu, tanh, etc), then get an output. And you want that output to match a value that you already know (in statistics imagine $y$ and $y_{hat}$. In order to do that (at least in the case of a continuous target class) you need to know how far off is your prediction from the true value. You do this by taking the difference. However the difference can be positive/negative, so usually we have some way of making this semi-definite positive by either squaring (Mean Squared Error) or taking the absolute value (Mean Absolute Error). This is known as our loss function. Essentially we would have developed a good estimator/predictor/model if our loss is 0 (which means that our prediction matches our target value). Linear regression solves this using Maximum Likelihood Estimation. Gradient descent effectively is an iterative algorithm that does the following. First we take the derivative of our loss function with respect to one of our model weights (i.e. w) and we apply something similar to the update that happens in the Newton-Raphson method, namely $w_{new} = w_{old} - step\times\frac{dL}{dw}$ where step is a step-size (either fixed or changing). 
A stopping condition is typically if your change from the new weight to the old weight is sufficiently small you stop updating as you have reached a minimum. In a convex scenario this would also be your global minimum, however classically you have no way of knowing this, which is why multiple algorithms use some approaches like random starts (multiple starting points randomly generated) to try and avoid getting stuck in local minima (aka a loss value that is low but not the lowest it could be). Ok so if you have read that take a quick stretch and process it since I am not entirely sure of the reader's background so it may have been a lot to process or fairly straightforward. So far we covered how to capture non-linear relationships and how to do it in an automated way. So does a single neuron train like that? Yes. Do a collection of neurons train like that? No. Recall that a neuron is just a linear function with an activation function on top that performs some form of gradient descent. However a neural network is a chain of these, sometimes interconnecting, etc. So then how do we get a derivative of the loss which is at the end of a network to the very beginning. Through a technique that took off in the mid-1980s the technique called backpropagation (as a rediscovery of techniques dating back to the 1960s). Essentially we take partial derivatives, and then through an application of the chain rule are easily able to propagate various portions of the loss backwards, updating our weights along the way. This is all done in a single pass of training, and thankfully, automatically. The classical approach is to feed in your entire dataset THEN take the loss gradient, known as Gradient Descent. Or you can feed only a single data point before updating, known as Stochastic Gradient Descent. There is, of course a middle ground, thanks to GPUs, taking a batch of points known as Batch learning. A question that might have come up is, is the derivative of our loss function just linear? No, because the activation can be non-linear, so essentially you are taking the derivatives of these non-linear functions, and that can be computationally expensive. The popular choice today is ReLU (Rectified Linear Unit) which, for some linear output $y$ is basically a max function saying $output = max(y,0)$, that's it. Things like weight initialization make more of an impact as performance across different non-linear activation functions are pretty comparable. Ok so we covered how a neuron works and how a neural network optimizes its weights in a single pass. A side note is you will often hear "train/validation/test" set, which are basically splits of your dataset. You split your dataset prior into a training subset, for, well, training, which is where you modify the weights through the process I described above for the entire training set or each data point (or batches of data points if you are using GPUs/batches in Deep Learning). Usually your data might need to be transformed depending on the functions you are using, and the mistake most practitioners make is pre-processing their data on the entire dataset, whereas the statistically correct way of doing so is on the training set only, and extrapolating to the validation/test set using that (since in the real world you may not have all the information). The validation set is there for you to test out different hyper-parameters (like step-size above, otherwise known as learning rate) or just compare different models. 
The test set is the final set that you use to truly see the quality of your model, after you have finished optimizing/training above. Now we can finally get to your question on attention. As I described a basic neural network, you may have noticed that it receives an input once, does a run through, and then optimizes. Well what if we want to get inputs that require some processing? Like images? Well this is where different architectures come up (and you can read this all over the web, I recommend D2L.ai or FastAI's free course), and for images the common one are Convolutional Neural Networks, which were useful in capturing reoccurring patterns and spatial locality. Sometimes we might want more than one input, aka a previous input influencing how we process this next input, this is where temporal based architectures come in like Recurrent Neural Networks, and this was what was initially used for languages, but chaining a bunch of neurons takes a while to process since we can't parallelize the operations (which is where the speed of neural network training comes from). Plus you would have to have some way of pre-processing your language input, such as converting everything to lowercase, removing non-useful words, tokenizing different parts of words, etc, depending on the task. There was quite a lot to deal with. Up until a few years ago when Attention based models came out called Transformers, and they have pretty much influenced all aspects of Deep Learning by allowing parallelization but also capturing interactions between inputs. A foundational approach is to pre-process inputs using an attention matrix (as you have mentioned). The attention matrix has a bit of complexity in terms of the math (Neuromatch's course covers this well), but to simplify it, it is effectively the shared context between two inputs (one denoted by the columns, the other the rows), like two sentences. The way this is trained (and it depends on the model) is generally by taking an input and converting it into a numerical representation (otherwise known as an embedding) and outer-multiplying these two vectors to produce a symmetric matrix, which obviously has the strongest parts on the main diagonal. The idea here is to then zero out the main diagonal, and then train the network to try and use the remaining cells to fill this main diagonal in (you can once again do this via a neural network and the process described above). Then you can apply this mechanism with two different inputs. In a translation task for example, you would have a sentence in language A and another in B, and their matrix would highlight the shared context, which may not be symmetric depending on the word order/structure, etc. Transformers are not like Recurrent Neural Networks in that they take an input at once and then give an output, but compromises exist and pre-training models is also a thing. There is a lot I haven't touched upon, but the resources I mentioned should be a useful starting point, and I wish you luck in your Data Science journey!
120771
1
120783
null
0
45
I was hoping to ask how to approach setting up a train/test split for a dataset that is provided in two separate .csv files: one as the "train" dataset, and the other as the "test" dataset. I've been taught to utilize sklearn's train_test_split, which usually takes one dataset and splits it into the respective X train/test and y train/test sets, but I can't seem to find any documentation on the approach if the datasets are fed in as two separate data frames. Would the best approach be to merge the two back together and then apply train_test_split? Thanks for the help in advance!
Train Test Split datasets that are provided as a training set and test set
CC BY-SA 4.0
null
2023-04-07T02:31:10.670
2023-04-07T13:07:40.623
null
null
148728
[ "scikit-learn" ]
In cases where the split is already defined (e.g. by two files or by an extra column), you do not need to apply `train_test_split`; just use the given split. For you this would look something like this (assuming you have a function `load_dataset`): ``` X_train, y_train = load_dataset("train.csv") X_test, y_test = load_dataset("test.csv") ``` Nevertheless, it might be a good idea to perform cross-validation on the training dataset for hyper-parameter optimization (a sketch follows below).
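For the last point, a minimal sketch (my addition), assuming scikit-learn and that `X_train`, `y_train`, `X_test`, `y_test` come from the `load_dataset` calls above; the estimator and parameter grid are just placeholders:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Hyper-parameter search uses only the training file; the test file stays untouched
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10]}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.best_estimator_.score(X_test, y_test))  # final check on the held-out test file
```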
How do I split a data set into train and test sets using
The sample size should be on the first axis. i.e. $X.shape = (m, n_x) $ $Y.shape = (m, 1) $ You can simply transpose the data.
120842
1
120950
null
1
39
I know that it is not OK to have too similar data in the train and test set (for example two pictures that differ by only one pixel). I'm trying to find a scientifically valid explanation why it is bad, I mean a paper in a peer-reviewed journal explaining (or even mentioning) this. Couldn't find anything appropriate for several hours. Do you know any reliable source?
Data redundancy between train and test dataset - why is it bad (source needed)
CC BY-SA 4.0
null
2023-04-11T07:52:35.933
2023-04-16T04:39:21.617
2023-04-16T04:39:21.617
83275
53846
[ "machine-learning", "data", "training" ]
There are several reasons why having too similar data in the train and test set is not recommended. One reason is that it can lead to overfitting, where the model performs well on the training data but poorly on the test data, because it has essentially memorized the training data instead of learning generalizable patterns. This is especially true if the training and test data are very similar, as the model may not be able to generalize to new, unseen data. Another reason is that it can give an overly optimistic estimate of model performance, because the model is being tested on data that is very similar to the training data. This is not representative of the true performance of the model on new, unseen data. While there may not be a specific paper addressing the exact scenario you described (two pictures that differ by only one pixel), there are several reasons why having too similar data in the train and test set is generally not recommended in machine learning. A paper that discusses the importance of representative sampling in machine learning is The Importance of Encoding Versus Training with Sparse Coding and Vector Quantization by Adam Coates, Andrew Y. Ng, and Honglak Lee, published in the Journal of Machine Learning Research (JMLR) in 2011. While this paper does not specifically address the scenario you described, it does emphasize the importance of representative training and test data for accurate machine learning performance evaluation.
test data is not a good representation of train data
Here's is my attempt to answer these questions: - Avoid taking any insights from test_data. Do changes WRT the insights taken from the train_set only. However, every change I make should be replicated in test_set as well: The second part is mostly true. Only in some rare cases where you may be applying class balancing techniques to the train set which should not be applied to the test/evaluation set. Test set should represent the real world data as much possible, while, train set should do the same, some tweaks to this data set may be applied independently to make it easy/possible to train with the chosen ML algorithm. However, any scaling/normalization/transformation parameters must be replicated exactly to the test set as well. - Use test_data too for insights and do preprocess accordingly. There should be no harm in conducting an EDA on both train and test datasets and using the combined knowledge for taking feature engineering decisions. - Do something to change the test_data to make it more balanced and representative. NO. As noted in the #1 above, the test data should represent the real world data as much as possible. How else would you rely on the test set's evaluation metrics for the unseen real world data? You can read some more about it here Should I balance the test set when i have highly unbalanced data? - Somethings else :S You should not be worried if a category/class of a certain feature is missing from the test set. However, the reverse is not true: there must not be any new category/class in the test or real world data set which was absent from the train set - the preprocessing routine and model will not know how to handle it. Also, if there are a very low number of unique values or near zero variance in a feature, you should consider dropping it (from both train and test sets). In the end, this should reflect in low feature importance and highlighted by any of the feature selection procedures like this Feature selection
120870
1
120875
null
1
18
I have a dataset of 40K reddit posts in Italian, and I have a sentiment-based dictionary of 9K unique words and phrases, which classifies words as positive or negative. I would like to measure sentiment per reddit post across time, and I noticed that there are several [methods](https://stackoverflow.com/questions/33543446/what-is-the-formula-of-sentiment-calculation) to compute sentiment per post. I am currently using the following equation: [](https://i.stack.imgur.com/Ag1PH.png) I wonder if it has any obvious downsides? The main advantage of course is interpretability, which is straightforward with this method as it produces a score on a theoretical scale from −100 points (extremely negative) to 100 points (extremely positive).
Measuring sentiment using a dictionary-based model
CC BY-SA 4.0
null
2023-04-12T18:01:46.023
2023-04-14T10:54:02.790
2023-04-14T10:54:02.790
137378
137378
[ "machine-learning", "python", "deep-learning", "neural-network", "nlp" ]
The formula: $$\frac{\text{positive}-\text{negative}}{\text{total}}$$ is intuitive and simple. The only problem is that it assumes that all sentiment-related words are of equal strength. For example, the words good and great are both positive but do not express sentiment of the same strength. So one improvement would be to assign weights to sentiment words (in the dictionary) and adjust the formula to take that into account. $$\sum_i{p_i}-\sum_j{n_j}$$ where $p_i$ is the weight of a positive word in the text and $n_j$ is the weight of a negative word in the text (this weighted sum can still be divided by the total number of words if you want to keep a normalized score).
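A small sketch of both variants (my addition; the lexicon, weights and example post are made up for illustration):

```python
# Hypothetical weighted sentiment lexicon: positive weights > 0, negative weights < 0
lexicon = {"buono": 1.0, "ottimo": 2.0, "cattivo": -1.0, "pessimo": -2.0}

def simple_score(tokens, lexicon):
    """(positive - negative) / total on a -1..1 scale (multiply by 100 for points)."""
    pos = sum(1 for t in tokens if lexicon.get(t, 0) > 0)
    neg = sum(1 for t in tokens if lexicon.get(t, 0) < 0)
    return (pos - neg) / len(tokens) if tokens else 0.0

def weighted_score(tokens, lexicon):
    """Sum of word weights, so 'ottimo' counts more than 'buono'."""
    return sum(lexicon.get(t, 0.0) for t in tokens)

post = "un film ottimo ma un po cattivo".split()
print(simple_score(post, lexicon))    # 0.0: one positive and one negative word cancel out
print(weighted_score(post, lexicon))  # 1.0: the strong positive outweighs the weak negative
```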
Build a sentiment model from scratch
What you're describing is indeed the traditional approach for building a sentiment analysis system, so I'd say it looks like a reasonable approach to me. I'm not up to date with the sentiment analysis task at all, but I think it would be worth studying the state of the art for several reasons: - There might be more recent, better approaches - There might be datasets in the languages you're interested in, and if there is that could save you a lot of time. Check if there are any shared tasks about this, they often provide annotated datasets.
120894
1
120896
null
0
22
I couldn't find any answer to this question, but does the Python version really matter when you implement different ML models (like CatBoost or XGBoost) or NNs (like PyTorch)? I do know that these models were written in C++ and other programming languages, and my question may sound quite trivial, but does it really matter to upgrade the Python version to increase performance and use less memory?
Python version and performance in ML
CC BY-SA 4.0
null
2023-04-13T20:47:47.783
2023-04-13T21:23:02.337
null
null
143954
[ "machine-learning", "python", "neural-network", "memory" ]
The version of Python has minimal impact on performance relative to other factors (e.g., model architecture, framework choice, data size, and the number of epochs). Being concerned about the Python version is an example of premature optimization.
Efficient environment for machine or deep learning in Python
Try Kaggle; they have kernels with GPUs (free but limited to 9 hours of runtime, I think). Google Cloud is currently beta testing Jupyter notebooks on their infrastructure. You do not need to spawn your own compute instance, or even start a Docker container: you have your Jupyter kernel in your browser in a matter of minutes ([https://cloud.google.com/ai-platform-notebooks/](https://cloud.google.com/ai-platform-notebooks/)). Their runtime is unlimited, but if your connection is glitchy the notebook might disconnect and you can lose your data. As a rule of thumb, if your runtime is several hours, it might be better to prototype on a subset in a notebook and then train on your full dataset from the command line.
120948
1
120955
null
0
41
Can someone explain the dimensions in ASR to me? For example, if I have an audio clip and convert it to a mel spectrogram, I now have a tensor of dimension [1, 128, 850]. Am I right in understanding that 128 is the number of channels, and that if I apply a CNN to the input, the number of input channels will be equal to 128? And what is 850? For example, if I apply a transformer to the input mel spectrogram, is the number of embeddings that I pass to the encoder 850? Thank you in advance
Dimensions of mel spectrogram
CC BY-SA 4.0
null
2023-04-16T02:48:37.270
2023-04-16T09:08:13.207
2023-04-16T02:49:34.720
149008
149008
[ "deep-learning", "neural-network", "transformer", "audio-recognition" ]
In a mel-spectrogram there is only 1 channel (magnitude) and two spatial dimensions: n_mels mel bands and T time-frames (which depends on the length of the audio). The order of the dimensions in the mel spectrogram may vary based on exactly which implementation you used to generate it, which you have not specified. But as n_mels=128 is a common setting, and `(n_mels, time_frames)` is the standard order in librosa, I assume that 850 is the number of time-frames in your example. Note that the order used by Convolutional Neural Networks may also vary. For example, in [keras Conv2d](https://keras.io/api/layers/convolution_layers/convolution2d/) the default is data_format='channels_last', which means that your tensor should be `(batch_size, n_mels, time_frames, 1)` instead of `(batch_size, 1, n_mels, time_frames)`.
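For example (my addition), assuming librosa, the shape can be checked directly on a dummy one-second signal:

```python
import numpy as np
import librosa

# One second of a dummy 22.05 kHz signal
sr = 22050
y = np.random.randn(sr).astype(np.float32)

mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, hop_length=256)
print(mel.shape)  # (n_mels, time_frames), roughly (128, 87) for these settings

# For a channels-last CNN input this becomes (batch, n_mels, time_frames, 1)
x = mel[np.newaxis, :, :, np.newaxis]
print(x.shape)
```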
Process melspectrograms with convolutional neural network
A mel-spectrogram for 1 second audio files should have dimensions of about 43x128 (time x frequency bands), when using the default settings in librosa. So if you got a 640x480 JPG file something sounds horribly wrong. Perhaps you are using a plot of the results instead of using the mel-spectrogram data?
121033
1
121053
null
1
40
I have a multivariate dataset for binary classification. Each series in the dataset has 1000 rows with eight features. The following is a description of a sample series in the dataset. The values in the table below are random and do not represent the actual data. ``` |----------------|----------|----------|----------| ... |----------| | Time (seconds) | Feature1 | Feature2 | Feature3 | ... | Feature8 | |----------------|----------|----------|----------| ... |----------| | 1 | 100 | 157 | 321 | ... | 452 | |----------------|----------|----------|----------| ... |----------| | 2 | 97 | 123 | 323 | ... | 497 | |----------------|----------|----------|----------| ... |----------| | ... | ... | ... | ... | ... | ... | |----------------|----------|----------|----------| ... |----------| | 1000 | 234 | 391 | 46 | ... | 516 | |----------------|----------|----------|----------| ... |----------| ``` We can consider each row to be logged every second. The training dataset is completely available offline. At the time of deployment, the data will appear in real-time, i.e., after the first second, only one row will be available, two rows will be available after two seconds, and so on. Typically, time series models provide the classification output at the end of the entire sample time series. However, I want to generate classification outputs online and periodically. For example, a classification output at every n seconds. I checked multiple relevant blogs, and it seems a sliding-window based LSTM model may fit the purpose. However, there are concerns about overfitting with such a model, as discussed here: [Sliding window leads to overfitting in LSTM?](https://datascience.stackexchange.com/questions/27628/sliding-window-leads-to-overfitting-in-lstm) Since my training examples are also limited, I am looking for alternative solutions. For instance, currently, I have about 100 training examples with 1000 x 8 rows each, 50 in each class. What are some other approaches to solving the problem?
How to perform classification on time-series data in real-time, at periodic intervals?
CC BY-SA 4.0
null
2023-04-19T11:25:24.703
2023-04-20T12:33:02.483
2023-04-19T11:45:12.437
149096
149096
[ "classification", "time-series", "lstm" ]
Time-series sequence classification is a set of problems where the model takes in a sequence and spits out a classification for that sequence. This is essentially your basic use case. Sequence classification is ideal for when the complete sequence is available. In your case, you can do the training process on the complete sequences. Upon deployment, take the sequence up to that point and simply get the prediction for the shorter sequence. RNNs like LSTMs and GRUs can take sequences of any length. Solely training on complete sequences might, for obvious reasons, not necessarily be optimal. Therefore, during training, you can also feed 'partial' sequences to train on and simply predict the shorter sequences. This mimics the behaviour that you will later get during deployment.
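A rough sketch of that idea (my addition; shapes, names and hyper-parameters are hypothetical), assuming PyTorch: an LSTM classifier is trained on random prefixes as well as full sequences, and at deployment time it simply classifies whatever prefix is available so far:

```python
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    def __init__(self, n_features=8, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):        # x: (batch, time, features), any sequence length
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])  # one label per sequence

model = SeqClassifier()
loss_fn = nn.CrossEntropyLoss()
optim = torch.optim.Adam(model.parameters())

full_seq = torch.randn(4, 1000, 8)  # dummy mini-batch of complete series
labels = torch.randint(0, 2, (4,))

# Training step on a random prefix, so the model also learns to classify partial series
prefix_len = torch.randint(50, 1001, (1,)).item()
loss = loss_fn(model(full_seq[:, :prefix_len, :]), labels)
loss.backward()
optim.step()
optim.zero_grad()

# Deployment: every n seconds, classify the rows observed so far
observed = torch.randn(1, 120, 8)   # e.g. the first 120 seconds of one series
print(model(observed).softmax(dim=-1))
```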
Finding an appropriate binary classification algorithm for time series data intervals
You could do logistic regression combined with an additional moving average/ clustering step. Combine your data into training rest and active data into a single array `X`, and have an additional array `y` which would represent the labels e.g. 0 - active / 1 - resting for each row in your training dataset. ``` from sklearn.linear_model import LogisticRegression clf = LogisticRegression(random_state=0).fit(X, y) ``` Then predict for each step whether it is resting or active. The predictions of the test set would look something like: ``` [0,1,1,1,0,1,1,0,0,0,0,1,0,0,0] ``` The additional step would be to find the groups. You could do this with a moving window like "if 5 active states in a window of 7, label it as active". You could also solve that final step with clustering, e.g. "if 0 is surrounded by 1s it is 0", or find local clusters of binary values.
121059
1
121076
null
0
93
``` import torch.nn.functional as F logits = torch.Tensor([0, 1]) counts = logits.exp() probs = counts / counts.sum() # equivalent to softmax loss = F.cross_entropy(logits, probs) ``` Here, `loss` is roughly equal to `0.5822`. However, I would expect it to be `0`. If I understand the docs correctly, [torch.nn.functional.cross_entropy](https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html) can accept an array of logits and an array of probabilities as its input and target parameters, respectively (converted to `pytorch.Tensor`'s). I believe `probs` to be the true distribution, and that `F.cross_entropy` therefore should return 0. Why is `loss` not `0`?
Why is the calculated cross-entropy not zero?
CC BY-SA 4.0
null
2023-04-20T16:21:40.213
2023-04-21T12:33:46.640
2023-04-20T16:22:32.590
149160
149160
[ "pytorch", "cross-entropy" ]
Pytorch treats your logits as outputs that it will first convert to probabilities by running them through a softmax: ``` p_log = torch.log(F.softmax(logits, dim=0)) -torch.dot(p_log, probs) ``` tensor(0.5822) Note also that the cross-entropy between a distribution and itself equals the entropy of that distribution, which is only zero for a one-hot target; here the target `probs` equals softmax(logits), so the loss is exactly the entropy of `probs`, i.e. 0.5822. Some discussion here - [https://discuss.pytorch.org/t/why-does-crossentropyloss-include-the-softmax-function/4420](https://discuss.pytorch.org/t/why-does-crossentropyloss-include-the-softmax-function/4420). I think the naming could be clearer; for example, in tensorflow it is tf.nn.softmax_cross_entropy_with_logits.
How does binary cross entropy work?
- When doing logistic regression you start by calculating a bunch of probabilities $p_i$, and your target is to maximize the product of those probabilities (as they're considered independent events). The higher the result of the product, the better your model is. - As we are dealing with probabilities, we are multiplying numbers between 0 and 1; therefore, if you multiply a lot of those numbers you get smaller and smaller results. So we need a way to move from a product of probabilities to a sum of other numbers. - This is where the $\ln$ function enters into play. We can use some of its properties, such as $\ln(ab) = \ln(a) + \ln(b)$. When our prediction is perfect, i.e. 1, then $\ln(1) = 0$. Values lower than 1 give increasingly negative numbers, e.g. $\ln(0.9) \approx -0.1$ and $\ln(0.5) \approx -0.69$. - So we can move from maximizing the product of probabilities to minimizing the sum of the $-\ln$ of those probabilities. The resulting cross-entropy formula is then: $$ - \sum_{i=1}^m \left[ y_i \ln(p_i) + (1-y_i) \ln(1-p_i) \right] $$ - If $y_i$ is 1, the second term of the sum is 0; likewise, if $y_i$ is 0, then the first term goes away. - Intuitively, cross entropy says the following: if I have a bunch of events and a bunch of probabilities, how likely is it that those events happen, taking into account those probabilities? If it is likely, then the cross-entropy will be small; otherwise, it will be big.
121078
1
121188
null
0
26
I'm working on a neural network model to predict the outcomes of horse races. To date, I've built a model in R using a multinomial logit model (similar to a logit model but with N outcomes, where N = no. of horses in a race). I've also carefully read Andrew Trask's excellent Grokking Deep Learning book and learned the basics of PyTorch. I've now got to the point where I'm able to build a very simple neural network using only starting prices (i.e., odds for horses at start of race) as input and the net correctly works out that it should bet on the favourite. I'm using the architecture of 16 inputs (odds for up to 16 runners, set to zero if fewer than 16 runners in a race), 16 outputs (probability of horse winning a race), 1 hidden layer with $\sqrt{16 \times 16}$ nodes, a RELU activation function applied to the hidden layer, and a SOFTMAX activation function applied to the output layer. I apply argmax on the output layer to choose the winning horse. Based on my earlier analysis in R, betting on the favourite results in a win rate of 35.7% whereas betting on the horse chosen by my multinomial logit model results in a (lower) win rate of 23.4%, i.e., for now my model underperforms backing the favourite. I've been able to replicate the 35.7% figure using the neural network with the architecture described above (actually I undershoot this figure but I know how to change the architecture to exactly hit this figure). Surprisingly, however, when I swap out market price (which wouldn't really be available ahead of a race for betting purposes) and swap in exactly the same features I used in the multinomial logit model I manage to achieve a win rate of only about 17%, even if I train the model with 500 epochs. As I'm relatively new to the world of neural networks, I've no idea how to go about tweaking the architecture or hyperparameters of the neural network to improve its performance such that it's at least able to match the performance of the classical statistical model I built earlier. (I'm making the bold assumption that a neural network should be able to do at least as well as a classical statistical model, provided the net is architected correctly.) Any pointers would be greatly appreciated! (FYI, this is a personal project to help me learn deep learning, and not any commercial enterprise.) In the plot below, confused-dust and absurd-universe refer to versions of the model with market prices as sole inputs whereas comic-wood and true-resonance refer to versions using the same set of features as in the multinomial model. Thank you! [](https://i.stack.imgur.com/kUkPT.png)
Neural network architecture for multinomial logit model
CC BY-SA 4.0
null
2023-04-21T14:08:14.160
2023-04-27T13:56:17.883
2023-04-21T14:11:42.447
149182
149182
[ "neural-network", "logistic-regression" ]
I eventually arrived at the solution below, which can be used to replicate the result of a multinomial logit regression: ``` class ParsLin(nn.Module): """ Parsimonious version of Linear """ def __init__(self, input_layer_nodes, output_layer_nodes): super().__init__() # Check if output_layer_nodes is an integer multiple of input_layer_nodes if input_layer_nodes % output_layer_nodes != 0: raise ValueError("inputt_layer_nodes must be an integer multiple of output_layer_nodes") self.input_size = input_layer_nodes self.output_size = output_layer_nodes self.coefficient_size = input_layer_nodes // output_layer_nodes weights = torch.zeros(self.coefficient_size) self.weights = nn.Parameter(weights) # nn.Parameter is a Tensor that's a module parameter. # Xavier (Glorot) initialization nn.init.xavier_uniform_(self.weights.view(1, self.coefficient_size)) def forward(self, x): # Reshape races tensor to separate features and horses n = x.shape[0] reshaped_input = x.view(n, self.coefficient_size, self.output_size) # Transpose tensor to have each horse's features together transposed_input = reshaped_input.transpose(1, 2) # Multiply transposed tensor with coefficients tensor (broadcasted along last dimension) marginal_utilities = transposed_input * self.weights # Sum multiplied tensor along last dimension utilities = marginal_utilities.sum(dim=-1) return utilities class MLR(nn.Module): """ Parsimonious version of LinSoft intended to replicate a multinomial logit regression with alternative specific variables and generic coefficients only """ def __init__(self, input_layer_nodes, output_layer_nodes, bias=None): # bias is unused argument and will be ignored super().__init__() self.neural_network = nn.Sequential( ParsLin(input_layer_nodes, output_layer_nodes), nn.Softmax(dim=1) ) def forward(self, x): logits = self.neural_network(x) return logits ```
Neural network for Multiple integer output
welcome to the site! I think the key word you need to know that defines your task is: multi-target classification or regression. You can find an explanation and some possible techniques at this [link](https://towardsdatascience.com/regression-models-with-multiple-target-variables-8baa75aacd). For neural networks: The key is to remember that the last layer should have linear activations (i.e. no activation at all). As per your requirements, the shape of the input layer would be a vector (135,) and the output (132,). The usual loss function used for regression problems is mean squared error (MSE). Here's an example of multidimensional regression using Keras: ``` model = Sequential() model.add(Dense(200, input_shape=(135,))) model.add(Activation('relu')) model.add(Dense(200)) model.add(Activation('relu')) model.add(Dropout(0.3)) model.add(Dense(132)) model.compile(loss='mean_absolute_error', optimizer='Adam') ```
121086
1
121112
null
2
48
I'm having a hard time understanding the math behind GMMs. A GMM is a weighted sum over K different Gaussian components with parameters $\mu_k, \sigma_k, \pi_k$ From my understanding, the general overview is: - Initialize random parameters for each k'th component - Use pdf $p(x | \mu_k, \sigma_k) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ to compute a responsibility value for each component - Use this responsibility value to update parameters $\mu_k, \sigma_k, \pi_k$ My questions are: - When and where is Bayes Rule used in this? - When and how exactly does the log likelihood function play into this?
Math of Gaussian Mixture Models & EM Algorithm
CC BY-SA 4.0
null
2023-04-22T12:36:27.827
2023-04-24T04:18:33.087
null
null
149212
[ "clustering", "gaussian", "expectation-maximization", "gmm" ]
Great question! > When and where is Bayes Rule used in this? - Bayes Rule is used in the E-step of the Expectation-Maximization (EM) algorithm for GMMs. In the E-step, the algorithm computes the probability that each data point belongs to each component of the GMM. This is done using Bayes Rule to compute the posterior probability of each component given the data point. Specifically, Bayes Rule is used to compute the conditional probability of the data point given each component, which is then multiplied by the prior probability of the component to obtain the posterior probability. This step is sometimes referred to as the responsibility calculation since it computes the responsibility of each component for each data point. > When and how exactly does the log likelihood function play into this? - The log-likelihood function is used to evaluate the goodness of fit of the GMM to the data. In the M-step of the EM algorithm, the parameters of the GMM are updated to maximize the log-likelihood function. Specifically, the mean, variance, and weight parameters of each component are updated to maximize the expected complete data log-likelihood, which is the sum of the expected log-likelihood of each data point under the current GMM parameters. The EM algorithm iterates between the E-step and the M-step until convergence is reached. The math behind Gaussian Mixture Models (GMMs) can be a bit tricky to understand at first, but once you get the hang of it, it becomes quite intuitive. Bayes Rule is actually used in the initial step of the EM algorithm, which is used to estimate the parameters of the GMM. The EM algorithm is an iterative method that maximizes the likelihood function of the observed data given the GMM parameters. In the E-step of the EM algorithm, we compute the posterior probability of each data point belonging to each Gaussian component using the Bayes Rule. Bayes Rule states that: $$ P(A \mid B)=\frac{P(B \mid A) P(A)}{P(B)} $$ Specifically, we use the following formula: $$ p\left(z_k \mid x_n, \theta^{(t)}\right)=\frac{\pi_k^{(t)} \mathcal{N}\left(x_n \mid \mu_k^{(t)}, \sigma_k^{2(t)}\right)}{\sum_{j=1}^K \pi_j^{(t)} \mathcal{N}\left(x_n \mid \mu_j^{(t)}, \sigma_j^{2(t)}\right)} $$ where - $x_n$ is the $n^{th}$ data point, - $z_k$ is a binary random variable indicating whether $x_n$ belongs to the $k^{th}$ Gaussian component, - $\theta^{(t)}$ represents the current estimate of the GMM parameters, and - $\mathcal{N}(x_n|\mu_k^{(t)},\sigma_k^{2(t)})$ is the probability density function of a Gaussian distribution with mean $\mu_k^{(t)}$ and variance $\sigma_k^{2(t)}$ evaluated at $x_n$. This formula gives us the probability that each data point belongs to each Gaussian component given the current estimate of the GMM parameters. These probabilities are used to update the GMM parameters in the M-step of the algorithm. The log-likelihood function comes into play in the M-step of the EM algorithm. In this step, we update the GMM parameters to maximize the log-likelihood of the observed data. 
Specifically, we update the mean, variance, and weight parameters of each Gaussian component using the following formulas: $$ \begin{aligned} \mu_k^{(t+1)} & =\frac{\sum_{n=1}^N p\left(z_k \mid x_n, \theta^{(t)}\right) x_n}{\sum_{n=1}^N p\left(z_k \mid x_n, \theta^{(t)}\right)} \\ \sigma_k^{2(t+1)} & =\frac{\sum_{n=1}^N p\left(z_k \mid x_n, \theta^{(t)}\right)\left(x_n-\mu_k^{(t+1)}\right)^2}{\sum_{n=1}^N p\left(z_k \mid x_n, \theta^{(t)}\right)} \\ \pi_k^{(t+1)} & =\frac{\sum_{n=1}^N p\left(z_k \mid x_n, \theta^{(t)}\right)}{N} \end{aligned} $$ where $N$ is the number of data points. After updating the parameters, we compute the log-likelihood of the observed data using the following formula: $$ \ln p(X \mid \theta)=\sum_{n=1}^N \ln \sum_{k=1}^K \pi_k \mathcal{N}\left(x_n \mid \mu_k, \sigma_k^2\right) $$ The goal of the EM algorithm is to iteratively update the GMM parameters to maximize this log-likelihood. When the change in the log-likelihood between iterations is below a certain threshold, we consider the algorithm to have converged and return the final estimate of the GMM parameters. In simple words, the log-likelihood function comes into play during the iterative optimization process. The goal of GMM is to maximize the log-likelihood function with respect to the parameters $\mu_k, \sigma_k, \pi_k$. In other words, we want to find the parameters that make the observed data most likely. During each iteration, we compute the log-likelihood of the data given the current parameter values and then update the parameters to increase the likelihood of the data. This iterative process continues until convergence (i.e. when the change in log-likelihood between successive iterations falls below a certain threshold). I hope this helps clarify the use of Bayes Rule and log likelihood in GMMs!
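As a compact illustration (my addition, for a 1-D GMM with toy data), one way the E-step and M-step could be written with NumPy/SciPy:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 0.5, 200)])  # toy 1-D data
K = 2
mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])

for _ in range(50):
    # E-step: responsibilities p(z_k | x_n) via Bayes rule
    dens = np.stack([pi[k] * norm.pdf(x, mu[k], sigma[k]) for k in range(K)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: update parameters to maximise the expected complete-data log-likelihood
    Nk = resp.sum(axis=0)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    pi = Nk / len(x)

# ln p(X | theta) with the final parameters
dens = np.stack([pi[k] * norm.pdf(x, mu[k], sigma[k]) for k in range(K)], axis=1)
print(mu, sigma, pi, np.log(dens.sum(axis=1)).sum())
```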
Expectation Maximization Algorithm (EM) for Gaussian Mixture Models (GMMs)
One reason why you aren't getting fitted values close to the true values could be the initial values of the parameters used. It's likely that what you have found is a local maximum. You have to try a number of initial starts and then pick the one that gives the highest likelihood.
121094
1
121098
null
0
22
Let us consider the following code, which reads an image, applies the histogram equalization procedure and displays both results: ``` import cv2 import numpy as np img = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE) assert img is not None, "file could not be read, check with os.path.exists()" equ = cv2.equalizeHist(img) res = np.hstack((img,equ)) # stacking images side-by-side cv2.imshow("both image",res) cv2.waitKey(0) cv2.destroyAllWindows() ``` The result is: [](https://i.stack.imgur.com/0NrHz.png) We can see clearly that the contrast of the image was increased. But, as you know, almost every AI, machine learning or computer vision task is evaluated by some criterion; for example, in deep learning we have the notion of a loss function (like binary_crossentropy, mean_squared_error, etc.). So my question is: is there any measurement scale which would be suitable for the contrast increase? In other words: by how much, or how large a change, has the image been altered by histogram equalization, and is there any score for this? Sure, I can calculate the difference between the two images, as they have the same dimensions, and compute the square of the difference (therefore the squared distance between the two images), but is that correct? Maybe the difference between the two histograms (the difference between two distributions) would be the right thing to do? If so, could you please tell me how?
Special criteria for histogram equalization measure
CC BY-SA 4.0
null
2023-04-22T21:20:46.270
2023-04-23T13:53:14.750
null
null
149219
[ "python", "opencv" ]
There are multiple ways to do so. In the following, I will list some approaches (there will be a suggestion at the end if you cannot decide on an approach): ##### Types of differences There are 3 main types of differences I would like to start with: - Direct difference between the images (e.g. mean squared difference between pixels, as you mentioned) - Difference between the histograms - Difference between contrast measures. The problem with approach 1 is that there can be two images with the same difference to the original image, but one has increased and the other decreased contrast. Just imagine darkening dark pixels and brightening bright pixels in one image (this increases the contrast) and, on the other hand, brightening dark pixels and darkening bright pixels (this decreases the contrast). Both will have the same (absolute / squared) difference to the original image. Approach 2 gives you a global view of the images. The question is how to measure the difference between histograms (see below). The problem is that a different histogram does not necessarily mean that the contrast is different. Imagine a dark image with gray values between 0 and 127 (on a scale from 0 to 255). If you brighten the image (now it has values 128 to 255), the contrast does not change, just the brightness. Approach 3 allows you to directly compare contrasts. It has the potential to take into account the local distribution of pixels (yet, I will focus in the following on global contrast definitions). The key is how to measure contrast. ##### Difference between histograms There are different ways to compare histograms / distributions: - One could compute the bin-wise differences, - There is the Kullback-Leibler divergence - Another way is the earth-mover / Wasserstein distance. While the former two do not measure well by how much pixels are darkened / brightened, the earth-mover distance does exactly this. Imagine a binary image with pixels of gray value 0 or 1 (on the scale up to 255). One method to increase contrast leads to an image with values 0 and 5, another method produces values 0 and 255. Of the three mentioned methods, only the Wasserstein distance detects that the second method creates a larger improvement. Suggestion: Use the Wasserstein distance to compare histograms with respect to contrast. ##### How to measure the contrast of an image There exist different ways to measure the contrast of an image, e.g. - The Michelson contrast $$\frac{I_{max}-I_{min}}{I_{max}+I_{min}}$$ only considers the darkest and brightest values. - The RMS contrast measures the standard deviation of the gray values in the image: $$\sqrt{\frac{1}{NM}\sum_{i=1}^M\sum_{j=1}^N(I_{ij}-\bar{I})^2}$$ with $I_{ij}$ being a single pixel's value and $\bar{I}$ being the average gray value of the image. This takes into account all pixels of the image. ##### Suggestion I would start with the RMS contrast, but it depends a bit on what exactly you are interested in. ##### Outlook All discussed approaches deal with the global contrast. There are adaptive / local methods that act differently depending on the surroundings of a pixel. Imagine an image that has bright and dark areas. A slightly dark pixel might be brightened in dark areas, whereas a slightly dark pixel might be darkened in bright areas to increase contrast. To measure the contrast in such situations, one needs a contrast measure that takes into account the local contrast. RMS contrast over small areas might be a start for that.
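A small sketch of the suggested measures (my addition), assuming NumPy/SciPy and grayscale images as uint8 arrays such as `img` and `equ` from the question:

```python
import numpy as np
from scipy.stats import wasserstein_distance

def rms_contrast(img):
    """RMS contrast: standard deviation of the gray values."""
    return float(np.std(img.astype(np.float64)))

def michelson_contrast(img):
    i_max, i_min = float(img.max()), float(img.min())
    return (i_max - i_min) / (i_max + i_min) if (i_max + i_min) > 0 else 0.0

def histogram_emd(img_a, img_b):
    """Earth-mover (Wasserstein) distance between the two gray-value distributions."""
    return wasserstein_distance(img_a.ravel(), img_b.ravel())

# Stand-ins for the original and equalized images (img and equ in the question)
img = np.random.randint(60, 180, size=(100, 100), dtype=np.uint8)
equ = np.clip((img.astype(np.int32) - 120) * 2 + 120, 0, 255).astype(np.uint8)

print("RMS contrast:", rms_contrast(img), "->", rms_contrast(equ))
print("Michelson contrast:", michelson_contrast(img), "->", michelson_contrast(equ))
print("Earth-mover distance between histograms:", histogram_emd(img, equ))
```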
Histogram of some values only
Ok, after some digging around I found that I can pass a range = (1,100) and that does the trick.
121095
1
121424
null
0
26
We have a model with an output target that is a 2D tensor. That is, the output represents a set of n classes, evaluated at m bins within the data. That is the output shape is: `[none, n, m]`. Furthermore, the classes are highly imbalanced, so we need to use weights to balance the losses across the classes. For example, the target shape with three classes (A, B, and C), four bins and a batch size of 1 would be: ``` Target = [[[A1, A2, A3, A4], [B1, B2, B3, B4], [C1, C2, C3, C4]]] ``` I've researched the use of Class-weights with Keras and it is extensively discussed, but 99% of the time the output is a class vector with a single target result for each class. In our case, we have a target array for each class. I would hypothesize that we could use a possible weight tensor such as this: ``` Weights = [[Wa, Wa, Wa, Wa], [Wb, Wb, Wb, Wb], [Wc, Wc, Wc, Wc]] ``` Where a constant weight value is applied to each class for each of its bins. But I'm unsure if the shape should be `[n, m]` or `[none, n, m]` or `[b, n, m]` (where b is the batch size). I'm also unsure if this is considered a `loss_weight` or a `class_weight`. I've looked at using `loss_weights`, `class_weights` and `weight_metrics` but the documentation is thin for non-vector outputs. My question: how does one apply weights to a output tensor? EDIT 1: I'm starting to think that the `samples_weights` option may be the best approach here. Here is a [discussion](https://twitter.com/fchollet/status/1471067209569087499?lang=en) by Francois Chollett offering his thoughts.
Defining loss weights with a target tensor
CC BY-SA 4.0
null
2023-04-23T00:41:31.207
2023-05-09T22:17:09.063
2023-04-23T06:12:47.463
30665
30665
[ "keras", "tensorflow" ]
Following the recommendation from @Adam we went ahead and built a custom loss function to accept sample-weights. Because we are using a dataset (`tf.data`) pipeline, we append the sample-weights tensor to the training dataset only, resulting in a three-tuple of: (InputTensor, TargetTensor, WeightTensor). For the test/val dataset, we do not append the weights, resulting in a two-tuple of: (InputTensor, TargetTensor). The sample-weights tensor is constructed as listed above in the question: ``` Weights = [[Wa, Wa, Wa, Wa], [Wb, Wb, Wb, Wb], [Wc, Wc, Wc, Wc]] ``` The loss function is: ``` class WeightedCategoricalCrossentropySimple(tf.keras.losses.Loss): def __init__(self, y_true, y_pred, **kwargs): # Note: these variables are NOT used, but there is a bug in TF that calls the constructor incorrectly with these values; we need to consume them but don't need to use them super().__init__() def call(self, y_true, y_pred, sample_weight = None): # Compute cross-entropy loss for each sample in batch loss_per_sample = tf.keras.losses.categorical_crossentropy(y_true, y_pred) # Apply weighting to each sample in batch, but only if present (compare against None; truth-testing a tensor is ambiguous) if sample_weight is not None: loss_per_sample = tf.multiply(loss_per_sample, sample_weight) loss = tf.reduce_mean(loss_per_sample) return loss ``` The dataset pipeline creation code adds one line to apply the weights to the pipelines (if present) just before batching the tensors: ``` def load_tfrecord_dataset(... weights = None # type: np.array ): dataset = tf.data.Dataset.list_files(file_pattern) dataset = dataset.interleave( lambda tfr: tf.data.TFRecordDataset(tfr, compression_type="GZIP"), num_parallel_calls=tf.data.AUTOTUNE) dataset = dataset.shuffle(buffer_size=buffer_size).repeat(1) dataset = dataset.map(_parse_tfrecord_single_class, num_parallel_calls=5) # Appends the weight matrix as a third parameter, but only if present dataset = dataset.map(lambda x, y: append_weights(x, y, weights)) dataset = dataset.batch(batch_size=batch_size, num_parallel_calls=tf.data.AUTOTUNE, drop_remainder=True) dataset = dataset.prefetch(buffer_size=tf.data.experimental.AUTOTUNE) return dataset ``` Where `append_weights()` is defined as: ``` def append_weights(x, y, weights): if weights is not None: return x, y, weights else: return x, y ```
How do I perform weighted loss in multiple outputs on a same model in Tensorflow?
As I understand it, you want to have a model trained on multiple "tasks". If so, this is what it could look like (note that `loss_weights` belongs in `compile()`, not in the `Model` constructor): ``` input_data = <your input sequence input> output_data = <array of size (N, 3)> out1_weight, out2_weight, out3_weight = <your weights for each output to adjust loss contribution> input = Input() simple_rnn = SimpleRNN()(input) # Define the three outputs out1 = Dense(1)(simple_rnn) out2 = Dense(1)(simple_rnn) out3 = Dense(1)(simple_rnn) model = Model(inputs=[input], outputs=[out1, out2, out3]) model.compile(loss_weights=[out1_weight, out2_weight, out3_weight], <whatever params>) model.fit(input_data, [output_data[:, 0], output_data[:, 1], output_data[:, 2]]) ```
121114
1
121160
null
0
22
I am training an XGBoost-model on part of the ForestCover-Dataset. Then I save the trained model to json. Now I load the model and "update" the saved model with the data, that I previously omitted in training, using the "xgb_model" parameter of the train-method. I also save this "updated model" to json. What XGBoost does here is it simply adds trees to the old model. What I want to do now is manipulate the updated model. My first idea was just to delete the "old part" of the model and see how only the new part does. So I manipulated the json-file and deleted the old trees, changed the tree_info parameter, changed the num_trees parameter and changed the IDs of all remaining trees to start at 0 again. Then I attempt to load this manipulated json-file as a new model. However, when trying to do so I get the following error: ``` XGBoostError Traceback (most recent call last) Cell In[22], line 2 1 bst_test = xgb.Booster() ----> 2 bst_test.load_model("modified.json") File ~/personal/folder/lib/python3.10/site-packages/xgboost/core.py:2441, in Booster.load_model(self, fname) 2437 if isinstance(fname, (str, os.PathLike)): 2438 # assume file name, cannot use os.path.exist to check, file can be 2439 # from URL. 2440 fname = os.fspath(os.path.expanduser(fname)) -> 2441 _check_call(_LIB.XGBoosterLoadModel( 2442 self.handle, c_str(fname))) 2443 elif isinstance(fname, bytearray): 2444 buf = fname File ~/personal/folder/lib/python3.10/site-packages/xgboost/core.py:279, in _check_call(ret) 268 """Check the return value of C API call 269 270 This function will raise exception when error occurs. (...) 276 return value from API calls 277 """ 278 if ret != 0: --> 279 raise XGBoostError(py_str(_LIB.XGBGetLastError())) XGBoostError: [11:25:21] ../include/xgboost/json.h:81: Invalid cast, from Integer to String ``` Can anyone help me understand what is going on here?
Deleting part of saved XGBoost-Model (JSON) and reloading it
CC BY-SA 4.0
null
2023-04-24T10:36:22.920
2023-04-26T14:50:58.193
null
null
149253
[ "machine-learning", "xgboost", "data-science-model", "json" ]
In case anyone stumbles across a similar issue: the parameter "num_trees" is stored in the JSON as a string, not an integer. I had written it back as an integer, which is what caused the "Invalid cast, from Integer to String" error; keeping it as a string fixes the load.
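For illustration, here is a hedged sketch of how the JSON can be edited in Python while keeping `num_trees` a string (the exact key paths are assumptions based on the current gbtree JSON layout and may differ between XGBoost versions; `n_old_trees` is a hypothetical count of the trees you want to drop):

```python
import json

with open("updated_model.json") as f:
    model = json.load(f)

booster = model["learner"]["gradient_booster"]["model"]
n_old_trees = 100                          # hypothetical: number of original trees to remove
new_trees = booster["trees"][n_old_trees:]
for new_id, tree in enumerate(new_trees):
    tree["id"] = new_id                    # re-index the remaining trees from 0
booster["trees"] = new_trees
booster["tree_info"] = booster["tree_info"][n_old_trees:]
# the crucial detail: num_trees must stay a JSON string, not an integer
booster["gbtree_model_param"]["num_trees"] = str(len(new_trees))

with open("modified.json", "w") as f:
    json.dump(model, f)
```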
Can I fine tune the xgboost model instead of re-training it?
I see that in the current version of python wrapper of xgboost you can specify file name or existing xgboost model (class Booster) in train function.
121120
1
121142
null
0
37
From what I know, AdaBoost works by concatenating a weak classifier (usually a one-level decision tree) to the previous linear combination of other weak classifiers to improve its accuracy after each iteration. In other words, at each iteration m, it'll choose $\alpha_{m}$ and $h_{m}$ such that the error function $E = \sum_{i=1}^{N}e^{-y_{i}C^{(m)}(x_i)}$ is minimised, where: $$C_m = C_{m-1} + \alpha_{m}h_{m}$$ My question is: Okay, it minimizes the error function $E$ for $C_{m}$, but how does that compare to the error of $C_{m-1}$? How do you make sure it'll do better each time? In other words, what's the mathematics that tells us $C_m$ will do "slightly better"? My thinking process is: $E$ considers all samples, so minimizing $E$ also means that it will: - Try hard on the large-weight samples by adjusting $\alpha_{m}$ and $h_{m}$. - But also try its best to keep $C_m$ as close to $C_{m - 1}$ as possible. This is easy to visualise since $C_m = C_{m-1} + \alpha_{m}h_{m}$. It's not quite mathematical! My reference: [AdaBoost-derivation](https://en.wikipedia.org/wiki/AdaBoost)
How does Adaboost reassure us that It'll do better after each iteration?
CC BY-SA 4.0
null
2023-04-24T15:34:22.670
2023-04-25T17:40:20.293
2023-04-24T19:51:34.363
149261
149261
[ "classification", "adaboost" ]
Good question! You are correct that AdaBoost works by iteratively adding weak classifiers to the overall model to improve its accuracy. The key to understanding how AdaBoost ensures that each new classifier does better than the previous one lies in the fact that it assigns a weight to each training example, which is updated after each iteration. Each iteration $m$ adjusts the weights of the misclassified samples from the previous iteration. The idea is that the next weak classifier should focus on the samples that were misclassified by the previous weak classifiers, so that the overall classification error decreases. At the start of the algorithm, all weights are set to $w_i = \frac{1}{N}$, where $N$ is the number of training examples. After each iteration, the weights are updated based on whether the classifier correctly classified each example or not. Specifically, the weights of the misclassified examples are increased, while the weights of the correctly classified examples are decreased. This way, the misclassified examples receive more attention in the next iteration, allowing the classifier to focus on these examples and improve its performance on them. At the end of each iteration, the weights are normalized so that they sum to 1. The resulting weight vector is used to train the next weak classifier, with the aim of minimizing the weighted training error. The weight vector also determines the importance of the weak classifier in the final model: classifiers that perform well on the high-weight examples are given more weight in the final model. More precisely, let $D_m$ be the weight distribution over the training samples at iteration $m$. Initially, $D_1(i) = 1/N$ for all $i$. Let $h_m$ be the weak classifier at iteration $m$ that minimizes the weighted error on the training set: $$ h_m=\operatorname{argmin}_{h \in \mathcal{H}} \sum_{i=1}^N D_m(i) \cdot\left[y_i \neq h\left(x_i\right)\right] $$ where $\mathcal{H}$ is the set of weak classifiers (e.g., decision stumps). The coefficient $\alpha_m$ is then chosen to minimize the exponential loss: $$ \alpha_m=\frac{1}{2} \ln \left(\frac{1-\epsilon_m}{\epsilon_m}\right) $$ where $\epsilon_m$ is the weighted error of $h_m$: $$ \epsilon_m=\sum_{i=1}^N D_m(i) \cdot\left[y_i \neq h_m\left(x_i\right)\right] $$ Then, the weight distribution is updated as follows: $$ D_{m+1}(i)=\frac{D_m(i) \cdot \exp \left(-\alpha_m y_i h_m\left(x_i\right)\right)}{Z_m} $$ where $Z_m$ is a normalization constant. The intuition is that if $y_i h_m(x_i) > 0$, then the sample $x_i$ was classified correctly by $h_m$, and its weight $D_m(i)$ should be decreased. Otherwise, if $y_i h_m(x_i) < 0$, then $x_i$ was misclassified, and its weight should be increased. In other words, the weight distribution $D_m$ is biased towards the misclassified samples. Finally, the weak classifier $h_m$ is combined with the previous weak classifiers to form the final strong classifier $C_M$: $$ C_M(x)=\operatorname{sign}\left(\sum_{m=1}^M \alpha_m h_m(x)\right) $$ So, to answer your question, AdaBoost reassures us that it will do better after each iteration because it adjusts the weight distribution $D_m$ so that the next weak classifier $h_{m+1}$ focuses on the misclassified samples from the previous iteration. This should lead to a reduction in the training error $E_m$ of $C_m$ compared to $C_{m-1}$. 
Additionally, the coefficient $\alpha_m$ is chosen to give more weight to the more accurate classifiers, which helps to further improve the performance of the final strong classifier $C_M$.
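For readers who prefer code to formulas, here is a minimal sketch of a single boosting round with NumPy and a decision stump (labels are assumed to be in {-1, +1}, and the degenerate case $\epsilon_m = 0$ is ignored):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_round(X, y, D):
    """One AdaBoost iteration; y in {-1, +1}, D the current sample-weight distribution."""
    stump = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=D)
    pred = stump.predict(X)
    eps = np.sum(D[pred != y])                 # weighted error of h_m
    alpha = 0.5 * np.log((1 - eps) / eps)      # classifier coefficient alpha_m
    D = D * np.exp(-alpha * y * pred)          # up-weight misclassified samples
    return stump, alpha, D / D.sum()           # normalize by Z_m
```

Running this repeatedly, starting from D = np.full(len(y), 1 / len(y)), reproduces the update equations above and makes it easy to print the training error of the accumulated ensemble after every round.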
Why would removing a variable in adaboost decrease error rate?
Imagine that one of the columns is just random data -- then it's not informative at all, so no classifier will be improved by including it. However, `ada`'s stochastic boosting implementations will always have some chance of including that variable in the classifier it generates. As a result, removing it has the potential to improve the classifiers generated. (In your case, you might check whether that variable is part of the final model generated.)
121121
1
121122
null
0
28
I am trying to fine-tune a BERT model for sentiment analysis. Instead of one sentence, my inputs are documents (consisting of several sentences) and I am not removing the full stops. I was wondering if it is okay to use just the embedding of the first token in such cases. If not, what should I do?
Bert model for document sentiment classification
CC BY-SA 4.0
null
2023-04-24T16:43:35.873
2023-04-24T17:54:41.287
null
null
134776
[ "deep-learning", "nlp", "transformer", "bert", "sentiment-analysis" ]
Yes, it's perfectly fine to fine-tune BERT on sequences composed of more than one sentence, and the standard way of using BERT for text classification is with the output vector at the first position (the [CLS] token). However, take into account that the maximum length of BERT's input sequences is 512 tokens, so your documents should be short enough to fit within that limit.
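As a hedged sketch with Hugging Face Transformers (the checkpoint name, number of labels, and example labels are placeholders, not something prescribed by BERT itself), truncating each document to 512 tokens and letting the sequence-classification head use the first-position [CLS] vector for you:

```python
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

docs = ["First document. It has several sentences.", "Another multi-sentence document."]
labels = torch.tensor([1, 0])

# pad/truncate whole documents to at most 512 tokens
enc = tokenizer(docs, padding=True, truncation=True, max_length=512, return_tensors="pt")
outputs = model(**enc, labels=labels)   # outputs.loss for fine-tuning, outputs.logits for predictions
```

If many of your documents are longer than 512 tokens, a common workaround is to split them into chunks, classify each chunk, and aggregate the chunk-level predictions.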
Limitations of NLP BERT model for sentiment analysis
BERT is pre-trained on two generic tasks: masked language modeling and next sentence prediction. Therefore, those tasks are the only things it can do. If you want to use it for any other thing, it needs to be fine-tuned on the specific task you want it to do, and, therefore, you need training data, either coming from human annotations or from any other source you deem appropriate. The point of fine-tuning BERT instead of training a model from scratch is that the final performance is probably going to be better with BERT. This is because the weights learned during the pre-training of BERT serve as a good starting point for the model to accomplish typical downstream NLP tasks like sentiment classification. In the article that you referenced, the authors describe that they fine-tune [a Chinese BERT model](https://huggingface.co/hfl/chinese-bert-wwm-ext) on their human-annotated data multiple times separately: - To classify whether a Weibo post refers to COVID-19 or not. - To classify whether posts contained criticism or support. - To identify posts containing criticism directed at the government or not. - To identify posts containing support directed at the government or not. Fine-tuning BERT usually gives better results than just training a model from scratch because BERT was trained on a very large dataset. This makes the internal text representations computed by BERT more robust to infrequent text patterns that would be hardly present in a smaller training set. Also, dictionary-based sentiment analysis tends to give worse results than fine-tuning BERT because a dictionary-based approach would hardly grasp the nuances of language, where not only does a "not" change all the meaning of a sentence, but any grammatical construction can give subtle meaning changes.
121140
1
121143
null
0
25
Similar to [this](https://datascience.stackexchange.com/questions/64631/why-rnns-necessary-for-time-series) question, but I would like further clarification. I understand that, in the abstract, RNNs can process inputs recursively and feed some state of memory through the recursion to have a sense of context and order. However, why can a normal NN not achieve this? The input vector is inherently ordered. For example, in language modelling, one might define a length for each token and input a series of these tokens into the standard NN, and the NN could work out by itself that these are ordered and infer context in order to output its best prediction of the next token. Is the benefit that the input is fed 1 token at a time, so the RNN needs less complexity? Or is there something about RNNs that a normal NN simply cannot achieve? Or are they just more effective at interpreting the ordered nature of the input? If so, why? I suppose I could generalise the question to: why do we need any kind of specific NN? Can a normal NN not approximate any function? Surely it could therefore learn any behaviour that some specific kind of NN exhibits?
Why is a RNN inherently better for Time series than normal NN?
CC BY-SA 4.0
null
2023-04-25T16:36:57.337
2023-04-25T17:46:37.800
null
null
149300
[ "neural-network", "time-series", "rnn", "beginner" ]
Certainly, the original language model by [Bengio et al, 2003](https://jmlr.org/papers/volume3/tmp/bengio03a.pdf) worked with "normal NNs". However, they worked by simply concatenating word embeddings and then applying the transformation $y = b + Wx + U \mathsf{tanh}(d + Hx)$. This kind of language model presents some problems: - Scalability: the longer you want the context window to be, the larger the matrix multiplications you need. - Training efficiency: you cannot train for all the output words of a sequence, that is, you can only train one output word at a time. Therefore, each sequence in the training data leads to multiple training data points (one per each possible location of the context window within the sequence). - Order-dependent representations (lack of generalization): the representations learned for a word appearing at a specific position within the context window cannot be generalized to other positions. Modern language models are all RNNs or Transformers, which certainly don't have the aforementioned problems (although Transformers do have problems scaling the context window due to the quadratic memory requirements of attention).
Why are RNN/LSTM preferred in time series analysis and not other NN?
I'll try to provide some insight which will hopefully help. - Can a normal NN model the time connections the same way like a RNN/LSTM does when it is just deep enough? Every neural net gets better in theory if it gets deeper. For a regular NN to model time connections properly, you could use the last n time steps as your input and the n+1th time step as your target. This will generate your training set and depending on your data, you could be able to model your time series fairly efficiently. All of the most obvious pitfalls of this approach are actually addressed by RNNs/LSTM. - Does an RNN need more or less data in comparison to a NN to get the same/ better results? Difficult question, which would probably require some empirical results to check that theory. Also, with neural nets, sometimes, it's more about how fast it trains rather than how much training data it has that will make the biggest difference in performance. To me, the main difference is that your regular NN will need a fixed-size input, whereas your RNN will be able to learn with input "up to" a certain size, which can be a big advantage to model the entire time series well. - Are there time series where normal NN or RNN/LSTM perform better? Again, this is a difficult question as it will depend on the data, the architecture of the networks, the training time etc. I haven't done empirical research on this, but I'd say that your best guess would be time series that are rather short (less than 100 time steps for instance). If the time series is short, you might not need to model such an intricate relationship through time, which a regular NN could perhaps do as well as an RNN. - Is it time data depended which Model will perform the best or are there some guidelines? As explained above, every learning exercise will highly depend on the architecture and hyperparameter used, even the initialization of your weights, whether you use pre-training or not, and of course, your data. Again, I think that the shortest the time series, the more competitive regular NNs could be. But again, this is merely intuition-based and hasn't been checked thoroughly. - Can the NN behavior be understood better than the RNN behavior? It all depends on what you mean by understanding. RNN have more weights than NN so ultimately, there will be more things to analyze and eventually understand. But, with more data also comes more information, so perhaps there is more information to be gained from an RNN than a simple NN, even if it's a deep one. Plus, sometimes, based on the initialization of the weights and other parameters, the interpretation of the model could vary, even if it's trained on the same data.
121147
1
121150
null
0
75
The Problem I'm learning ML these days. I'm training a dataset with 10,000+ samples and 20+ features; the model I picked to train was logistic regression. I have some problems in my mind, "unclear areas", which are making my brain boil. So I'm trying to clear those up and keep on learning what I can, but I couldn't find where to really start solving them, so I decided to get a little community support, at least one or two references that will resolve where I'm stuck. Explanation of what I tried and what this "Problem" is So, before I start explaining, I will tell you my settings so far: - Logistic regression - Used as a classification problem - Binary classification (1, 0) - 30% of the data held out for testing - The data are well balanced, with an equal number of samples per label (binary classification) - Number of features selected: 14 Current workaround that I did: - I trained the above-mentioned model with the above settings, with the support of scikit-learn, and in the accuracy_score() function with normalize=True set, I got a performance value of 1 (the best possible performance according to the function's documentation). Back to the problem explanation, I have these questions in my head right now: - Am I doing this correctly? I mean, for logistic regression with 14 features (X[0..13]) and 2 class labels (y[0..1])? - Most importantly, I want to interpret this in a plot; where can I learn to do that with 14 features? I mean, even if I had, say, 6 features, how would I go about it? - I want to check this on decision boundaries (test and training decision boundaries); how can I do that, given that I have more than 2 features? - How can I further verify whether this is an underfit or an overfit? Thank you for reading; I hope you will provide your answer with care.
Clarifying some unclear Areas of model training, python, Machine Learning
CC BY-SA 4.0
null
2023-04-26T03:30:44.940
2023-04-26T08:47:52.520
2023-04-26T08:47:52.520
95811
95811
[ "machine-learning", "python", "scikit-learn", "machine-learning-model" ]
Do you have a single class and you're trying to predict whether or not the input is an instance of it? In this case, you're doing binary classification with logistic regression, though you only need 1 output: your model would predict the probability that the input belongs to the class, and will vary between 0 and 1. If the output of the model is >= 0.5, then the input is predicted to belong to the class, otherwise no. If you have two or more classes, then you need two or more outputs (i.e. your y[]s) and you would want to do softmax regression where your model predicts the probability of each class and then you take the predicted class with the highest probability. For visualizing, you want to do dimensionality reduction. There are several ways to do this and it's a large subject in itself, but probably the easiest way to start is to project your 14 dimensions down to 2 and plot them. Scikit-learn has a PCA class which makes this straightforward: ``` pca = PCA(n_components=2) X_projected = pca.fit_transform(X_in) ``` To check for overfitting or underfitting, you generally want to separate out some percentage of your data for testing purposes only, not used for training/fitting the model. If I understand correctly, you have already done this with 30% of your data? Then you check the ability of your model to predict the class of the test data it hasn't seen. Overfitting models will perform much better on training data vs. test data. hth.
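Here is a small hedged sketch of both steps (all variable names such as X_in, y, model, X_train, X_test are placeholders for your own objects from the question):

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score

# 1) visualize: project the 14 features down to 2 and color by class
X_2d = PCA(n_components=2).fit_transform(X_in)
plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y, s=10)
plt.xlabel("PC 1"); plt.ylabel("PC 2"); plt.show()

# 2) over/underfitting check: compare scores on seen vs. unseen data
train_acc = accuracy_score(y_train, model.predict(X_train))
test_acc = accuracy_score(y_test, model.predict(X_test))
print(f"train={train_acc:.3f}  test={test_acc:.3f}")
# train far above test -> overfitting; both low -> underfitting
```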
Python Machine Learning Experts
You could try some competitions from [kaggle](http://kaggle.com). Data Science courses from Coursera, edX, etc also provide forums for discussion. Linkedin or freelance sites could be other possibilities.
121185
1
121189
null
0
20
As per PyTorch documentation CrossEntropyLoss() is a combination of LogSoftMax() and NLLLoss() function. However, calling CrossEntropyLoss() gives different results compared to calling LogSoftMax() and NLLLoss() as seen from the output of the given code. What could be causing different results here ? > Cross Entropy from PyTorch: tensor(2.3573) Cross Entropy from Manual_PyTorch_NNLoss: tensor(1.0137) ``` def CrossEntropyPyTorch(values, actualProb): tensorValues = torch.FloatTensor(values) tensorActualProb = torch.FloatTensor(actualProb) criterion = nn.CrossEntropyLoss() #LogSoftMax + NNLoss loss = criterion(tensorValues, tensorActualProb) return loss def CrossEntropyManual_PyTorch_NNLoss(values, actualProb): tensor = torch.FloatTensor(values) tensorValues = nn.LogSoftmax()(tensor) #Apply NNLoss criterion = nn.NLLLoss() tensorActualProb = torch.LongTensor(actualProb) loss = criterion(tensorValues, tensorActualProb) return loss if __name__ == '__main__': values = [-.03, .4, .5] actualProb = [1,0,1] print("Cross Entropy from PyTorch:",CrossEntropyPyTorch(values,actualProb)) print("Cross Entropy from Manual_PyTorch_NNLoss:",CrossEntropyManual_PyTorch_NNLoss(values,actualProb)) ```
PyTorch CrossEntropyLoss and Log_SoftMAx + NLLLoss give different results
CC BY-SA 4.0
null
2023-04-27T13:19:39.093
2023-04-27T16:40:29.087
null
null
149366
[ "pytorch", "loss-function", "softmax", "cross-entropy" ]
There are a few of problems here: - actualProb is not a valid categorical probability distribution because the values don't add up to 1. - You are converting probabilities to integers by invoking torch.LongTensor. - nn.NLLLoss is meant to receive the class indices, not the probabilities (I guess that's why you used torch.LongTensor). - While CrossEntropyLoss accepts both probabilities and class indices, its documentation specifies that it is only equivalent to LogSoftMax and nn.NLLLoss for the case of indices. Here is an amended example: ``` import torch from torch import nn def CrossEntropyPyTorch(values, actualClass): tensorValues = torch.FloatTensor(values) tensorActualClass = torch.LongTensor(actualClass) criterion = nn.CrossEntropyLoss() #LogSoftMax + NNLoss loss = criterion(tensorValues, tensorActualClass) return loss def CrossEntropyManual_PyTorch_NNLoss(values, actualClass): tensor = torch.FloatTensor(values) tensorValues = nn.LogSoftmax()(tensor) #Apply NNLoss criterion = nn.NLLLoss() tensorActualClass = torch.LongTensor(actualClass) loss = criterion(tensorValues, tensorActualClass) return loss if __name__ == '__main__': values = [[-.03, .4, .5]] actualClass = [2] # the correct option is the second class print("Cross Entropy from PyTorch:", CrossEntropyPyTorch(values, actualClass)) print("Cross Entropy from Manual_PyTorch_NNLoss:",CrossEntropyManual_PyTorch_NNLoss(values, actualClass)) ``` It's output is: ``` Cross Entropy from PyTorch: tensor(0.9137) Cross Entropy from Manual_PyTorch_NNLoss: tensor(0.9137) ```
Difference between mathematical and Tensorflow implementation of Softmax Crossentropy with logit
As I understand it, the softmax function for $z_i$ is given by $a_i$. Then just taking the loss you've defined you get back exactly the formula that is implemented. The way it is written down however is, as you mentioned, to avoid underflow/overflow. For instance, suppose you want to compute the following: $A=\log(\sum_{i=1}^{4}\exp(z_i))$, with $z_i=(-1000.5,-2000.5,-3000.5,-4000.5)$ Clearly, if you just type in the formula directly, you will get an underflow error. Instead if you isolate the main contribution in the exponential by taking the $\max(z_i)$, the same formula can be written as: $A=\max_i(z_i)+\log(\sum_{i=1}^{4}\exp(z_i-\max_i(z_i)))$ The difference now is that the expression is "numerically stable" and we see that $A\approx -1000.5$. Thus, let's make the softmax numerically stable: \begin{align} \log(a_i)&=z_i-\log(\sum_j e^{z_j})\\ &=z_i-\max_j(z_j)-\log(\sum_je^{z_j-\max_j(z_j)}) \end{align} which is the expression that is implemented for the loss (just multiply by $y_i$ and sum over $i$).
121200
1
121201
null
0
74
I have text with each line in the following format: ``` <text-1> some text-1 <text-2> some text-2 <text-3> some text-3 ``` I want to fine-tune a model to learn to generate `some text-3` after reading `some text-1` and `some text-2`. In T5 text generation tutorials, we do specify `input_ids` for the target text, i.e. labels, but in GPT-2 tutorials we don't. For example, in [this T5 text generation tutorial](https://medium.com/nlplanet/a-full-guide-to-finetuning-t5-for-text2text-and-building-a-demo-with-streamlit-c72009631887), we can find the line: ``` model_inputs["labels"] = labels["input_ids"] ``` But I could not find any such line in these GPT2 text generation examples: - huggingtweets demo, - huggingartists demo - Finetune GPT2 for text generation
Passing target text to gpt2 and T5 for fine tuning to learn text generation task
CC BY-SA 4.0
null
2023-04-27T20:33:02.490
2023-04-27T23:59:40.333
null
null
107895
[ "nlp", "language-model", "gpt", "huggingface", "t5" ]
Note that: - In your first and second GPT-2 links, the logic to feed data to the model is handled by the Trainer class, that's why they don't need to explicitly prepare the input and output data and give it to the model. - In your third GPT-2 link, you can find the place where the expected output (i.e. labels) is passed to the model (which internally shifts them to meet the actual expectations of the Transformer decoder): outputs = model(input_tensor, labels=input_tensor) Each implementation is different, even for the same model. Looking for known structures in the code is usually effective, but sometimes it is not, and you need to actually dive into the code to understand what it does.
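For concreteness, here is a minimal hedged sketch of how the expected output is typically passed when fine-tuning GPT-2 with Hugging Face outside of the Trainer (the "gpt2" checkpoint is just a placeholder; the causal-LM head shifts the labels internally):

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

text = "<text-1> some text-1 <text-2> some text-2 <text-3> some text-3"
enc = tokenizer(text, return_tensors="pt")

# for causal language modeling the labels are the input ids themselves;
# the model shifts them right internally before computing the loss
outputs = model(input_ids=enc["input_ids"], labels=enc["input_ids"])
outputs.loss.backward()   # an optimizer step would follow in a real training loop
```

If you only want the loss to be computed on the `some text-3` part, you can set the label positions of the prompt tokens to -100 so they are ignored by the loss.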
Text Generation
Text generation can be done in JavaScript with RNN/LSTM. For example, TensorFlow.js is a JavaScript implementation of TensorFlow. Since the dataset is very small (25k words), model can be run in JS as well. Following is an example of text generation in JS : [https://github.com/reiinakano/tfjs-lstm-text-generation](https://github.com/reiinakano/tfjs-lstm-text-generation)
121224
1
121279
null
0
45
My testing accuracy is way higher than my training accuracy. I have used feature selection and split the data into training, validation and test sets. ``` anova_filter = SelectKBest(f_classif, k=4) rng = np.random.rand X_train, X_val, Y_train, Y_val = train_test_split(X, Y, test_size = 0.40, shuffle = False, random_state = rng) X_val, X_test, Y_val, Y_test = train_test_split(X_val, Y_val, test_size = 0.50, shuffle = False, random_state =rng) #fitting the dataset anova_svm.fit(X_train, Y_train) #Predicting Values Y_pred = anova_svm.predict(X_val) X_train_pred = anova_svm.predict(X_train) training_data_accuracy = accuracy_score(Y_train, X_train_pred) testing_data_accuracy = accuracy_score(Y_val, Y_pred) ``` [](https://i.stack.imgur.com/bnaFE.png)
Testing accuracy is higher than training accuracy
CC BY-SA 4.0
null
2023-04-29T10:12:52.473
2023-05-02T11:22:52.780
2023-05-02T11:09:05.183
149424
149424
[ "machine-learning", "python", "classification", "feature-selection", "accuracy" ]
Your testing dataset is strongly imbalanced. You have 82 samples in the positive class and only 3 samples in the negative class. By simply guessing "everything positive" your model would achieve 96.5% accuracy. This is a common problem in unbalanced datasets. I don't know what your data is exactly, so it is difficult to make a precise suggestion as to what you should change, but calculating the [Balanced Accuracy](https://en.wikipedia.org/wiki/Confusion_matrix#Table_of_confusion), which is the accuracy of the individual classes weighted equally instead of by their contribution, might be a good start. Evaluating your model's performance based on precision and recall might be a good option, too. I might add, however, that just 3 samples in the negative class are probably too few to make a reliable assessment of your model's performance anyway.
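As a hedged illustration with scikit-learn, reusing the variable names from the question:

```python
from sklearn.metrics import (balanced_accuracy_score, classification_report,
                             confusion_matrix)

print(confusion_matrix(Y_val, Y_pred))
print(classification_report(Y_val, Y_pred))   # per-class precision, recall, F1
print("balanced accuracy:", balanced_accuracy_score(Y_val, Y_pred))
```

With only 3 negative samples, even these numbers will be very noisy, so ideally you would also revisit how the split is made (e.g. stratify on the label) or collect more data for the minority class.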
Why is my test data accuracy higher than my training data?
I assume you're using structured data (numerical, categorical, nominal, ordinal..): - It's probably due to class imbalance. - If you use Scikit-Learn, you can add class_weight = "balanced" which will automatically weigh classes inversely proportional to their frequency. - Testing this should confirm if it's a class imbalance problem. PS: Francois Chollet (create of Keras) states that traditional algorithms are superior to Deep Learning for structured data. Personally, with structured data, I've never been able to match the performance of XGBoost with Deep Learning. [](https://i.stack.imgur.com/SlH7U.jpg) [](https://i.stack.imgur.com/vuogA.png)
121226
1
121242
null
0
23
I'm towards the completion of my first data science project that will go into my GitHub portfolio. I'll be happy for some clarification regarding the machine learning models section: I got a little confused with the steps: evaluation model, baseline model, cross-validation, fit-predict, when to use (X, y), and when to split the data with train_test_split and use (X_train, y_train). Dataset from Kaggle - Stroke Prediction: [https://www.kaggle.com/datasets/fedesoriano/stroke-prediction-dataset?datasetId=1120859&sortBy=voteCount&searchQuery=models](https://www.kaggle.com/datasets/fedesoriano/stroke-prediction-dataset?datasetId=1120859&sortBy=voteCount&searchQuery=models) The dataset contains 5110 observations with 10 attributes and a target variable: 'stroke'. The dataset is unbalanced, with 5% positive for stroke. I tried to follow different projects, however, because each one has its own way, I got lost with what is the correct way and what is optional. This is what I have so far: Baseline model: ``` def load_data (): df = pd.read_csv('healthcare-dataset-stroke-data.csv') df=df.drop('id', axis=1) categorical = [ 'hypertension', 'heart_disease', 'ever_married','work_type', 'Residence_type', 'smoking_status'] numerical = ['avg_glucose_level', 'bmi','age'] y= df['stroke'] X = df.drop('stroke', axis=1) return X,y,categorical, numerical def baseline_model(X, y, model): transformer = ColumnTransformer(transformers=[('imp',SimpleImputer(strategy='median'),numerical),('o',OneHotEncoder(),categorical)]) pipeline = Pipeline(steps=[('t', transformer),('p',PowerTransformer(method='yeo-johnson')),('m', model)]) cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) scores = cross_val_score(pipeline, X, y, scoring='roc_auc', cv=cv, n_jobs=-1) return scores X,y,categorical, numerical= load_data() model = DummyClassifier(strategy='constant', constant=1) scores = baseline_model(X, y, model) print('Mean roc_auc: %.3f (%.3f)' % (np.mean(scores), np.std(scores))) ``` Output: ``` Mean roc_auc: 0.500 (0.000) ``` Evaluation model: ``` def evaluate_model(X, y, model): cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42) scores = cross_val_score(model, X, y, scoring='roc_auc', cv=cv, n_jobs=-1) return scores ``` The models are: ``` def get_models(): models, names = list(), list() models.append(DecisionTreeClassifier(random_state=42)) names.append('DT') models.append(RandomForestClassifier(random_state=42)) names.append('RF') models.append(XGBClassifier(random_state=42, eval_metric='error')) names.append('XGB') models.append(LogisticRegression(solver='liblinear')) names.append('LR') models.append(LinearDiscriminantAnalysis()) names.append('LDA') models.append(SVC(gamma='scale')) names.append('SVM') return models, names ``` First model: ``` X,y,categorical, numerical= load_data() print(X.shape, y.shape) models, names = get_models() results = list() for i in range(len(models)): transformer = ColumnTransformer(transformers=[('imp',SimpleImputer(strategy='median'),numerical),('o',OneHotEncoder(),categorical)]) pipeline = Pipeline(steps=[('t', transformer),('p',PowerTransformer(method='yeo-johnson')),('m', models[i])]) scores = evaluate_model(X, y, pipeline) results.append(scores) print('>%s %.3f (%.3f)' % (names[i], np.mean(scores), np.std(scores))) ``` Output: ``` (5110, 10) (5110,) >DT 0.555 (0.034) >RF 0.781 (0.030) >XGB 0.809 (0.026) >LR 0.839 (0.029) >LDA 0.833 (0.030) >SVM 0.649 (0.064) ``` Second model with SMOTE: ``` for i in range(len(models)): transformer = 
ColumnTransformer(transformers=[('imp',SimpleImputer(strategy='median'),numerical),('o',OneHotEncoder(),categorical)]) pipeline = Pipeline(steps=[('t', transformer),('p',PowerTransformer(method='yeo-johnson', standardize=True)),('over', SMOTE()), ('m', models[i])]) scores = evaluate_model(X, y, pipeline) results.append(scores) print('>%s %.3f (%.3f)' % (names[i], np.mean(scores), np.std(scores))) ``` Output: ``` (5110, 10) (5110,) >DT 0.579 (0.036) >RF 0.765 (0.027) >XGB 0.778 (0.031) >LR 0.837 (0.029) >LDA 0.839 (0.030) >SVM 0.766 (0.040) ``` Logistic Regression Hyperparameter Tuning: ``` transformer = ColumnTransformer(transformers=[('imp',SimpleImputer(strategy='median'),numerical),('o',OneHotEncoder(),categorical)]) pipeline = Pipeline(steps=[('t', transformer),('p',PowerTransformer(method='yeo-johnson', standardize=True)),('s',SMOTE()),('m', LogisticRegression())]) param_grid = { 'm__penalty': ['l1', 'l2'], 'm__C': [0.001, 0.01, 0.1, 1, 10, 100] } cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42) grid = GridSearchCV(pipeline, param_grid, scoring='roc_auc', cv=cv, n_jobs=-1) grid.fit(X, y) print("Best hyperparameters: ", grid.best_params_) print("Best ROC AUC score: ", grid.best_score_) ``` Output: ``` Best hyperparameters: {'m__C': 0.01, 'm__penalty': 'l2'} Best ROC AUC score: 0.8371495917165929 ``` My questions are: First: Is it possible to end a project like this? OR Do I need to split the data into train/test subsets and make a prediction on unseen data after training with the best parameters? (See below) Second: When I use fit/predict: ``` X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.8, random_state=42) logreg_pipeline = Pipeline(steps=[('t', transformer),('p',PowerTransformer(method='yeo-johnson', standardize=True)),('over', SMOTE()), ('m', LogisticRegression(C=0.01,penalty='l2',random_state=42))]) logreg_pipeline.fit(X_train,y_train) logreg_tuned_pred = logreg_pipeline.predict(X_test) print(classification_report(y_test,logreg_tuned_pred)) print('Accuracy Score: ',accuracy_score(y_test,logreg_tuned_pred)) print('ROC AUC Score: ',roc_auc_score(y_test,logreg_tuned_pred)) ``` Output: ``` precision recall f1-score support 0 0.98 0.74 0.85 960 1 0.17 0.82 0.28 62 accuracy 0.75 1022 macro avg 0.58 0.78 0.57 1022 weighted avg 0.94 0.75 0.81 1022 Accuracy Score: 0.7475538160469667 ROC AUC Score: 0.7826444892473119 ``` Is this right and a necessary step? What is the right way to read this result? Do I compare it to the roc_auc score from the cross-validation/baseline model that was executed above? I'd be happy to clarify any misunderstanding so that this whole issue will finally be clear to me. Thank you for your time and feedback :)
Flow of machine learning model including code
CC BY-SA 4.0
null
2023-04-29T11:25:15.277
2023-04-30T09:05:21.467
null
null
138856
[ "machine-learning-model", "data-science-model", "cross-validation", "hyperparameter-tuning" ]
The purpose of a machine learning model is to make predictions on real-world data that isn’t known at model training time. As such, it’s best practice to always do a train-test split at the very beginning of any project, and only use the training data for training the model. The test data should not be used at all until your model is fully trained. To add to this, when tuning the model’s hyperparameters there is an additional subset of the training data used for validation, which is not used for training but for evaluating performance during training. You create train-test-splits of your input data, run through all of your models, and use your aggregate cross-validation score to choose one or two models to concentrate on improving. Based on your results, it looks like logistic regression is getting the highest score, and is probably a good fit for this type of problem – predicting whether an instance of the data is a member of the target or not (“stroke” or “not stroke”). Once this is done, you can tune your model’s hyperparameters (using GridSearch like you’re doing for example) to determine the best parameters for things like regularization (the “C” parameter). Then, and only then, when you have selected your model, tuned the hyperparameters, and trained on your training data only, then you evaluate performance on your test data. For the evaluation, it’s good to understand the performance of your model and what that represents, that’s what your metrics at the end are for. Precision is percentage of true positives over true positives and false positives, and recall is true positives over true positives plus false negatives. F1 score is the harmonic mean of these two values, ROC is the performance of the model at different classification thresholds. If the purpose of the model is to predict strokes, do you want a higher precision which would mean you detect more potential strokes at the risk of higher false positives? Or a higher recall which would mean all the instances classified as high risk of stroke are more likely to be high risk of stroke but at the cost of potentially missing some? Hth,
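A compact sketch of that order of operations (the pipeline, param_grid and cv objects are assumed to be the ones already defined in the question; the split ratio is an arbitrary choice):

```python
from sklearn.model_selection import train_test_split, GridSearchCV

# 1) hold out the test set first and do not touch it during model selection
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# 2) model selection and hyperparameter tuning with CV on the training data only
grid = GridSearchCV(pipeline, param_grid, scoring="roc_auc", cv=cv, n_jobs=-1)
grid.fit(X_train, y_train)

# 3) final, one-time evaluation on the untouched test set
print("test ROC AUC:", grid.score(X_test, y_test))
```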
General Machine Learning Workflow Question
I agree with most of the answer. However, I think you are missing some points including the cross-validation step. I try below to provide an overview of a common machine learning project. I assume a common project is a supervised machine learning problem (like iris dataset). 1. You defining the 'scope' or aim of the project : - You have to define the purpose of the learning. - When working with business in a company, it is a good idea to correctly express the need, the value of the project and its goal. - You have to define the evaluation metrics (accuracy, recall, F1 score, AUC...). You can also define a minimum result you want to reach (say 80% accuracy for example). - You can also ask yourself about the level of interpretability you need (do you care about model explanation? If no, maybe you could try more blackbox algorithms such as boosting, neural networks...). 2. Explore your data : - Using statistics, visualization and intuition, try to learn your dataset and understand your features and labels. - You can also search for missing data and outliers. Correct these observations refers as data cleaning process. - Understanding your input variables will greatly help you to create and select relevant features. 3. Generating features/attributes from your own code/algorithm : - This phase refers as features engineering. It is about creating features relevant to the learning problem. - In this phase, you can clean your missing data and outliers in order to help the learning. - You can derive new features from input variables relevant to your learning problem (handle categorical variables, rescale your features, apply transformations on input variables). 4. cross-validation : - Cross-validation refers to your algorithm evaluation. In supervised machine learning, it is common, at least, to split dataset into 3 datasets (train, validation and test). - Train dataset (about 60% of data) aims to train the algorithm. - Validation dataset (20% of data) helps to find the best hyperparameters of your model (max depth for a tree, regularization for a linear/logistic regression...). - Finally, test set (20% of data) gives you the true result you get on unseen data. It is the final evaluation. 5. Machine learning, feed the matrix into an algorithm : - In this part, you train machine learning algorithms with regards to the cross-validation process (part 4). - You can test different models. Some yield different performance results. Interpretability is not the same neither. - To help the learning, you can diagnose your algorithm performs on both train and validation sets. This diagnostic is also called learning curves. It can tell you how to improve your learning. The purpose of learning curve is to help handle the underfitting/overfitting tradeoff. Underfitting is when you have a large bias error meaning your algorithm is not complex enough while overfitting means your algorithm is too complex and learns perfectly but is not able anymore to generalize learning on new unseen observations. - You can also look at residuals (errors between predictions and real values) to improve your algorithm. - Make features selection may also improve your algorithm learning. 6. Restitution - Interpret the model and the performance you get. - Create restitutions to business? Run into production? Improve your machine learning model and performance is mostly about improving the above introduced points. By making new exploration, create new features, try a more powerful algorithm and so on, you can reach best results. 
Machine learning is a whole pipeline you have to optimize. I also think machine learning projects managements are really suitable with agile approaches.
121261
1
121263
null
0
31
First, my 3 separate scenarios and my input image [](https://i.stack.imgur.com/PSMFV.jpg) Scenario 1: Copying the input image onto a new variable with " = " > a = cv2.imread("/content/consec2.jpg") b = a b [b < 200] = 0 #Some random change to an image plt.imshow(a, cmap='gray') Image Displayed: ("changed" image) [](https://i.stack.imgur.com/kkcId.png) Scenario 2: Copying the input image onto a new variable with the .copy() method > a = cv2.imread("/content/consec2.jpg") b = a.copy() b [b < 100] = 0 #Same random change to the image as in scenario 1 plt.imshow(a, cmap='gray') Image Displayed: The original input image Scenario 3: Normal calculation (Disregard the image here) ``` a = 10 b = a print (a) ``` Result: the printed value comes out to 10 (This scenario is just for extra reference). QUESTION: Why are scenario 1 and scenario 2 giving different results? Shouldn't " = " and .copy() behave the same way in this case?
Difference in .copy() vs = for copy image into new variable
CC BY-SA 4.0
null
2023-05-01T13:22:26.100
2023-05-01T14:28:30.430
null
null
103857
[ "python", "opencv" ]
In scenario 1, the underlying memory is used for both Mat a and Mat b, so when you change b, you end up changing a as well, i.e. it's just a reference to the underlying Mat that is copied. From the opencv docs: [https://docs.opencv.org/4.x/d6/d6d/tutorial_mat_the_basic_image_container.html](https://docs.opencv.org/4.x/d6/d6d/tutorial_mat_the_basic_image_container.html) "All the above objects, in the end, point to the same single data matrix and making a modification using any of them will affect all the other ones as well." In scenario 2, you have created a new Mat through the call to a.copy() and initialized the underlying memory, so changes to b won't impact a. This is the same for all objects in python btw, it has nothing to do specifically with opencv. [https://docs.python.org/3/library/copy.html](https://docs.python.org/3/library/copy.html) hth
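A quick hedged way to see this for yourself (cv2.imread returns a NumPy array, so a small synthetic array is enough to demonstrate the behaviour):

```python
import numpy as np

a = np.arange(12).reshape(3, 4)
b = a          # just another name for the same underlying buffer
c = a.copy()   # owns its own memory

print(np.shares_memory(a, b))   # True
print(np.shares_memory(a, c))   # False

b[0, 0] = 99
print(a[0, 0])   # 99 - modifying b also modified a
c[0, 0] = -1
print(a[0, 0])   # still 99 - c is independent
```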
What is the difference between using numpy array images and using images files in deep learning?
In order to pass an image as an input to a model first need to convert it to a numpy array. Each image actually is represented as an array of values when you load it into python. Even if you don't do it explicitly (i.e. through keras' `ImageDataGenerator`), it is done behind the scenes. If your question is: Is it better to use generators than loading the images in a large numpy array? The answer is: it depends. Is the dataset small enough to fit in your memory? If not, you are forced to use a generator that loads the images in batches and passes each batch to the model. If yes, you either can use a generator to save memory for other things (e.g. the model) or you can load the images into a numpy array so that you can save on computation time (i.e. the overhead of loading images again and again).
121290
1
121333
null
0
52
I am trying to solve binary classification problem using deep neural networks. I want to compare different approaches (model architectures) and I have no hyperparameters which I want to tune. So my question is can I simply use K-fold cross validation here without splitting data to train and test in advance? I mean, I have a dataset and I don't split it to train and test, just take it as it is, do 10-fold splits, for each validation split I compute metrics (let's say accuracy). Then after models have been trained, I aggregate metrics over all splits and compare them. Is this approach valid?
How to properly do a k-fold cross validation?
CC BY-SA 4.0
null
2023-05-02T17:16:23.213
2023-05-11T11:47:23.163
null
null
149508
[ "machine-learning", "deep-learning", "classification", "cross-validation", "binary-classification" ]
This is a reasonable approach, it's basically the traditional use of cross-validation in order to better leverage the entire dataset for both training and testing rather than relying on a single train-test split. The distribution of the performance metrics across the test folds is useful itself, but is often summarized as the mean value. You may be best off explicitly stratifying the folds in order to make sure the models are comparable and learning from similar populations.
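A minimal sketch of that procedure (build_model is a placeholder for whichever architecture you are comparing; an sklearn-style fit/score API is assumed, with Keras you would use fit/evaluate instead):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)
scores = []
for train_idx, test_idx in skf.split(X, y):
    model = build_model()                        # fresh model per fold
    model.fit(X[train_idx], y[train_idx])        # optionally with early stopping on a val split
    scores.append(model.score(X[test_idx], y[test_idx]))

print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```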
k fold cross validation
Try to use the [split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.StratifiedKFold.html#sklearn.model_selection.StratifiedKFold.split) method as `enumerate` argument, instead of `kf` (e.g.:`for i, (train_index, test_index) in enumerate(kf.split(X)):` Hope it helps!
121355
1
121356
null
0
13
I want to cluster my data and show which features were used to define the clusters, in order to show the structure in my data. To explain the use case: imagine I have data from many products and I want to show the variation and structure within my data. As input features I have a BERT embedding (created from the product description), plus other categorical and numerical data, such as the price, production country, ... So far I have had difficulties finding a suitable method, as most methods cannot both cluster the data (unsupervised) and explain which features contributed to each cluster. First, I was thinking of recreating the embedding with all features, but this wouldn't help with the explanatory part. So do you have any advice on how to approach this problem?
Best practice XAI: understand features which build up clusters and explain underlying structure
CC BY-SA 4.0
null
2023-05-05T15:47:08.427
2023-05-05T16:01:05.780
null
null
149333
[ "python", "descriptive-statistics", "explainable-ai" ]
I think you can approach your problem by dividing it into two stages: - Clustering: use whatever method you deem appropriate for your data, e.g. k-means, k-medoids, HDBSCAN, etc. - Explainable classification: train an explainable multiclass classifier on your data, using the clusters as labels. Alternatively, train one explainable classifier for each of the clusters (i.e. positive class is belonging to the cluster, negative class is not belonging), for instance, one logistic regression classifier per cluster. Then, interpret the predictor's influence in each classifier.
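A hedged sketch of the two-stage idea (X is assumed to be the combined, already-encoded feature matrix of embedding plus tabular columns, feature_names its column names, and the number of clusters and tree depth are arbitrary choices):

```python
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

# Stage 1: cluster the products
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Stage 2: explain the clusters with an interpretable multiclass classifier
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, labels)
print(export_text(tree, feature_names=list(feature_names)))   # human-readable rules
print(dict(zip(feature_names, tree.feature_importances_)))    # which features define the clusters
```

Note that individual BERT embedding dimensions are not interpretable by themselves, so for the explanation stage it can help to summarize the embedding (e.g. by a topic label) and let the tabular features carry most of the explanation.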
Explaination or Description of clusters after clustering
It depends on the clustering technique you use. Since you tagged this post with `k-means` I will assume this is what you are using. Cluster centers should already be somewhat informative for laymans, but since you should be/are scaling this can lose some of it's interpretation. What you could do is assign class labels to each sample based on in what cluster they ended up in. Then you could fit a multi-class decision tree to your data and use the decision rules for interpretation, like 60% of cluster 1 has $x_1 < 0.9$.
121372
1
121409
null
0
37
I have a dataset with quotes from an insurance company. I am trying to create a model to predict how much the company should charge the customer according to the different variables. Two of the variables are related to a second driver. One of them is `driver2_licence_type` and the other one is `driver2_licence_years`. I am interested in knowing how to deal with missing values in `driver2_licence_years` in order to perform either a multilinear regression or a decision tree/random forest regression. There are two main cases. ## Case 1 When `driver2_licence_type` is not `NaN`, I thought it was safe to fill `driver2_licence_years` with the average number of years, because we know there is a second driver but we just don't know how much experience the driver has. However, the price is not likely to follow a linear relation with experience, since very old drivers may be charged more due to loss of abilities. However, I don't know the precise effect. Should I instead do a prior analysis on how years of driving experience explain insurance fees and choose a value that gives the average price? Is it better to try to find what sort of functional relation can be drawn between years and price and then transform the variables accordingly? ## Case 2 When both variables are `NaN`, we're assuming that there is no second driver. I originally thought of filling `driver2_licence_years` with zeros, but I am not sure if the effect of having an inexperienced driver should be the same as not having a second driver (one could say it is more dangerous to have someone with little experience than to have no one). Here I am not sure what to do. From the point of view of a decision tree, it may be sensible to add another variable that specifies whether there is a second driver or not, and use this to decide whether to look at the years or not. Or maybe I should simply have two different models depending on whether there is a second driver or not. What would you suggest in this case? [This answer](https://datascience.stackexchange.com/questions/110189/how-to-deal-with-missing-values-that-are-supposed-to-be-missing) provides some ideas and I have also read about giving it the value -1, but I am still unsure.
How to treat missing values depending on what missing means
CC BY-SA 4.0
null
2023-05-06T11:46:08.490
2023-05-08T20:22:28.853
null
null
144419
[ "regression", "predictive-modeling", "missing-data" ]
My suggestion here would be to consider different models for single drivers and additional drivers. Regarding missing values: (1) This is the case for additional drivers. When driver2_licence_type is not NaN but driver2_licence_years has missing values, I would fill driver2_licence_years with the average of driver2_licence_years per driver2_licence_type, computed from the rows that do not have missing values. To do this, I would group by driver2_licence_type, average driver2_licence_years, and use this lookup to fill the missing driver2_licence_years. If there are still missing values, I would then fall back to an overall average. (2) This is the case for single drivers. When both driver2_licence_type and driver2_licence_years have missing values, I would consider dropping these two columns and not using them at all. If you do not wish to create two different models, I would create an additional variable, as you mentioned, to flag which rows are single-driver and which are additional-driver (say Single/Additional). The missing values for case (1) would be handled as above, and for case (2) the missing driver2_licence_type would be set to None/Other (assuming you will be using an encoding method to convert the variable from categorical to ordinal for your modelling), while the missing driver2_licence_years can be 0. Your model will then distinguish between no second driver and a novice driver with 0 years of experience because of the encoded value of driver2_licence_type for that row. This is an ordinal example that you can use for converting the categorical column to ordinal - [](https://i.stack.imgur.com/t44Tp.png) You can compute additional variables as shown below and then consider just numerical variables for your model - [](https://i.stack.imgur.com/IKHVC.png)
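To make the group-wise fill concrete, here is a hedged pandas sketch (df is assumed to be the quotes DataFrame, and the column names follow the question):

```python
import pandas as pd

has_second = df["driver2_licence_type"].notna()

# Case 1: second driver present but years missing ->
# fill with the mean years per licence type, falling back to the overall mean
type_mean = (df.loc[has_second]
               .groupby("driver2_licence_type")["driver2_licence_years"]
               .transform("mean"))
df.loc[has_second, "driver2_licence_years"] = (
    df.loc[has_second, "driver2_licence_years"]
      .fillna(type_mean)
      .fillna(df.loc[has_second, "driver2_licence_years"].mean()))

# Case 2: no second driver -> explicit flag plus neutral fill values
df["has_second_driver"] = has_second.astype(int)
df["driver2_licence_type"] = df["driver2_licence_type"].fillna("none")
df["driver2_licence_years"] = df["driver2_licence_years"].fillna(0)
```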
Missing Values in Data
Various methods are available for filling missing values in data: - Ignoring the tuple is the simplest but least effective method. - Fill in the missing value manually. - Use a global constant to fill in the missing value. - Use the attribute mean value to fill in the missing value. - Use the attribute mean of all samples belonging to the same class as the given tuple. - Use the most probable value to fill in the missing value (this may be determined with regression, an inference tool, or decision tree induction). Reference: Data Mining – Concepts and Techniques - [JIAWEI HAN](http://hanj.cs.illinois.edu/bk1/) & MICHELINE KAMBER, ELSEVIER, 2nd Edition.
121395
1
121524
null
0
30
I'm trying to understand the concept of receptive field better in the context of a practical CNN. All of the online info I can find on receptive field seems to be in a non-practical context so I'll ask my questions using this CNN as an example, which gets ~99% accuracy on MNIST: ``` class MnistNet(nn.Module): def __init__(self): super().__init__() self.conv1 = nn.Conv2d(1, 6, (5, 5)) self.conv2 = nn.Conv2d(6, 16, (5, 5)) self.fc1 = nn.Linear(16 * 4 * 4, 120) self.fc2 = nn.Linear(120, 84) self.fc3 = nn.Linear(84, 10) # end function def forward(self, x): x = F.max_pool2d(F.relu(self.conv1(x)), kernel_size=(2, 2)) x = F.max_pool2d(F.relu(self.conv2(x)), kernel_size=(2, 2)) x = torch.flatten(x, start_dim=1) # flatten, except for the batch dimension x = F.relu(self.fc1(x)) x = F.relu(self.fc2(x)) classificationLogits = self.fc3(x) return classificationLogits # end function # end class ``` Here is how the layers break out: [](https://i.stack.imgur.com/bgTRL.jpg) Here is a drawing I did trying to understand how receptive field applies to this example: [](https://i.stack.imgur.com/LxY2t.jpg) Questions: - Is my drawing correct? If not, where am I going wrong? - When talking about receptive field, is it just understood that we talk about receptive field in the context of a single pixel in the last layer before flattening (so in this case that would be the 2nd max pool layer)? - Ref #2 and my drawing above, in this case would the following statement be correct: > The receptive field of this net is 16 x 16 If this statement is not correct, how could it be changed to be correct? - From the various sources I found on receptive field, it seems to be the general recommendation to choose net parameters so the receptive field covers the entire input image. Ref my drawing above the receptive field here only covers about 2/3 of the input image size, yet this net gets ~99% accuracy on MNIST. Does an undersized receptive field only work here because MNIST is a relatively simple task?
Questions about receptive field in the context of a practical CNN
CC BY-SA 4.0
null
2023-05-08T09:52:44.600
2023-05-14T10:47:48.030
null
null
50921
[ "neural-network", "convolutional-neural-network" ]
Your drawing looks correct (assuming a stride of 1 and no dilation). We can talk about the receptive field for any layer in the CNN - so the receptive field of your 1st conv2d is 5 x 5, and of your first pooling layer is 6 x 6. But a reference to the receptive field of the network is (as you said) the receptive field of the last layer before flattening.
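As a sanity check, here is a small self-contained Python sketch (my own illustration, not part of the original answer) of the standard receptive-field recurrence r_out = r_in + (k - 1) * j, where j is the cumulative stride ("jump"); applied to the conv/pool stack in the question it reproduces the 16 x 16 figure:

```python
# Each layer is (name, kernel_size, stride); dilation assumed to be 1.
layers = [
    ("conv1", 5, 1),
    ("pool1", 2, 2),
    ("conv2", 5, 1),
    ("pool2", 2, 2),
]

receptive_field, jump = 1, 1  # a single input pixel sees itself
for name, k, s in layers:
    receptive_field += (k - 1) * jump  # growth contributed by this layer
    jump *= s                          # cumulative stride so far
    print(f"{name}: receptive field = {receptive_field} x {receptive_field}")

# Prints: conv1 5x5, pool1 6x6, conv2 14x14, pool2 16x16
```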
Several fundamental questions about CNN
- You more than likely do not have enough training data for a neural network. - Your class imbalance problem is probably an issue. Instead of using accuracy as a measurement trying some type of F-score. - Batch normalization should be applied between the convolution layer and the activation function. - If you think you have a vanishing or dying activation problem, plot the gradients or the sum of gradients. It'll give you an idea if you're right or not.
121396
1
121399
null
0
74
I am trying to make a caption generator model. (Having problem with shapes) I am getting error as ``` Input to reshape is a tensor with 4096 values, but the requested shape requires a multiple of 6400 ``` Help me out here . here is the model ``` UNITS = 128 IMG_SIZE = 240 BATCH_SIZE = 32 IMG_SHAPE = (IMG_SIZE, IMG_SIZE, 3) max_len = 50 VOCAB_SIZE = tokenizer.vocabulary_size() def get_model(): embdding_layer = Embedding(input_dim = VOCAB_SIZE, output_dim = UNITS, input_length = max_len, mask_zero = True) rnn = LSTM(UNITS, return_sequences=True, return_state=True) # image inputs image_input = Input(shape=IMG_SHAPE) print(image_input.shape) x = resnet_preprocessing(image_input) print('preprocess: ', x.shape) x = resnet(x) print('resnet: ',x.shape) x = Flatten()(x) print('Flatten:',x.shape) # x = layers.MaxPooling2D()(x) # print('pooling:',x.shape) x = Dense(UNITS)(x) print('dense: ',x.shape) # x = x = tf.reshape(x, (-1, 50, 128)) print('reshape: ',x.shape) print('') # text inputs text_input = Input(shape=(max_len,)) print('text_input: ',text_input.shape) i = embdding_layer(text_input) print('embedding: ',i.shape) i, j, k = rnn(i) i, _, _ = rnn(i, initial_state=[j,k]) print('i:', i.shape) # attention between x and i l = Attention()([x, i]) ll = Attention()([i, x]) print('attentions: ',l.shape, ll.shape) # concatnate x and i m = Concatenate()([l, ll ]) print('concat attention: ',m.shape) m = Dense(VOCAB_SIZE)(m) print('dense out: ',m.shape) return keras.Model(inputs = [image_input, text_input], outputs = m) ``` output shapes ``` (None, 240, 240, 3) preprocess: (None, 240, 240, 3) resnet: (None, 8, 8, 2048) Flatten: (None, 131072) dense: (None, 128) reshape: (None, 50, 128) text_input: (None, 50) embedding: (None, 50, 128) i: (None, 50, 128) attentions: (None, 50, 128) (None, 50, 128) concat attention: (None, 50, 256) dense out: (None, 50, 19770) ```
Input to reshape is a tensor with 4096 values, but the requested shape requires a multiple of 6400. (Having problem with shapes)
CC BY-SA 4.0
null
2023-05-08T10:24:41.467
2023-05-08T14:51:45.330
2023-05-08T14:31:13.013
136949
136949
[ "tensorflow", "data-science-model", "reshape" ]
The output of your dense layer is (None, 128), which for batch size of 32 is going to be 4096. The call to reshape says you want a tensor of (None, 50, 128) which is 6400, hence your error. Resnet preprocessing output according to [https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/preprocess_input](https://www.tensorflow.org/api_docs/python/tf/keras/applications/resnet50/preprocess_input) is 4d tensor (batch, three color channels), I'm guessing you want to do some convolution layers and pooling to downsample to (None, 50,128)? Also, having a max pooling layer (currently commented out) after a flatten is going to give you a single value, probably not what you want. I might suggest checking out the image classification example here: [https://www.tensorflow.org/tutorials/images/classification](https://www.tensorflow.org/tutorials/images/classification). hth
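One possible way to get the (None, 50, 128) tensor the attention layers expect, sketched here as my own illustration rather than the asker's or answerer's exact code, is to project the flattened ResNet features with a Dense layer of size 50 * 128 and then use Keras' Reshape layer, which leaves the batch dimension alone (the constants mirror the question's UNITS and max_len):

```python
import tensorflow as tf
from tensorflow.keras import layers

UNITS, MAX_LEN = 128, 50                              # mirror the question's constants
flat_features = layers.Input(shape=(8 * 8 * 2048,))   # what Flatten() produces from the ResNet output

x = layers.Dense(MAX_LEN * UNITS)(flat_features)      # -> (None, 6400)
x = layers.Reshape((MAX_LEN, UNITS))(x)               # -> (None, 50, 128), batch dim untouched

demo = tf.keras.Model(flat_features, x)
demo.summary()
```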
ValueError: cannot reshape array of size 136415664 into shape (2734,132,126,1)
You need $2734 \times 132\times 126\times 1=45,471,888$ values in order to reshape into that tensor. Since you have $136,415,664$ values, the reshaping is impossible. If your fourth dimension is $4$, then the reshape will be possible.
121406
1
121417
null
0
29
I am trying to predict the stock price of a company; the data is non-stationary. Steps I followed - - Analyze the raw data - Determine whether the raw time series data is stationary or not using ADF and KPSS - Apply first differencing and seasonal differencing to make the data stationary - Determine the MA and AR lags using the stationary data by plotting ACF, PACF plots My question is: should I pass the raw data (non-stationary, from Step 1) to a time series model like SARIMA, ARIMA or SARIMAX and use the stationary data (Step 3) only to determine the MA and AR lag coefficients for the model, OR should I pass the stationary data (Step 3) to the time series model (SARIMA, ARIMA, SARIMAX, etc.) and use the MA and AR lag coefficients for the model, and then, to recover the predicted original time series, undo all the transformations that I did in Step 3 to make the time series data stationary? Thank you for your help
How to pass time series data to SARIMA, ARIMA, SARIMAX, etc
CC BY-SA 4.0
null
2023-05-08T16:25:25.483
2023-05-09T12:13:26.837
null
null
144743
[ "machine-learning", "time-series", "arima" ]
In general, you're going to use the data as is to fit the model, but use the data analysis to choose/validate/understand your parameters (p,d,q,P,D,Q, etc). For the most part, it's advisable to do a grid search on your parameters anyway to get the best fitting model, but having some intuitive understanding of where to set the grid search limits will always help.
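To make this concrete, here is a minimal hedged sketch with statsmodels (my own illustration; the file name and the order values are placeholders that you would set or grid-search based on your ACF/PACF analysis). Passing the raw series with d=1 and D=1 lets the model do the differencing internally, so the forecasts come back on the original scale:

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# y: the raw (non-stationary) price series, e.g. a pandas Series indexed by date
y = pd.read_csv("prices.csv", index_col=0, parse_dates=True).squeeze()

# order=(p, d, q), seasonal_order=(P, D, Q, s) -- placeholders to tune or grid-search
model = SARIMAX(y, order=(1, 1, 1), seasonal_order=(1, 1, 1, 12))
result = model.fit(disp=False)

print(result.summary())
forecast = result.forecast(steps=10)   # already on the original (undifferenced) scale
```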
Choosing the periodicity in a SARIMA model
The choice of periodicity in a ARIMA model is depending on the seasonality in your time-series data. You should plot your time-series and see if you have a "curve"-pattern that occurs regulary. [](https://i.stack.imgur.com/EfD7l.png) Based on the "distance" (hours in your data set) you can make a guess what the right choice of periodicity is.
121412
1
121414
null
0
35
For LLM decoder, how exactly is the K, Q, V for each decoding step? Say my input prompt is "today is a" (good day). At t= 0 (generation step 0): K, Q, V are the projections of the sequence ("today is a") Then say the next token generated is "good" At t= 1(generation step 1): Which one is true: - K, Q, V are the projections of the sequence ("today is a good") OR - K, Q, are the projections of the sequence ("today is a") , V is the projection of sequence ("good")?
Easy question on autoregressive LLM
CC BY-SA 4.0
null
2023-05-09T00:06:56.450
2023-05-09T06:21:22.060
null
null
141647
[ "nlp", "transformer", "language-model" ]
At t=1: K, Q, V are the projections of the sequence "today is a good". However, given that the computations of the first tokens were already done in the previous step, usually, there is some sort of caching mechanism to save their repeated computation in the following steps.
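A rough, framework-agnostic Python sketch of that caching idea (my own illustration of the general mechanism, not the internals of any particular library): at each step only the newest token's K and V are computed and appended to a cache, while a query is only needed for the newest position.

```python
import numpy as np

d = 8
W_q, W_k, W_v = (np.random.randn(d, d) for _ in range(3))
k_cache, v_cache = [], []   # one entry per already-processed position

def decode_step(x_new):
    """x_new: embedding of the newest token, shape (d,)."""
    q = x_new @ W_q                     # query only for the new position
    k_cache.append(x_new @ W_k)         # K/V of earlier tokens are reused, not recomputed
    v_cache.append(x_new @ W_v)
    K = np.stack(k_cache)               # covers the whole prefix, e.g. "today is a good"
    V = np.stack(v_cache)
    scores = K @ q / np.sqrt(d)         # attention over all positions so far
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ V                  # context vector used to predict the next token

for token_embedding in np.random.randn(5, d):   # toy embeddings for a 5-token prompt
    context = decode_step(token_embedding)
```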
Autoregressive (AR) models constants - Time Series Analysis
You have a slight typo in the notation of an AR(1) model. The correct form is $y_t = \beta_0 + \beta_1 \times y_{t-1}+\epsilon_t$ or $y(t) = \beta_0 + \beta_1 \times y(t-1) + \epsilon(t)$, where $y(t)$ and $\epsilon(t)$ are random variables. If $y(t)$ is standard Gaussian you can estimate $\beta_{0,1}$ with a maximum likelihood estimator (MLE). If not, you will need a more complex method. You can read more about it in this [article on estimating an ARMA process](http://www-stat.wharton.upenn.edu/~stine/stat910/lectures/12_est_arma.pdf).
121413
1
121416
null
1
46
I am trying to analyze a weighted network, and I am focusing on identifying the bottleneck nodes and edges coefficients. I have never done that before on Python and I have the following code: ``` import networkx as nx # create a weighted graph G = nx.Graph() G.add_weighted_edges_from([(1, 2, 10), (1, 3, 5), (2, 3, 4), (2, 4, 8), (3, 4, 3)]) # compute the edge bottlenecks edge_bottlenecks = {} for u, v, data in G.edges(data=True): # Compute the minimum weight cut using the Karger algorithm edge_cut_value, edge_partition = nx.minimum_cut(G, u, v, flow_func=nx.algorithms.flow.edmonds_karp) # Compute the bottleneck value for the edge edge_bottleneck = edge_cut_value / data['weight'] edge_bottlenecks[(u, v)] = edge_bottleneck # compute the node bottlenecks node_bottlenecks = {} for node in G.nodes(): # Compute the minimum weight cut using the Karger algorithm node_cut_value, node_partition = nx.minimum_cut(G, node, flow_func=nx.algorithms.flow.edmonds_karp) # Compute the bottleneck value for the node node_bottleneck = node_cut_value / sum([G.edges[u, v]['weight'] for u, v in G.edges(node)]) node_bottlenecks[node] = node_bottleneck print("Edge bottlenecks:", edge_bottlenecks) print("Node bottlenecks:", node_bottlenecks) ``` I get an error message saying "NetworkXUnbounded: Infinite capacity path, flow unbounded above." I can't understand why I am getting such an error message. How can I solve the issue? Thank you.
How to compute edge and node bottleneck coefficients in a weighted directed graph using networkx?
CC BY-SA 4.0
null
2023-05-09T05:30:27.613
2023-05-09T08:08:22.873
null
null
134895
[ "python", "graphs", "networkx" ]
My mistake was how I formulated the graph. First, it is a directed graph, which I should have specified, and I also should have entered the correct syntax when adding edges. I referred to the networkx online resources at [https://pydocs.github.io/p/networkx/2.8.2/api/networkx.algorithms.flow.maxflow.maximum_flow](https://pydocs.github.io/p/networkx/2.8.2/api/networkx.algorithms.flow.maxflow.maximum_flow). Using the example on the mentioned webpage, the correct code to get the bottleneck coefficients for the nodes and edges is: ``` import networkx as nx # Create directed graph G = nx.DiGraph() G.add_edge("x", "a", capacity=3.0) G.add_edge("x", "b", capacity=1.0) G.add_edge("a", "c", capacity=3.0) G.add_edge("b", "c", capacity=5.0) G.add_edge("b", "d", capacity=4.0) G.add_edge("d", "e", capacity=2.0) G.add_edge("c", "y", capacity=2.0) G.add_edge("e", "y", capacity=3.0) # Compute the node bottleneck coefficients node_bottlenecks = {} for node in G.nodes(): # Compute the minimum weight cut using maximum flow node_cut_value, node_partition = nx.minimum_cut(G, "x", "y") # Check if the node has outgoing edges with non-zero capacity outgoing_edges = G.out_edges(node, data=True) if any([edge['capacity'] > 0 for _, _, edge in outgoing_edges]): # Compute the bottleneck value for the node node_bottleneck = node_cut_value / sum([edge['capacity'] for _, _, edge in outgoing_edges]) node_bottlenecks[node] = node_bottleneck # Compute the edge bottleneck coefficients edge_bottlenecks = {} for u, v, data in G.edges(data=True): # Compute the minimum cut using maximum flow edge_cut_value, edge_partition = nx.minimum_cut(G, "x", "y") # Compute the bottleneck value for the edge edge_bottleneck = edge_cut_value / data['capacity'] edge_bottlenecks[(u, v)] = edge_bottleneck print("Node bottleneck coefficients:", node_bottlenecks) print("Edge bottleneck coefficients:", edge_bottlenecks) ``` ```
Large Graphs: NetworkX distributed alternative
Good , old and unsolved question! Distributed processing of large graphs as far as I know (speaking as a graph guy) has 2 different approaches, with the knowledge of Big Data frameworks or without it. [SNAP](http://snap.stanford.edu/) library from Jure Leskovec group at Stanford which is originally in C++ but also has a Python API (please check if you need to use C++ API or Python does the job you want to do). Using snap you can do many things on massive networks without any special knowledge of Big Data technologies. So I would say the easiest one. Using Apache Graphx is wonderful only if you have experience in Scala because there is no Python thing for that. It comes with a large stack of built in algorithms including centrality measures. So the second easiest in case you know Scala. Long time ago when I looked at GraphLab it was commercial. Now I see it goes open source so maybe you know better than me but from my out-dated knowledge I remember that it does not support a wide range of algorithms and if you need an algorithm which is not there it might get complicated to implement. On the other hand it uses Python which is cool. After all please check it again as my knowledge is for 3 years ago. If you are familiar with Big Data frameworks and working with them, [Giraph](http://giraph.apache.org/quick_start.html) and [Gradoop](https://github.com/dbs-leipzig/gradoop) are 2 great options. Both do fantastic jobs but you need to know some Big Data architecture e.g. working with a hadoop platform. ## PS 1) I have used simple NetworkX and multiprocessing to distributedly process DBLP network with 400,000 nodes and it worked well, so you need to know HOW BIG your graph is. 2) After all, I think SNAP library is a handy thing.
121433
1
121434
null
0
36
Suppose I have to predict a stock's price. I have historical data and I have arranged it into the following structure: (Xt-3, Xt-2, Xt-1) ---> (Xt = Yt). If I use an LSTM model, the order of the data points within a sample should be preserved, which means Day 1, Day 2 and Day 3 should stay in sequential order. My doubt is: I will have many rows like this. Can I shuffle those rows for training while preserving the order within each row? E.g., can I place the row for 3 days of August before the row for 3 days of July, given that each row's 3 days stay in sequential order? I am assuming I can, as the model treats each row as a separate training sample and adjusts its weights via gradient descent, so the order should not matter even if we shuffle the rows. Am I right? Second doubt: if I have trained my model up to May 8 and I need to predict tomorrow (May 11), and my LSTM window length is 3, should I first predict May 9 and May 10 and then use the May 8, May 9 and May 10 values to predict the next day, or should I use the actual values of May 9 and May 10? I read somewhere that you need to retrain to make a new forecast, but I don't think that is compulsory. If I have trained my model up to May 8 and then give it the values of May 8, May 9 and May 10 in sequential order, it should give me a forecast, right?
2 basic doubts on time series
CC BY-SA 4.0
null
2023-05-10T13:17:26.113
2023-05-10T13:44:58.907
null
null
148562
[ "machine-learning", "deep-learning", "nlp", "time-series", "lstm" ]
Regarding Doubt 1: Yes, if you shuffle the rows used for training, the order within each row is still preserved. Shuffling the rows is fine; the LSTM treats each row as a separate sample. Regarding Doubt 2: If you train a model up to May 8th, you can build it so that it outputs predictions for May 9th, May 10th, May 11th, etc. You can evaluate those predictions against the actual values for May 9th and 10th, or handle it however else you think is appropriate.
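A small numpy sketch of the windowing-and-shuffling idea from Doubt 1, plus the iterative use-your-own-predictions forecast from Doubt 2. This is my own illustration with a toy series and a placeholder in place of the trained model's predict call; the window length of 3 follows the question:

```python
import numpy as np

series = np.arange(100, dtype=float)          # toy price series; use your real data
window = 3

# Build (X_{t-3}, X_{t-2}, X_{t-1}) -> X_t samples
X = np.stack([series[i:i + window] for i in range(len(series) - window)])
y = series[window:]

# Shuffling the ROWS is fine -- the order inside each row is untouched
idx = np.random.permutation(len(X))
X_train, y_train = X[idx], y[idx]

# Doubt 2: iterative forecasting with a trained model (placeholder predict function)
def predict_next(window_values):
    return window_values[-1]                  # stand-in for model.predict(...)

history = list(series[-window:])              # e.g. May 6, 7, 8
for _ in range(3):                            # roll forward to May 9, 10, 11
    history.append(predict_next(np.array(history[-window:])))
print(history[-3:])
```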
Time series with additional information
I'm assuming the displayed time series shows number of jobs submitted per 15 minute interval. ### Categorical features Divide the time series per category. If the jobs can be divided into `type1`, `type2`, `type3` then make a time series for each type and predict each series individually. So `type1`-time series has number of `type1`-jobs per 15 minute interval. ### Continuous features For continues features e.g time-to-do-job you can divide the jobs into categories of `time00`,`time10`, `time20`, `time30` for jobs that take 0-9 minutes, 10-19 minutes, 20-29 minutes etc respectively. As before generate a time-series per division. Depending on how much data you have and how it is distributed you can make more groups or space them differently.
121439
1
121494
null
0
69
We are running A/B tests on web app customers, keyed by a customerId. Each customer will see different web-feature designs. We are trying to avoid feature flags, as they are not currently set up in our system. Initially we tried even-odd on the customerId with a 50-50% ratio to test Feature 1 (e.g. userId 4 is even, 7 is odd). However, when testing another Feature 2, using even-odd 50-50% again would make the Feature 1 groups coincide exactly with the Feature 2 groups, since both splits share the same algorithm. What is another mathematical method - a hash or some other 50-50% algorithm - that lets me differentiate the splits? We will probably have 10 features to test, so I need a way to add a parameter to the feature-flag algorithm, and I will track assignments in a document table. We are assigning groups with JavaScript/TypeScript, by the way. Note: groups should be deterministic and not random, e.g. even-odd gives a consistent result.
Different Algorithms for 50-50 A/B Testing
CC BY-SA 4.0
null
2023-05-10T21:43:52.937
2023-05-19T10:04:44.610
2023-05-11T18:30:47.340
149742
149742
[ "classification", "clustering", "statistics", "feature-selection", "data" ]
You can reformulate your previous even/odd split as bit testing of the binary representation of the customer ID: for the first feature, you took the bit at the first position (the least significant bit) and assigned the groups according to its value. You can then extend the same approach to define new groups so that you obtain splits that don't correlate with the previous splits: for the nth feature, take the bit at the nth position and assign the groups according to its value. This ensures that the groups are independent for every feature in a deterministic and reproducible way. In JavaScript it would be something like this: ``` const groupId = (customerId & (1 << featureNumber)) === 0 ? 0 : 1; ``` Where `<<` is the bit shift operator and `&` is the bitwise and operator, and `featureNumber` is the order of the specific feature you are testing (starting at zero). The result of the bitwise-and is either 0 or (1 << featureNumber). `groupId` would be either 0 or 1. This approach, of course, is only valid if the number of customers is large enough to cover the bits for all features; for 10 features (bits 0 through 9), that means at least 1024 ($=2^{10}$) customers. One minor problem with this approach would be that the partitions will probably not lead to an exact 50/50 split, because your number of customers will probably not be an exact power of 2.
Is it OK to use the testing sample to compare algorithms?
Basically, every time you use the results of a train/test split to make decisions about a model- whether that's tuning the hyperparameters of a single model, or choosing the most effective of a number of different models, you cannot infer anything about the performance of the model after making those decisions until you have "frozen" your model and evaluated it on a portion of data that has not been touched. The general concept addressing this issue is called nested cross validation. If you use a train/test split to choose the best parameters for a model, that's fine. But if you want to estimate the performance of that, you need to then evaluate on a second held out set. If you then repeat process for multiple models and choose the best performing one, again, that's fine, but by choosing the best result the value of your performance metric is inherently biased, and you need to validate the entire procedure on yet another held out set to get an unbiased estimate of how your model will perform on unseen data.
121467
1
121469
null
3
127
I want to know how variational autoencoders work. I am currently working at a company and we want to use variational autoencoders for creating synthetic data. I have some questions about this method though: is this the only way to generate synthetic or artificial data? Is there a difference between VAEs and GANs, and is one preferred over the other? I also don't have much mathematical background and am a bit wary of implementing it. Finally, I have gone through many links and videos on implementations in PyTorch and TensorFlow. Are both similar in implementation? I went through this link: [https://www.youtube.com/watch?v=9zKuYvjFFS8&ab_channel=ArxivInsights](https://www.youtube.com/watch?v=9zKuYvjFFS8&ab_channel=ArxivInsights) However, I still have not fully grasped a simple way to implement this technique. Any help with understanding it and its implementation would be greatly appreciated.
How does variational autoencoders actually work in comparison to GAN?
CC BY-SA 4.0
null
2023-05-12T06:01:05.803
2023-05-16T02:37:35.647
null
null
138954
[ "deep-learning", "autoencoder", "vae" ]
VAEs were a hot topic some years ago. They were known to generate somewhat blurry images and sometimes suffered from posterior collapse (the decoder part ignores the bottleneck). These problems improved with refinements. Basically, they are normal autoencoders (minimize the difference between the input image and output image) with an extra loss term to force the bottleneck into a normal distribution. GANs became popular a few years ago as well. They are known for being difficult to train due to their non-stationary training regime. Also, the quality of the output varies, including suffering from the problem of mode collapse (always generating the same image). They consist of two networks, a generator and a discriminator, where the generator generates images and the discriminator tells whether some image is fake (i.e. generated by the generator) or real. The generator learns to generate by training to deceive the discriminator. Nowadays the hot topic is diffusion models. They are the type of models behind the renowned image-generation products Midjourney and DALL-E. They work by adding random noise to an image up to the point where it becomes pure noise, and then learning how to reverse that noise back into an image; then, you can generate images directly from noise.
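To make the "normal autoencoder plus an extra loss term" point concrete, here is a minimal PyTorch-style sketch of the VAE objective. This is my own illustration; `encoder` and `decoder` are assumed to be whatever networks you define, with the encoder returning a mean and a log-variance for the bottleneck:

```python
import torch
import torch.nn.functional as F

def vae_loss(x, encoder, decoder):
    mu, logvar = encoder(x)                       # bottleneck distribution parameters
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)          # reparameterisation trick

    x_hat = decoder(z)
    recon = F.mse_loss(x_hat, x, reduction="sum")                  # plain autoencoder term
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())   # pushes q(z|x) towards N(0, I)
    return recon + kl
```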
Transform an Autoencoder to a Variational Autoencoder?
Yes. Two changes are required to convert an AE to VAE, which shed light on their differences too. Note that if an already-trained AE is converted to VAE, it requires re-training, because of the following changes in the structure and loss function. Network of AE can be represented as $$x \overbrace{\rightarrow .. \rightarrow y \overset{f}{\rightarrow}}^{\mbox{encoder}} z \overbrace{\rightarrow .. \rightarrow}^{\mbox{decoder}}\hat{x},$$ where - $x$ denotes the input (vector, matrix, etc.) to the network, $\hat{x}$ denotes the output (reconstruction of $x$), - $z$ denotes the latent output that is calculated from its previous layer $y$ as $z=f(y)$. - And $f$, $g$, and $h$ denote non-linear functions such as $f(y) = \mbox{sigmoid}(Wy+B)$, $\mbox{ReLU}$, $\mbox{tanh}$, etc. These two changes are: - Structure: we need to add a layer between $y$ and $z$. This new layer represents mean $\mu=g(y)$ and standard deviation $\sigma=h(y)$ of Gaussian distributions. Both $\mu$ and $\sigma$ must have the same dimension as $z$. Every dimension $d$ of these vectors corresponds to a Gaussian distribution $N(\mu_d, \sigma_d^2)$, from which $z_d$ is sampled. That is, for each input $x$ to the network, we take the corresponding $\mu$ and $\sigma$, then pick a random $\epsilon_d$ from $N(0, 1)$ for every dimension $d$, and finally compute $z=\mu+\sigma \odot \epsilon$, where $\odot$ is element-wise product. As a comparison, $z$ in AE was computed deterministically as $z=f(y)$, now it is computed probabilistically as $z=g(y)+h(y)\odot \epsilon$, i.e. $z$ would be different if $x$ is tried again. The rest of network remains unchanged. Network of VAE can be represented as $$x \overbrace{\rightarrow .. \rightarrow y \overset{g,h}{\rightarrow}(\mu, \sigma) \overset{\mu+\sigma\odot \epsilon}{\rightarrow} }^{\mbox{encoder}} z \overbrace{\rightarrow .. \rightarrow}^{\mbox{decoder}}\hat{x},$$ - Objective function: we want to enforce our assumption (prior) that the distribution of factor $z_d$ is centered around $0$ and has a constant variance (this assumption is equivalent to parameter regularization). To this end, we add a penalty per dimension $d$ that punishes any deviation of latent distribution $q(z_d|x) = N(\mu_d, \sigma_d^2)$$= N(g_d(y), h_d(y)^2)$ from unit Gaussian $p(z_d)=N(0, 1)$. In practice, KL-divergence is used for this penalty. At the end, the loss function of VAE becomes: $$L_{VAE}(x,\hat{x},\mu,\sigma) = L_{AE}(x, \hat{x}) + \overbrace{\frac{1}{2} \sum_{d=1}^{D}(\mu_d^2 + \sigma_d^2 - 2\mbox{log}\sigma_d - 1)}^{KL(q \parallel p)}$$ where $D$ is the dimension of $z$. Side notes - In practice, since $\sigma_d$ can get very close to $0$, $\mbox{log}\sigma_d$ in objective function can explode to large values, so we let the network generate $\sigma'_d = \mbox{log}\sigma_d = h_d(y)$ instead, and then use $\sigma_d = exp(h_d(y))$. This way, both $\sigma_d=exp(h_d(y))$ and $\mbox{log}\sigma_d=h_d(y)$ would be numerically stable. - The name "variational" comes from the fact that we assumed (1) each latent factor $z_d$ is independent of other factors, i.e. we ignore other $(\mu_{d'}, \sigma_{d'})_{d' \neq d}$ when we sample $z_d$, and (2) $z_d$ follows a Gaussian distribution. In other words, $q(z|x)$ is a simplified variation to the true (and probably a more complex) distribution $p(z|x)$.
121475
1
121478
null
4
93
I have been working as a data scientist for the past 2 years on problems related to binary classification, revenue prediction, etc. In those two years, I have had 2 problems that focused specifically on binary classification with imbalanced data, and the datasets were small. In my first project it was 2977 records (77:23) and in the second project 3400 records (70:30). I feel this is not extremely imbalanced, but it is still a slight imbalance. I tried all the approaches I know to do the best job - threshold moving, considering various metrics to assess the performance of the model holistically, extensive feature engineering, etc. Despite all this, I could never make the minority-class precision or recall touch even 70% on the validation data. So I am not sure whether it is impossible to achieve decent performance, whether the problem is simply not suitable for prediction with imbalanced data, or whether this reflects poor performance by a data scientist like me. The tutorials and articles I read online about imbalanced datasets also show similar stories, where the performance on the minority class is only around 50-60%. Meaning, they show results without SMOTE, resampling, etc., and after applying SMOTE, resampling, etc., the performance goes up by a few points and reaches 55-63% (just 2 to 3 points). What do big corporations and hospitals that work on fraud analytics, death likelihood, etc. do differently to deploy such models in production? Any experience here, anyone? Do they also settle for low performance but still go ahead with it, as something is better than nothing? Is it even possible to achieve 90% and above for precision, recall and F1 of the minority class (which is our class of interest)? Can any experts here share some of your views? PS - whatever model I built earned some revenue for the company, but I am not sure whether that is business demand or the model working. The company believes the model helped, but due to the poor metrics I don't believe in it, though I have been given credit.
Are imbalanced data problems solvable?
CC BY-SA 4.0
null
2023-05-12T13:06:15.860
2023-05-13T13:19:58.393
null
null
64876
[ "machine-learning", "classification", "data-mining", "predictive-modeling", "class-imbalance" ]
The issue is not with the imbalance per se. The issue is that your categories are not particularly separable on the available data (or they are but you are not modeling the correct relationship, e.g., needing a quadratic term yet lacking one). When imbalanced categories are easy to distinguish, performance is high. For instance, I see a lot more Honda cars than Ferrari cars (imbalanced classes). Nonetheless, I do not struggle to distinguish between the two, because they look so different.$^{\dagger}$ In this case, the class imbalance is not an issue, and it is easy to identify the correct car manufacturer just about every time. On the other hand, I see these two "identical" twins about equally often (no imbalance), and I struggle to tell them apart, since they look so similar. In this case, despite the lack of imbalance, I mix up their names all the time and struggle to distinguish between them. Two Cross Validated links are worth reading. [How to know that your machine learning problem is hopeless?](https://stats.stackexchange.com/questions/222179/how-to-know-that-your-machine-learning-problem-is-hopeless) The gist here is that some problems are just hard, such as hoping to predict the toss of a fair coin. [Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?](https://stats.stackexchange.com/questions/357466/are-unbalanced-datasets-problematic-and-how-does-oversampling-purport-to-he) The gist here is that imbalanced problems are not so inherently different from balanced problems. $^{\dagger}$I am reminded of a quote from the movie My Cousin Vinny, which is a possible spoiler. > They are discussing if getaway vehicles could be mistaken for each other: "One was the Corvette, which could never be confused with the Buick Skylark."
The effect of imbalanced distribution of data
The best way forward here depends highly on the real life question you try to answer. Let's say you want to make a medical diagnosis: 'Sick with exotic Illness X' or 'Not sick with exotic illness X' in this case you might want to catch all instances of being sick as a warning sign and could live with 'false positives'. Conversely your algorithm will be used to predict 'customers likely to cancel soon', in this case it would not be a good idea to proactively talk to 'false positives' i.e. customers who did not plan to cancel about why they might be dissatisfied. In either cases your training set and indeed reality might be severely unbalanced but the cost and consequences of this varies. In the first case I would recommend using balancing methods (like the aforementioned Under-/Oversampling, etc.) to improve recognition of the minority class while in the second case that might be unnecessary. In any case I would practically go on to do the following: Include balancing/sampling in your beauty contest of algorithms and parameters and check the impact on the accuracy of predicting the test set (which is left unbalanced as in the original). This will simply show you whether the inherent bias of the training set is problematic for your real world case (i.e. produces models that never identify the minority class) or not.
121477
1
121594
null
1
44
I am working with [DEAP dataset](https://www.eecs.qmul.ac.uk/mmv/datasets/deap/) and [MNE](https://mne.tools/stable/index.html). I need to find eye-blinks using [find_eog_events()](https://mne.tools/stable/generated/mne.preprocessing.find_eog_events.html#mne.preprocessing.find_eog_events) function. As you can see in the documentation, I am supposed to specify a parameter called `thresh`: > Threshold to trigger the detection of an EOG event. This controls the thresholding of the underlying peak-finding algorithm. Larger values mean that fewer peaks (i.e., fewer EOG events) will be detected. If None, use the default of (max(eog) - min(eog)) / 4, with eog being the filtered EOG signal. I tried to use the default value (i.e., `(max(eog)-min(eog))/4`) but unfortunately many non-blinks artifacts are detected (wrongly). Can you give me an advice?
How should I set threshold parameter in find_eog_events() function?
CC BY-SA 4.0
null
2023-05-12T16:09:54.810
2023-05-17T17:26:03.333
null
null
128575
[ "python" ]
I worked on the same dataset some days ago. You can try: - using the same data length as the video trials (1 minute per trial) - using the vertical EOG channels, or Fp1 and Fp2 from the EEG data - using as thresh: (max(data_filtered) - min(data_filtered)) / 2, where data_filtered is the signal band-pass filtered to 1-10 Hz Bye
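A rough sketch of how that might look with MNE (my own illustration, not from the original answer). It assumes `raw` is an mne.io.Raw object already loaded from the DEAP recordings and that a frontal channel named 'Fp1' is available; double-check the argument names against your MNE version:

```python
import mne

# raw: an mne.io.Raw object already loaded from the DEAP data
proxy_channel = "Fp1"                       # frontal EEG channel used as an EOG proxy

# Band-pass the proxy channel to roughly the blink frequency range
filtered = raw.copy().pick([proxy_channel]).filter(l_freq=1.0, h_freq=10.0)
data = filtered.get_data()[0]

# Threshold suggested above: half of the peak-to-peak amplitude
thresh = (data.max() - data.min()) / 2.0

eog_events = mne.preprocessing.find_eog_events(raw, ch_name=proxy_channel, thresh=thresh)
print(f"Detected {len(eog_events)} blink events")
```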
Find threshold in large dataset
Somehow you have to come up with some sort of numerical classification system for your movie genres. I would start by creating a relationship tree between genres. For example action movies and then action movies with comedy and then action movies with comedy with animation etc. You could develop a whole Forest of trees that relate movie genres to one another. You can then test the genres paths of individuals to compare.
121493
1
121498
null
0
40
Does training for a large number of epochs lead to overfitting? I am concerned about this, as I am getting an accuracy of nearly 1 on both the validation and training datasets when training for 50 epochs.
can training for too long lead to overfitting? I am not sure about the specifics of this
CC BY-SA 4.0
null
2023-05-13T07:55:11.467
2023-05-13T11:45:24.433
null
null
149782
[ "machine-learning", "deep-learning" ]
Yes, training for a large number of epochs can lead to overfitting. This is because after a point the model starts learning noise from the training set. After a certain number of epochs, the majority of what has to be learnt is already learnt, and if you continue past that point, the noise present in the dataset starts affecting the model. Based on your question, you think 50 epochs is not that many, but how many epochs to set also depends on your dataset and what model you are using. If you have a large enough dataset, 50 epochs can be too much. Similarly, if you have a small dataset, 50 epochs might not be enough. On the same note, if you have a neural network with a lot of parameters (for example GPT-2 or GPT-3), you don't need that many epochs, as the model is large and complex enough to learn from the data in just a few epochs. But if you have a relatively smaller neural network, then you might need to increase the epochs so that the model has sufficient iterations to learn from the data. I would advise using learning curves to visualize how your model is performing over a given number of epochs; scikit-learn provides a `learning_curve` function in `sklearn.model_selection` for that purpose, and a sketch of the idea for a deep learning model is below.
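If you happen to be using Keras, one simple sketch (my own illustration; `model`, `X` and `y` are placeholders for your own network and data) is to plot the per-epoch training and validation loss from the fit history, and optionally stop early when the validation loss stops improving:

```python
import matplotlib.pyplot as plt
from tensorflow.keras.callbacks import EarlyStopping

# model, X, y: your compiled Keras model and training data (placeholders here)
stopper = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
history = model.fit(X, y, validation_split=0.2, epochs=50, callbacks=[stopper])

plt.plot(history.history["loss"], label="train loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.legend()
plt.show()
# A validation curve that bottoms out and rises again while the training
# curve keeps falling is the classic sign of training for too long.
```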
Is this overfitting?
So, overfitting occurs when the model is complex enough to fit very well with examples observed in the training data, such that the model is not able to generalise well over unseen instances (validation data). Therefore, for overfitting, we expect the training F1 score to continually decrease, whilst the valid_1 F1 score increases. Here, the plot shows that both training and validation F1-score has stabilised over epochs/iterations. Arguably though, we can see that valid_1 marginally increases as the training F1 score decreases. This can be indicative of (very mild) overfitting.
121507
1
121512
null
0
35
I am training a CNN for multiclass image classification into 4 classes of images. What accuracy metric should I use from Keras? My labels are not one-hot encoded, as I am trying to predict the probability of the different classes.
which Keras accuracy metric for multiclass classification
CC BY-SA 4.0
null
2023-05-13T18:31:22.447
2023-05-14T09:28:43.127
null
null
149782
[ "keras", "image-classification", "accuracy" ]
For multiclass classification you can simply use a categorical cross entropy loss function. Depending on whether or not the values are one-hot encoded you would have to use either the sparse categorical cross entropy loss or the normal categorical cross entropy loss. Another option is f1_score which is a combination of precision_score and recall_score. Below is an implementation in keras which you can use: ``` from tensorflow.keras import backend as K def f1(y_true, y_pred): def recall_m(y_true, y_pred): TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) Positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = TP / (Positives+K.epsilon()) return recall def precision_m(y_true, y_pred): TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) Pred_Positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = TP / (Pred_Positives+K.epsilon()) return precision precision, recall = precision_m(y_true, y_pred), recall_m(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) ```
Which Keras metric for multiclass classification
One option is to implement F1 score in Keras: ``` from tensorflow.keras import backend as K def f1(y_true, y_pred): def recall_m(y_true, y_pred): TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) Positives = K.sum(K.round(K.clip(y_true, 0, 1))) recall = TP / (Positives+K.epsilon()) return recall def precision_m(y_true, y_pred): TP = K.sum(K.round(K.clip(y_true * y_pred, 0, 1))) Pred_Positives = K.sum(K.round(K.clip(y_pred, 0, 1))) precision = TP / (Pred_Positives+K.epsilon()) return precision precision, recall = precision_m(y_true, y_pred), recall_m(y_true, y_pred) return 2*((precision*recall)/(precision+recall+K.epsilon())) ```
121528
1
121530
null
1
66
I created a conda environment previously and it worked fine with python and tensorflow. At that stage I used anaconda. On a fresh install I am using miniconda since I now understand the conda commands better. After installing python, packages like numpy and scipy (nothing exotic) and tensorflow I can run my previous simple neural network code. The package versions are all listed as compatible on the tensorflow site. I also installed pillow from conda. It is visible using conda list, but python returns "No module named 'pillow'". My code to preprocess images no longer works so I need to fix this. So I'm trying to create a new environment to work in. When I try to install python 3.8.0 in the new environment I get "An unexpected error has occurred. Conda has prepared the above report." ``` conda install python==3.8.0 > An unexpected error has occurred. Conda has prepared the above report. > > If submitted, this report will be used by core maintainers to improve > future releases of conda. Would you like conda to send this report to > the core maintainers? [y/N]: y Upload successful. ``` Should I purge conda and start from scratch?
pillow cannot import / conda unexpected error
CC BY-SA 4.0
null
2023-05-14T14:05:02.153
2023-05-18T10:07:40.750
2023-05-14T14:55:18.473
143103
143103
[ "python", "deep-learning", "tensorflow", "anaconda", "conda" ]
Its a bit drastic. Could you try ... ``` conda update --strict-channel-priority --all conda update --all conda update anaconda # this could be removed 'cause you're using miniconda conda update conda conda activate myenv conda install python=3.8.0 ``` I always thought a single `=` was used. If that fails I'd delete the environment and create a new one ``` conda remove -n myenv --all conda create -n newenv python=3.8 conda activate newenv ```
Getting TypeError: expected bytes, Descriptor found while importing tensorflow
I resolved the issue by creating new environment with tensorflow by the below two commands: ``` conda create -n tensorflow_env tensorflow conda activate tensorflow_env ```
121538
1
121556
null
1
108
I am currently working on a binary classification problem using imbalanced data. The algorithm that I am using is random forest. The problem is about predicting whether each sales project will meet its target or not. For example, a sales manager could have multiple sales project running under him. We need ML to predict what is the likelihood that each project will meet its target agreed during start of the project. Each projects runs for 3 to 5 year cycle. So, every year there is a specific target to be met. Based on the year currently the project is in, we would like to know whether project will meet its target upto that specific year. If the project is in 3rd year, we need to find the likelihood for the project to meet its 1st 3 years target (1st, 2nd and 3rd year). So, now my question is on including two columns/feature which contains the value of how much target achieved/units purchased till this time point (3rd year) as well as "target set at the start of the project". Is it okay to include the feature of "total target achieved/units purchased as on date" and "target set at the start of the project"? or it is data leakage or considered biasing the model? we have that target achieved/units purchased as on date info for every project which is updated frequently based on the purchase made. Every project that we are trying to predict the likelihood, will either have achieved 0 % of the target or 10% of the target or 20% of the target or exceeded the target up to that time point etc. So, we have this info for all records. And the output_label column is marked as 1 if they exceed the target and marked as 0 if they have not met the target. So, we feed the model the target set (ex:1000 units should be bought) for a project and also how much they have achieved as of now (ex: 200 units bought already) along with other variables. So, do you think this is a data leakage or considered biasing the model? can I use these two features or not? As I have the data for these two features at the start of my analysis itself. Meaning, if I am extracting data/building model today, I can find out what is the latest value for "target achieved as on date" yesterday and "target set at the start of the project" (using which labels are derived) But what if ML model easily captures the relationship (if target achieved >= target set - high likelihood to meet the target else low likelihood to meet the target). So, in this case do we need ML at all in the first place? Am confused. Of course, along with these features, am trying to few more input variables as well based on historical data. Can you guide me on whether incorporating these two features - `target set` and `target achieved as of date` is okay? But yes, including these features results in better performance of the model. while these two features majorly drive the prediction to 87% of f1 in test data, if I include my additional features, they take upto 93% for f1 in test data. If I exclude these two features, f1 is about 55-60% for minority class. But one thing, I found out was that these two columns are not heavily correlated within themselves and also with the target. So, am not sure how is prediction performance being increased so heavily after these two features Also, important point to note is that my output variable is computed using a formula/rule that involves these two features. However, when I validated the performance on the test data, I don't see any signs of overfitting or drop in performance. 
But yes, these two features drive the prediction all alone contributing to around 87% of f1 score where as other 3-4 predictors add another 5 points. So, am I good to use these features in model building despite they being used to create rule-based label? I don't let the model know the exact formula/rule. So, what do you think?
what qualifies as a data leakage?
CC BY-SA 4.0
null
2023-05-15T01:12:00.417
2023-05-16T10:27:48.000
2023-05-16T10:27:48.000
64876
64876
[ "machine-learning", "classification", "data-mining", "data-analysis", "data-leakage" ]
Data leakage occurs in cases when you train a model with data that is not available for future testing/inference; or when you use same piece of data for training, and then for validation and/or testing. This short [Kaggle article](https://www.kaggle.com/code/alexisbcook/data-leakage) sums it up nicely. If you have a feature (e.g. `target_year_x`) that somehow quantifies how much of the target goals are currently at year `x` achieved, I fear that this could introduce bias in your model, and may technically be data leakage. High values for that feature indicate that the project is close to meeting its goals, and is more likely to meet its target; thus the model would learn (the very obvious thing) that high values for `target_year_x` are highly predictive for the projects' success. My suggestion is to maybe try multiple models, i.e., one model to predict success in first year, one in second, etc. Or, separate model for separate project phases, if you can somehow logically split the projects. If you try that, be careful not to include features that relate to latter phases for the earlier models (e.g., don't include features that provide information about the projects' second year performance, for the model that predicts in the first year). Or, as the other answer by Brian Spiering suggests, which is also a good option IMO, you might want to consider to frame it as a time series prediction problem if you need multiple chronological predictions per project, rather than a binary classification one.
Need help understanding data leakage
- split the data into train and test - fit your imputer based on the train data set (use just fit) - use the fitted imputer and fill missing value in the training dataset (use transform) - Train you decision tree based on the training dataset Now you are done with the training step. start testing a follows - Use imputer trained in step 2 and transform function to replace missing values in the test dataset - Use the trained decision tree for test prediction and evaluating the performance of your model on the unseen test dataset The data should be split at the beginning. As the name of "unseen" shows, we should not use any information from the test dataset when we are training the model; otherwise, it is data leakage.
121542
1
121545
null
0
20
In transformer network ([Vaswani et al., 2017](https://arxiv.org/abs/1706.03762)), the feedforward networks have equation: $\mathrm{FNN}(x) = \max(0, xW_1 + b_1) W_2 + b_2$ where $x \in \mathbb{R}^{n \times d_\mathrm{model}}$, $W_1 \in\mathbb{R}^{d_\mathrm{model} \times d_{ff}}$, $W_2 \in\mathbb{R}^{d_{ff} \times d_\mathrm{model}}$. We know that the biases $b_1$ and $b_2$ are vectors. But, for the equation to work the shape of $b_1$ and $b_2$ must agree, i.e., $b_1 \in\mathbb{R}^{n \times d_{ff}}$ and $b_2 \in\mathbb{R}^{n \times d_\mathrm{model}}$. My question: is it true that $b_1 = \begin{bmatrix} (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}}\\ (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}} \\ \vdots & \vdots & & \vdots \\ (b_1)_{1} & (b_1)_{2} & \dots & (b_1)_{d_{ff}} \end{bmatrix}$ and $b_2 = \begin{bmatrix} (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}}\\ (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}} \\ \vdots & \vdots & & \vdots \\ (b_2)_{1} & (b_2)_{2} & \dots & (b_2)_{d_\mathrm{model}} \end{bmatrix}$ ?
Shape of biases in Transformer's Feedforward Network
CC BY-SA 4.0
null
2023-05-15T08:07:04.157
2023-05-15T09:22:24.040
null
null
149431
[ "neural-network" ]
Well, yes and no. This is a position-wise feed-forward network. $x \in \mathbb{R}^{n\times d_{model}}$, where $n$ is the sequence length. When we apply the matrix multiplication and bias additions, we do so for each individual position. Therefore, the actual multiplication is a vector of dimensionality $1\times d_{model}$ by the $W_1$ matrix. In actual implementation terms, we obtain the result for the $n$ vectors with a single matrix multiplication. The bias is a single vector that is broadcasted in the addition operation. Broadcasting was introduced by numpy and then adopted by deep learning frameworks. [It is defined](https://numpy.org/doc/stable/user/basics.broadcasting.html) as: > The term broadcasting describes how NumPy treats arrays with different shapes during arithmetic operations. Subject to certain constraints, the smaller array is “broadcast” across the larger array so that they have compatible shapes. Broadcasting provides a means of vectorizing array operations so that looping occurs in C instead of Python. It does this without making needless copies of data and usually leads to efficient algorithm implementations. In this case, broadcasting is equivalent to repeating the bias vector $n$ times. This does not mean that the actual vector is $n \times d_{ff}$, but that we apply the addition to each of the $n$ positions.
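A tiny numpy sketch of that broadcasting behaviour (my own illustration with arbitrary small dimensions):

```python
import numpy as np

n, d_model, d_ff = 4, 6, 16        # arbitrary toy sizes
x = np.random.randn(n, d_model)    # one row per position in the sequence
W1 = np.random.randn(d_model, d_ff)
b1 = np.random.randn(d_ff)         # a single bias vector, NOT an (n, d_ff) matrix

out = x @ W1 + b1                  # b1 is broadcast across the n positions
print(out.shape)                   # (4, 16)

# Broadcasting gives the same result as explicitly repeating b1 n times
assert np.allclose(out, x @ W1 + np.tile(b1, (n, 1)))
```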
What is the feedforward network in a transformer trained on?
Let's take the common translation task which transformers can be used for as an example: If you would like to translate English to German one example of your training data could be ("the cat is black", "die Katze ist schwarz"). In this case your target is simply the German sentence "die Katze ist schwarz" (which is of course not processed as a string but using embeddings incl. positional information). This is what you calculate your loss on, run backprop on, and derive the gradients as well as weight updates from. Accordingly, you can think of the light blue feed forward layers of a transformer [](https://i.stack.imgur.com/9YU8q.jpg) as a hidden layer in regular feed forward network. Just as for a regular hidden layer its parameters are updated by running backprop based on transformer $loss(output,target)$ with target being the translated sentence.
121548
1
121633
null
0
28
I want to make an RNN that has, for example, more hidden layers or layer normalization. I know that it is possible to make a custom RNN by subclassing nn.Module, but with this approach it is not possible to do efficient batch processing with a PackedSequence object (with variable-length sequences) in the same way and with the same efficiency as torch.nn.RNN. I thought maybe the solution could be to subclass nn.RNN, but I don't know how to do that.
How to make an RNN model in PyTorch that has a custom hidden layer(s) and that is compatible with PackedSequence
CC BY-SA 4.0
null
2023-05-15T12:53:03.943
2023-05-19T09:42:43.183
null
null
149882
[ "machine-learning", "rnn", "pytorch" ]
Assuming you're using python it is possible to do (relatively) efficient batch processing with a `PackedSequence` object, here is some example code; ``` import torch import torch.nn as nn import torch.nn.functional as F from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence class CustomRNN(nn.Module): def __init__(self, input_size, hidden_size, num_layers, bidirectional=False): super(CustomRNN, self).__init__() self.num_layers = num_layers self.bidirectional = bidirectional self.rnn = nn.RNN(input_size, hidden_size, num_layers, bidirectional=bidirectional, batch_first=True) self.layer_norm = nn.LayerNorm(hidden_size * 2 if bidirectional else hidden_size) def forward(self, x, lengths): packed_seq = pack_padded_sequence(x, lengths, batch_first=True, enforce_sorted=False) output, hidden = self.rnn(packed_seq) output, _ = pad_packed_sequence(output, batch_first=True) output = self.layer_norm(output) return output, hidden ``` Here, CustomRNN takes in the `input_size`, `hidden_size`, `num_layer`s, and `bidirectional` parameters just like nn.RNN. In the forward method, the input sequence `x` and corresponding lengths are first packed into a `PackedSequence` object using `pack_padded_sequence`. The packed sequence is then passed through the RNN and the output is obtained. The output is then unpacked using `pad_packed_sequence` and layer normalization is applied to the output using nn.LayerNorm. Finally, the normalized output and hidden state are returned. With this implementation, you can efficiently process variable-length sequences using a `PackedSequence` object while also incorporating layer normalization into the RNN.
RNN model with 3 hidden layers
The input shape should be a 3D array --> (samples, timesteps, features). You need to add return_sequences=True to the first two RNN layers. ``` model = Sequential() model.add(SimpleRNN(1000, input_shape=(1, 320*15), activation='relu', return_sequences=True)) model.add(SimpleRNN(1000, return_sequences=True)) model.add(SimpleRNN(1000)) ```
121577
1
121578
null
2
74
I'm working on a Classification problem as a side project and I'm receiving results contrary to what I'd expect. With 100,000 records, each with 7 components for X, the model is performing much better with 70% of the data being used to test, rather than what I'd expect: 70% training split to work better. Has anyone had this before or know why this could be? I'm wondering if maybe the large size of the data is worsening the model somehow.
Random Forest Classification model performing much better with 70:30 TEST:TRAIN rather than the opposite
CC BY-SA 4.0
null
2023-05-16T15:49:54.960
2023-05-16T17:57:34.340
null
null
149919
[ "machine-learning", "classification", "random-forest" ]
Is this data imbalanced, like 95% target A versus 5% target B? If it is, I would suggest that the test set sample was a poor representation of the under-represented target to be classified. Could you augment the data set to increase its size, e.g. if it's a time series use other data points, or for image recognition rotate the images or shift the hues, contrast, orientation? Dealing with imbalance has alternative solutions if that's the issue. --- From the comments: The issue is 92% for a 30:70 train-test split and 80% for a 70:30 train-test split. You could simply say 80% is good enough and proceed with the orthodox 70:30 split. If you are proceeding with a 30:70 split you would need to be clear about that; if it's a manuscript the reviewer would likely return it. Personally, I don't think it's cool. I get the impression that 3 of the targets under classification have approximately equal proportions (just guessing). The issue is whether there is a minority part of the classification which is getting misrepresented in the testing split. There are two approaches I would use (as a data scientist): - Reduce the problem to the 3 majority categories and see if the discrepancy between 30:70 and 70:30 continues - Augment the data and use a standard 70:30 split; the 30% test set is then more like the original 70% due to augmentation. My suspicion is that in point 1 the discrepancy will disappear; thus you've identified the problem and can consider whether it's worth moving to point 2. If that is correct, the question becomes what the fourth category represents and how important it is to you. For example, in cancer that 4th category (the smallest) could be really important because it carries the highest mortality. If it's just not important - a minority variant that no-one cares about - you just state that the classification for this category needs further development (which might never happen). It's area-specific. In my problems I can't discount a minority classification, but that's because it might become the variant that takes over the world and I've just missed it (I do evolutionary selection). In your problem, and I get the impression in many business-related analytics, you can.
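One quick way to check the "poor representation in the test split" hypothesis is to compare class proportions across splits and to use a stratified split. A minimal scikit-learn sketch (my own illustration; the random X and y here stand in for your actual features and labels):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X, y: your features and labels (random placeholders here)
X = np.random.randn(1000, 7)
y = np.random.choice(["a", "b", "c", "d"], size=1000, p=[0.4, 0.3, 0.25, 0.05])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0
)

# Stratification keeps the class mix (roughly) identical in train and test
for name, labels in [("train", y_tr), ("test", y_te)]:
    values, counts = np.unique(labels, return_counts=True)
    print(name, dict(zip(values, counts / counts.sum())))
```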
Random Forest Classifier gives very high accuracy on test set - overfitting?
`rand_forest.fit(X,y)` Why are you using the whole data set for training? You are training on the test set and then evaluating the performance on it again. In your code, I didn't see you actually use the training set you created.
121583
1
121585
null
3
59
I am doing a binary classification task with Keras and my model directly outputs either 0 or 1. Typically I compile the model like something below: ``` model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), metrics=['accuracy']) ``` The dataset I have is imbalanced, only ~10% of samples are positive. I am aware that in this case accuracy is not a good metric and I can see a 90% accuracy even if the model is the same as random guessing. The problem is, seems Keras does not provide F1 score as an alternative in its `metrics` parameter of `compile()` method (the list of method Keras provides is [here](https://www.tensorflow.org/api_docs/python/tf/keras/metrics)). What else can I pass to the `metrics` parameter so that I can have a better understanding of the model's performance during the training progress? EDIT1 To make the question more complete, I included a sample model definition below: ``` input_shape = image_size + (3,) num_classes = 2 model = Sequential([ layers.Rescaling(1./255, input_shape=input_shape), layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'), layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'), layers.MaxPooling2D(pool_size=(2, 2),strides=(2, 2)), layers.Dense(4096, activation='relu'), layers.Dense(4096, activation='relu'), layers.Dense(num_classes, activation='softmax') ]) model.build((None,) + input_shape) optimizer = keras.optimizers.Adam() model.compile( optimizer=optimizer, loss=tf.keras.losses.SparseCategoricalCrossentropy(), metrics=['accuracy'] ) ``` EDIT2 Following @noe's answer and some posts [here](https://stackoverflow.com/questions/48851558/tensorflow-estimator-valueerror-logits-and-labels-must-have-the-same-shape), I can make AUC work now. A few parameters must be set correctly: - layers.Dense(1, activation='sigmoid') - loss=tf.keras.losses.BinaryCrossentropy(), - metrics=['AUC'] Among them, `layers.Dense(1, activation='sigmoid')` seems to be most critical, we need to use `sigmoid()` to convert the output to a range of (0, 1) to make AUC work. ``` input_shape = image_size + (3,) num_classes = 2 model = Sequential([ layers.Rescaling(1./255, input_shape=input_shape), layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'), layers.Conv2D(filters=64, kernel_size=(3, 3), activation='relu'), layers.MaxPooling2D(pool_size=(2, 2),strides=(2, 2)), layers.Dense(4096, activation='relu'), layers.Dense(4096, activation='relu'), layers.Dense(1, activation='sigmoid') ]) model.build((None,) + input_shape) optimizer = keras.optimizers.Adam() model.compile( optimizer=optimizer, loss=tf.keras.losses.BinaryCrossentropy(), metrics=['AUC'] ) ```
Which metric to use for imbalanced data in TensorFlow/Keras
CC BY-SA 4.0
null
2023-05-17T01:58:20.377
2023-06-01T03:11:54.767
2023-06-01T03:11:54.767
149935
149935
[ "keras", "tensorflow", "metric" ]
AUC: Area Under the ROC Curve. Check some references: [1](https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc), [2](https://stats.stackexchange.com/q/260164/40048), [3](https://datascience.stackexchange.com/a/94654/14675). AUC = 0.5 means the classifier is random guessing. AUC = 1 is the perfect classifier.
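If you want to sanity-check the value Keras reports, you can also compute AUC offline with scikit-learn on the predicted probabilities; a minimal sketch, where `model` is the sigmoid-output model from your EDIT2 and `X_val`, `y_val` are placeholders for your held-out data:

```python
from sklearn.metrics import roc_auc_score

# X_val, y_val are placeholders for your held-out images and 0/1 labels
y_prob = model.predict(X_val).ravel()   # sigmoid outputs in (0, 1)
print("validation AUC:", roc_auc_score(y_val, y_prob))
```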
How to do imbalanced classification in deep learning (tensorflow, RNN)?
This has already been answered both in [stackoverflow](https://stackoverflow.com/questions/35155655/loss-function-for-class-imbalanced-binary-classifier-in-tensor-flow) and [crossvalidated](https://stats.stackexchange.com/questions/197273/class-balancing-in-deep-neural-network). The suggestion in both cases was to add class weights to the loss function, by multiplying the logits: `loss(x, class) = weights[class] * (-x[class] + log(\sum_j exp(x[j])))` For example, in TensorFlow you could do: ``` ratio = 31.0 / (500.0 + 31.0) class_weight = tf.constant([ratio, 1.0 - ratio]) logits = ... # shape [batch_size, 2] weighted_logits = tf.multiply(logits, class_weight) # shape [batch_size, 2] xent = tf.nn.softmax_cross_entropy_with_logits( logits=weighted_logits, labels=labels, name="xent_raw") ```
121586
1
121592
null
0
40
I'm trying to find conferences that publish applied data science papers. I'm only interested in top-ranked conferences, and I notice quite a number of them are quite theoretical, e.g. IJCAI, NIPS, etc. Thanks
Where can I find the applied data science research papers?
CC BY-SA 4.0
null
2023-05-17T05:11:30.340
2023-05-17T09:32:49.577
null
null
121222
[ "machine-learning", "research", "artificial-intelligence" ]
If you're looking for conferences that focus on applied data science and have a high ranking, there are several options you can consider. While it's true that some conferences may have a more theoretical emphasis, there are also reputable conferences that highlight practical and applied aspects of data science. Here are a few suggestions: - ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD): KDD is one of the premier conferences in data mining and knowledge discovery. It covers a wide range of topics including applied data science, machine learning, data mining, and big data analytics. - IEEE International Conference on Data Mining (ICDM): ICDM is another top conference in the field of data mining. It brings together researchers and practitioners to discuss the latest advancements in data mining and its applications. - International Conference on Machine Learning (ICML): While ICML does have a theoretical focus, it also accepts and features applied data science papers. It is a leading conference in the machine learning community and covers a broad range of topics. - International Joint Conference on Artificial Intelligence (IJCAI): IJCAI is a prestigious conference in the field of artificial intelligence. While it does include theoretical research, it also accepts and showcases applied data science papers. - International Conference on Data Science and Advanced Analytics (DSAA): DSAA focuses specifically on data science and advanced analytics. It welcomes submissions related to practical applications, data-driven solutions, and real-world case studies. These conferences are known for their rigorous review process and attract top researchers and practitioners in the field. Keep in mind that acceptance rates for these conferences can be highly competitive, so ensure that your work aligns well with the conference's scope and requirements. Additionally, you can also explore domain-specific conferences in areas such as healthcare, finance, or industry-specific data science conferences. These conferences often highlight applied research and real-world applications within their respective domains. Remember to check the websites of these conferences for the most up-to-date information on submission deadlines, conference dates, and paper requirements.
Finding research papers for a dataset
Super important question. The reason is that this is not the original source. If you go to the data -> meta data -> sources, you can see the source is: `JING TENG, January 18, 2019, "SEER Breast Cancer Data", IEEE Dataport, doi: https://dx.doi.org/10.21227/a9qy-ph35. https://ieee-dataport.org/open-access/seer-breast-cancer-data` Then searching google datasets for the DOI number, we can click through onto the google scholar link to get the following: [https://scholar.google.com/scholar?q=%22ieee%20dataport%20org%20open%20access%20seer%20breast%20cancer%20data%22](https://scholar.google.com/scholar?q=%22ieee%20dataport%20org%20open%20access%20seer%20breast%20cancer%20data%22)
121666
1
121667
null
0
42
Upon Googling "Maxpool ReLU order" or similar, I've found many people saying this order does not effect the result, i.e.: ``` MaxPool(Relu(x)) = Relu(MaxPool(x)) ``` Here are a small number of examples of people saying this: [https://stackoverflow.com/questions/35543428/activation-function-after-pooling-layer-or-convolutional-layer](https://stackoverflow.com/questions/35543428/activation-function-after-pooling-layer-or-convolutional-layer) [https://github.com/tensorflow/tensorflow/issues/3180](https://github.com/tensorflow/tensorflow/issues/3180) [https://www.quora.com/In-most-papers-I-read-the-CNN-order-is-convolution-relu-max-pooling-So-can-I-change-the-order-to-become-convolution-max-pooling-relu](https://www.quora.com/In-most-papers-I-read-the-CNN-order-is-convolution-relu-max-pooling-So-can-I-change-the-order-to-become-convolution-max-pooling-relu) [https://towardsdatascience.com/convolution-neural-networks-a-beginners-guide-implementing-a-mnist-hand-written-digit-8aa60330d022](https://towardsdatascience.com/convolution-neural-networks-a-beginners-guide-implementing-a-mnist-hand-written-digit-8aa60330d022) To be clear, I'm completely aware that there could be a slight speed difference, but what I'm asking about here is the computation result, not the speed. For example, consider the following: [](https://i.stack.imgur.com/j3U58.jpg) How can the general consensus be that the ReLU/MaxPool order does not effect the computation result when it's easy to come up with a quick example where is does appear to effect the computation result? For what it's worth, ChatGPT seems to go against the general consensus: [](https://i.stack.imgur.com/vMWOr.png)
Why do people say ReLU vs MaxPool order does not change the result?
CC BY-SA 4.0
null
2023-05-20T19:40:31.840
2023-05-20T20:10:44.240
null
null
50921
[ "neural-network" ]
- ChatGPT often speaks bullshit. Do not rely on it. - Your example is wrong. On the second computation, you are computing the absolute value, not ReLU: ReLU(-5) = 0 and ReLU(-3) = 0. The result is 2, which is the same as the first computation. - $\max(\mathrm{ReLU}(x_1),\ldots,\mathrm{ReLU}(x_n)) = \mathrm{ReLU}(\max(x_1,\ldots,x_n))$, which holds because ReLU is monotonically non-decreasing.
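For what it's worth, the identity is easy to check numerically; a small NumPy sketch (the values are arbitrary):

```python
import numpy as np

relu = lambda a: np.maximum(a, 0)
x = np.array([-5.0, -3.0, 2.0, 1.5])

print(np.max(relu(x)))   # ReLU first, then max -> 2.0
print(relu(np.max(x)))   # max first, then ReLU -> 2.0
```

Both orders give the same value for any input vector, including all-negative ones (where both give 0).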
Is Maxout the same as max pooling?
They are almost identical: > The second key reason that maxout performs well is that it improves the bagging style training phase of dropout. Note that the arguments in section 7 motivating the use of maxout also apply equally to rectified linear units (Salinas & Abbott, 1996; Hahnloser, 1998; Glorot et al., 2011). The only difference between maxout and max pooling over a set of rectified linear units is that maxout does not include a 0 in the max. Source: [Maxout Networks](http://arxiv.org/pdf/1302.4389v4.pdf).
121672
1
121674
null
0
33
I have a dataframe with different `dtypes` like int, float, object, datetime etc. I am performing `data cleaning` to list or find duplicate column names in the dataframe. The duplicate criteria are as below: - same column names, or - columns having the same data values I tried using the transpose approach `df.T.duplicated()` to list duplicate column names, but it seems slow for a big dataframe. I have come to know that we can use `pivot`, `pivot_table` or `corr` to list duplicate column names. Can someone explain how to use and interpret them, or is there any other way to do it?
Pandas: To find duplicate columns
CC BY-SA 4.0
null
2023-05-21T11:00:02.427
2023-05-21T11:49:29.900
null
null
148603
[ "pandas", "data-cleaning" ]
To list duplicate columns by name in a Pandas DataFrame, you can call the [duplicated](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.duplicated.html) method on the `.columns` property: `df.columns.duplicated()` This should be rather fast, unless you have an enormous number of columns in your data. Finding duplicate columns by values is a bit tricky, and definitely slower. I wouldn't use any of `pivot`, `pivot_table` or `corr`, as they are slow as well for large data sets. Of the three, `corr` would be the most straightforward to detect duplicate columns - two identical columns would have a correlation of one. I've found this StackOverflow question to contain some ideas, and I think that this [answer](https://stackoverflow.com/a/32961145/15052008) is what you are looking for.
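If it helps, here is a rough sketch of the value-based check along the lines of the linked answer: hash each column first and only run the expensive element-wise comparison on columns whose hashes collide (`df` is a placeholder for your dataframe; this is illustrative, not benchmarked):

```python
import pandas as pd
from collections import defaultdict

def duplicate_value_columns(df: pd.DataFrame):
    """Return pairs of column names whose values are identical."""
    groups = defaultdict(list)
    for i in range(df.shape[1]):
        s = df.iloc[:, i]
        # cheap fingerprint of the column; collisions are re-checked below
        key = pd.util.hash_pandas_object(s, index=False).sum()
        groups[key].append(i)
    pairs = []
    for idxs in groups.values():
        for a in range(len(idxs)):
            for b in range(a + 1, len(idxs)):
                i, j = idxs[a], idxs[b]
                if df.iloc[:, i].equals(df.iloc[:, j]):
                    pairs.append((df.columns[i], df.columns[j]))
    return pairs
```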
Finding the duplicate values between all columns and sort in new column with Pandas?
Given your input data is saved in a variable `df`, I count the values which occur in all 4 unique columns as follows: ``` import pandas as pd import numpy as np output = ( df .melt() .drop_duplicates() .groupby("value") .agg(count=("value", "count")) .reset_index() ) output["similarity"] = np.where(output["count"] == 4, "SIM", "NON-SIM") output = output.pivot(columns="similarity", values="value") print(output) similarity NON-SIM SIM 0 NaN a 1 b NaN 2 c NaN 3 NaN d 4 dc NaN 5 dx NaN 6 f NaN 7 g NaN 8 NaN s 9 t NaN 10 w NaN 11 x NaN 12 y NaN ```
121683
1
121684
null
0
23
Can I consider 20% / 80% as a balanced dataset? My target variable ratio is 62% and 37%. I hope it is a balanced dataset, but please let me know if I am wrong. I would also like to know: what is the minimum ratio for a data set to be considered balanced for a classification algorithm?
What is the minimum ratio to consider the data set as balanced for the classification algorithm?
CC BY-SA 4.0
null
2023-05-22T07:15:19.383
2023-05-22T07:24:25.280
null
null
150091
[ "machine-learning", "python", "classification", "pandas" ]
Here's a [table](https://developers.google.com/machine-learning/data-prep/construct/sampling-splitting/imbalanced-data) for 'degree of imbalance' for a binary classification problem. If you have multiple classes in your data set, I'd assume that if any of the classes deviates with a similar degree as explained in that link, you have an imbalanced data set (or, at the very least, imbalanced representation of that minority class): |Degree of imbalance |Proportion of Minority Class | |-------------------|----------------------------| |Mild |20-40% of the data set | |Moderate |1-20% of the data set | |Extreme |<1% of the data set |
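If you want to check where your own target falls in this table, pandas can give you the class proportions directly (assuming your dataframe is `df` and the target column is called `target`; both names are placeholders):

```python
import pandas as pd

# e.g. roughly 0.62 / 0.37 as in the question -> at most a mild imbalance
print(df["target"].value_counts(normalize=True))
```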
Does classification of a balanced data-set lead to any problem?
Actually, I guess it highly depends on the real data-set and its distribution. I guess the paper has referred to that is that on occasions that the distribution of each class varies, your model won't work well because of changing the distribution of each class. In cases like a disease prediction where the number of each class varies for different places, a model that is trained in the U.S won't work in African countries at all. The reason is that the distribution of classes has been changed. So in such cases that usually the negative and positive classes are not balanced in practice, balancing them will cause the problem of distribution changes. On these occasions, people usually use the real data-set which is not balanced and use `F1` score for evaluation.
121694
1
121695
null
2
60
I am working with a dataset that comes in with nonsense field names in the first row, with the actual field names in the second row. Currently I'm using this script: ``` import pandas as pd import matplotlib.pyplot as plt df = pd.read_csv('CAR_551.csv') df1 = df.iloc[1:,:] print(df1.head(10)) df1.info() ``` The dataset appears to be successfully updated. When I print the dataset the first row is gone, however when I call df1.info() it still returns the original headers. Example output: ``` # Column Non-Null Count Dtype --- ------ -------------- ----- 0 StartDate 10 non-null object 1 EndDate 10 non-null object 2 Status 10 non-null object 3 IPAddress 10 non-null object 4 Progress 10 non-null object 5 Duration (in seconds) 10 non-null object 6 Finished 10 non-null object 7 RecordedDate 10 non-null object 8 ResponseId 10 non-null object 9 RecipientLastName 0 non-null object 10 RecipientFirstName 0 non-null object 11 RecipientEmail 0 non-null object 12 ExternalReference 0 non-null object 13 LocationLatitude 10 non-null object 14 LocationLongitude 10 non-null object 15 DistributionChannel 10 non-null object 16 UserLanguage 10 non-null object 17 Q6#1_1 10 non-null object 18 Q6#1_2 10 non-null object 19 Q6#1_3 10 non-null object 20 Q6#1_4 10 non-null object 21 Q6#1_5 10 non-null object 22 Q6#1_6 10 non-null object 23 Q6#1_7 10 non-null object 24 Q6#1_8 10 non-null object 25 Q6#1_9 10 non-null object 26 Q6#2_1 10 non-null object 27 Q6#2_2 10 non-null object 28 Q6#2_3 10 non-null object 29 Q6#2_4 10 non-null object 30 Q6#2_5 10 non-null object 31 Q6#2_6 10 non-null object 32 Q6#2_7 10 non-null object 33 Q6#2_8 10 non-null object 34 Q6#2_9 10 non-null object 35 Q8 10 non-null object 36 Q9 10 non-null object dtypes: object(37) memory usage: 3.0+ KB ``` These are all field names from the first row of the dataset. Is there a way to get this to update and show the field names from row 2 of the data? Can someone explain the underlying mechanism of what's going on here? How is the first row even stored in the new dataframe at all if I specified only the second row on should be included when I declared the variable?
Why does pandas.dataframe.info() not update when I delete the first row of a dataset?
CC BY-SA 4.0
null
2023-05-22T13:50:24.953
2023-05-22T23:07:01.463
null
null
76182
[ "pandas", "data-cleaning", "dataframe" ]
The solution is better achieved via ``` df = pd.read_csv('CAR_551.csv', skiprows=[0]) ``` --- I checked the two solutions. Solution 1 ``` df.columns = df.iloc[0] ``` There's a problem here because a blank line (or a line of junk) is now carried into the dataframe. The outcome will depend on what the first row of the csv is; at a minimum it will append a blank line above the dataframe. Thus, `df.to_csv('myfile')` will start with a blank line, before the column headers. It's not clear what this does to internal dataframe operations. More seriously, it can also retain elements of the first junk row of the csv file. In my example a junk column name at iloc[24] (axis=1) was retained, so the behaviour appears unpredictable. It would also cause problems if the output were reimported, because the same problem of a junk first row continues. Solution 2 ``` df = pd.read_csv('CAR_551.csv', skiprows=[0]) ``` simply doesn't import the junk line, so no 'phantom' row becomes part of the dataframe. The written output is clean: the first line is the header.
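For reference, an equivalent option is to tell pandas directly that the header is on the second line; under the layout described in the question, both calls below should give the same columns (a small sketch, untested against your actual file):

```python
import pandas as pd

df_a = pd.read_csv('CAR_551.csv', skiprows=[0])  # drop the first physical line
df_b = pd.read_csv('CAR_551.csv', header=1)      # use the second line as header

print(df_a.columns.equals(df_b.columns))  # expected True for this layout
```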
Why is pandas corr() deleting columns?
Pearson's correlation is the default correlation used with Pandas corr method. Categorical features ( not numerical ) are ignored during this process due to their nature of not being continuous. It makes no sense to say if categorical_var1 is increased by one , categorical_var2 also increases by X ( X's value depends on the correlation between the 2 variables ). That's why you only see numerical variables! There are other statistical tests you can apply to categorical variables to better understand them. Note : some columns may appear as numerical at first glance, but a string may be there due to an input mistake, or simply when the formatting of the file was done, that column type was set to 'Object'. Make sure to test the values in your supposedly numerical columns and apply astype to set them back to int or float
121722
1
121741
null
0
15
I would like to train object detection model (e.g. YOLO) for images that contain anomalies. The anomalies are essentially the holes in a surface of different sizes. How do I label correctly such anomalies? Do I put the bounding boxes over each small hole or should I group smaller anomalies into one?
Best practice labeling grouped anomalies for object detection
CC BY-SA 4.0
null
2023-05-24T09:44:55.393
2023-05-25T08:04:16.237
2023-05-24T11:27:04.383
14529
14529
[ "deep-learning", "cnn", "object-detection", "anomaly-detection", "labels" ]
When labeling anomalies in images, it's important to be consistent and clear in your approach. In the case of holes in a surface, you have a few options for labeling. One approach, like you mentioned, is to label each individual hole with its own bounding box, which allows for more precise detection of each anomaly and can be useful if you need to know the location and size of each hole. Alternatively, you could group smaller anomalies together into one bounding box. This approach may be more efficient and easier to label, but may result in less precise detection of individual anomalies. Ultimately, the approach you choose will depend on your specific use case and the level of precision required for detection. Do you have any more information about the holes and your end goal? Being more specific or providing examples may help others answer your question.
Detecting anomalies with neural network
From the formulation of the question, I assume that there are no "examples" of anomalies (i.e. labels) whatsoever. With that assumption, a feasible approach would be to use [autoencoders](https://en.wikipedia.org/wiki/Autoencoder): neural networks that receive as input your data and are trained to output that very same data. The idea is that the training has allowed the net to learn representations of the input data distributions in the form of latent variables. There is a type of autoencoder called [denoising autoencoder](https://en.wikipedia.org/wiki/Autoencoder#Denoising_autoencoder), which is trained with corrupted versions of the original data as input and with the uncorrupted original data as output. This delivers a network that can remove noise (i.e. data corruptions) from the inputs. You may train a denoising autoencoder with the daily data. Then use it on new daily data; this way you have the original daily data and an uncorrupted version of those very same data. You can then compare both to detect significant differences. The key here is which definition of significant difference you choose. You could compute the euclidean distance and assume that if it surpasses certain arbitrary threshold, you have an anomaly. Another important factor is the kind of corruptions you introduce; they should be as close as possible to reasonable abnormalities. Another option would be to use [Generative Adversarial Networks](https://en.wikipedia.org/wiki/Generative_adversarial_networks). The byproduct of the training is a discriminator network that tells apart normal daily data from abnormal data.
121745
1
121747
null
0
33
The task is to predict sentiment from 1 to 10 based on Russian reviews. The training data size is 20000 records, of which 1000 were preserved as a validation set. The preprocessing steps included punctuation removal, digit removal, Latin character removal, stopword removal, and lemmatization. Since the data was imbalanced, I decided to downsample it. After that, TF-IDF vectorization was applied. At the end, I got this training dataset: [](https://i.stack.imgur.com/EWo0m.png) The next step was the validation set TF-IDF transformation: [](https://i.stack.imgur.com/HAv7x.png) As a classifier model, I chose MultinomialNB (I read it is useful for text classification tasks and sparse data). The training data fit was pretty quick: ``` # TODO: create a Multinomial Naive Bayes Classificator clf = MultinomialNB(force_alpha=True) clf.fit(X_res, y_res.values.ravel()) ``` But the problem was in model evaluation part: ``` # TODO: model evaluation print(clf.score(X_res, y_res.values.ravel())) print(clf.score(X_val, y_val.values.ravel())) y_pred = clf.predict(X_val) print(precision_recall_fscore_support(y_val, y_pred, average='macro')) ``` Output: ``` 0.9352409638554217 0.222 (0.17081898127154763, 0.1893033502842826, 0.16303596541199034, None) ``` It is obvious that the model is overfitting, but what do I do? I tried to use SVC, KNeighborsClassifier, DecisionTreeClassifier, RandomForestClassifier, and GaussianNB, but everything remained the same. I tried to play around with the MultinomialNB hyperparameter `alpha` but `force_alpha=True` option is the best so far.
Why my sentiment analysis model is overfitting?
CC BY-SA 4.0
null
2023-05-25T10:00:44.040
2023-05-26T06:06:23.947
2023-05-26T06:06:23.947
150186
150186
[ "classification", "nlp", "text-classification", "sentiment-analysis", "tfidf" ]
There might be multiple reasons for the overfitting, some of which are: 1.) Scale the data. 2.) You have not mentioned which parameter values you selected in the Tfidf vectorizer. Some of them might help to reduce overfitting; `ngram_range` and `max_features` are 2 which you can play around with. 3.) Make sure you are using `fit_transform` on the train set only and not on the test set, for both tfidf and scaling. Use only `transform` for the test set (see the sketch below). 4.) Try to tune the hyperparameters of other models such as `RandomForest` and `SVC`. 5.) Use other word embedding techniques such as `Word2Vec`, `Glove` or `Fasttext`, as they capture word context as well, as opposed to just word frequency (which is what happens with tfidf). 6.) Try different models. You are just testing 4-5 models when in fact there are so many classification models out there. Try as many as you can to see which one gives the best result. 7.) Last but not least, increase the data size. Since you are downsampling the data (I don't know by how much), this also might be a factor in overfitting. Try to implement all of the above points and let me know whether the results improve. Cheers!
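As a minimal, illustrative sketch of points 2 and 3 (names like `train_texts`, `val_texts` are placeholders for your preprocessed reviews, and the parameter values are just starting points to tune):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report

pipe = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)),
    ("clf", MultinomialNB(alpha=0.5)),
])

pipe.fit(train_texts, y_train)      # fit_transform happens on the train set only
y_pred = pipe.predict(val_texts)    # only transform is applied to the validation set
print(classification_report(y_val, y_pred))
```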
Why is my model overfitting?
This isn't overfitting. You're reporting cross-validation scores as very high (and are not reporting training set scores, which are presumably also very high); your model is just performing very well (on unseen data). That said, you should be asking yourself if something is wrong. There are two common culprits that come to mind: - One of your features is very informative, but wouldn't be available at prediction time ("future information", or in the extreme case, you accidentally left the target variable in the independent variable dataframe) - Your train-test splits don't respect some grouping (in the extreme case, rows of the frame are repeated and show up in both training and test folds). Otherwise, it's entirely possible your problem is just easily solved by your model. See also [Why does my model produce too good to be true output?](https://datascience.stackexchange.com/q/84567/55122) [Quote on too good to be true model performance?](https://stats.stackexchange.com/q/562808/232706)
121759
1
121763
null
0
19
From what I understand my code is telling me that my base model is performing at 96% on it's training data, 55% on it's test data. And my SMOTE model is performing at ~96% on both. From my understanding, the SMOTE model performing 96% on it's test data implies that on any new data it is given, it should perform at around 96%. However when I introduce a brand new dataset of identical data from a different time period, it's performing significantly worse. Is anyone able to tell me if there's something I've missed/overlooked with the code below? If not, I know to look into the new dataset I've added to look for problems. My only possible lead at the moment is that I've used the SKLearn.preprocessing OrdinalEncoder for both the main & brand new dataset to turn continuous non-integer codes into integers, which I wonder may be causing a mis-match between datasets. I've attached the code for the main model below. ``` df = df.filter(["Feature1","Feature2","Feature3","Feature4", "Feature5","Feature6","Feature7", "Feature8","TargetClassification"]) y = df["TargetClassification"].values X = df.drop("TargetClassification",axis=1) sm = SMOTE(random_state=42) X_sm, y_sm = sm.fit_resample(X,y) XB_train, XB_test, yB_train, yB_test = train_test_split(X,y,train_size=0.7) XS_train, XS_test, yS_train, yS_test = train_test_split(X_sm,y_sm,train_size=0.7) my_SMOTE_model = RandomForestClassifier(n_estimators=100,criterion="gini",random_state=1,max_features=4) my_BASE_model = RandomForestClassifier(n_estimators=100,criterion="gini",random_state=1,max_features=4) my_BASE_model.fit(XB_train,yB_train) y_pred = my_BASE_model.predict(X) BASE_train_acc = round(my_BASE_model.score(XB_train, yB_train)*100,2) print(f"Base model training accuracy: {BASE_train_acc}") my_SMOTE_model.fit(X_sm,y_sm) y_sm_pred = my_SMOTE_model.predict(X_sm) SMOTE_train_acc = round(my_SMOTE_model.score(XS_train,yS_train)*100,2) print(f"SMOTE model training accuracy: {SMOTE_train_acc}") # Prints Base as 96.05, SMOTE as 96.38 yB_test_prediction = my_BASE_model.predict(XB_test) yS_test_prediction = my_SMOTE_model.predict(XS_test) BASE_test_acc = accuracy_score(yB_test,yB_test_prediction) SMOTE_test_acc = accuracy_score(yS_test,yS_test_prediction) print(f"Base model test accuracy: {BASE_test_acc}") print(f"SMOTE model test accuracy: {SMOTE_test_acc}") #Prints Base as 54.9%, SMOTE as 96.5% ``` Thank you for any help
Struggling with understanding RandomForest model with SMOTE
CC BY-SA 4.0
null
2023-05-25T18:17:03.183
2023-05-27T00:08:56.613
2023-05-25T19:46:22.587
149919
149919
[ "machine-learning", "python", "scikit-learn", "random-forest", "smote" ]
Using different ordinal encoders [is certainly not good](https://stackoverflow.com/q/48692500/10495893), but you've also made the error of applying SMOTE before the train-test split ([[1]](https://datascience.stackexchange.com/q/15630/55122), [[2]](https://datascience.stackexchange.com/q/104428/55122)), making the test score optimistically biased. Also, [accuracy is not a great metric](https://stats.stackexchange.com/q/312780/232706), especially in imbalanced settings. Finally, "identical data from a different time period" may well display significantly different relationship between the independent and dependent variable, so some degradation is not unexpected.
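A minimal sketch of one way to avoid that leakage — applying SMOTE only inside the training folds via an imbalanced-learn pipeline; `X` and `y` are the features/target from your code and the hyperparameters are illustrative:

```python
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=1)),
])

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
# balanced accuracy is less misleading than plain accuracy on imbalanced data
scores = cross_val_score(pipe, X, y, cv=cv, scoring="balanced_accuracy")
print(scores.mean(), scores.std())
```

The imblearn pipeline only resamples during `fit`, so the evaluation folds stay untouched by SMOTE.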
Overfitting for minority class after SMOTE w/ random forests
How well the SMOTE algorithm works depends on the data set you have. If you have severe data imbalance, like in your case, SMOTE may not be able to help when the variation within the minority class is very high and the similarity between the two classes is also very high. But how do you know if this is the case? Try duplicating samples from the minority class, train a non-linear SVM and check the results: if the classification accuracy is very low, then this is the case. SMOTE uses kNN to create new samples, but if the variation within the minority class is very high, SMOTE will interpolate between samples that are not real neighbours. To be honest, there is no clear solution for this problem, but I can suggest the following: 1. Try borderline-SMOTE: it is a modified version of the SMOTE algorithm. 2. Try SMOTEBoost: it is a modified version of AdaBoost where the AdaBoost algorithm is augmented with SMOTE. 3. If you can, modify SMOTEBoost to use borderline-SMOTE instead of SMOTE.
121782
1
121787
null
2
195
I will first tell you about the context then ask my questions. The model detects hate speech and the training and testing datasets are imbalanced (NLP). My questions: - Is this considered a good model? - Is the False negative really bad and it indicates that my model will predict a lot of ones to be zeros on new data? - Is it common for AUC to be higher than the recall and precision when the data is imbalanced? - Is the ROC-AUC misleading in this case because it depends on the True Negative and it is really big? (FPR depends on TN) - For my use case, what is the best metric to use? - I passed the probabilities to create ROC, is that the right way? [](https://i.stack.imgur.com/59Z1Z.png) Edit: I did under-sampling and got the following results from the same model parameters: [](https://i.stack.imgur.com/kebJh.png) Does this show that the model is good? or can it be misleading too?
Some simple questions about confusion matrix and metrics in general
CC BY-SA 4.0
null
2023-05-26T16:07:54.427
2023-05-27T07:08:33.833
2023-05-26T19:48:52.990
126059
126059
[ "machine-learning", "nlp", "class-imbalance", "metric", "confusion-matrix" ]
The first model, where the `f1_score` is around 61%, cannot be considered a good model. You can achieve much better results than that. This can be seen in the second case (where you have downsampled the dataset), where the `f1_score` increases substantially. Since your problem statement is to detect hate speech, you would have to decrease both the FP and the FN, or in other words increase the `precision` and `recall`. I would say the metric to use in this case is the `f1_score`, which is a combination of `precision` and `recall`. Also, instead of downsampling, try oversampling. Or better yet, do neither and instead use other techniques to counteract the imbalance (think cross-validation, particularly `RepeatedStratifiedKFold`, or maybe get more data for the minority class not by oversampling but from authentic sources).
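For reference, scikit-learn can print precision, recall and F1 for both classes in one call, which makes this comparison easier; `y_true` and `y_pred` below are placeholders for your validation labels and predictions:

```python
from sklearn.metrics import classification_report, f1_score

print(classification_report(y_true, y_pred, digits=3))
print("F1 (hate class):", f1_score(y_true, y_pred, pos_label=1))
```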
Constructing the Confusion matrix from given metrics
[edit thanks to comment] I'm assuming this is a binary classifier, since normally a multi-class classifier would not be evaluated with precision/recall (it would require micro/macro precision/recall). Yes, that should be enough: - accuracy = 92.7%: $$\frac{TP+TN}{110}=0.927 \rightarrow TP+TN=101.97$$ This means we have 102 correct predictions, so $FP+FN= 8$ incorrect predictions (since $TP+FP+TN+FN=110$). - precision = 96.9%: $$\frac{TP}{TP+FP}=0.969 \rightarrow TP=31.258\times FP$$ - recall = 95%: $$\frac{TP}{TP+FN}=0.950 \rightarrow TP=19 \times FN$$ This gives us: $$\frac{TP}{31.258}+\frac{TP}{19}=8 \rightarrow TP = 94.6$$ let's assume that means 95 true positive instances, so we get: - $FP = 3$ - $FN = 5$ - $TN = 7$
121784
1
121788
null
1
24
I have ~78k microscopy images of single cells, where the task is to classify for cancer (binary classifier). The images are labeled according to which patient the data came from. I do the train-val split, making sure no patient has images in both train and validation. I noticed that depending on which patients I put in the validation set (one malignant patient, one benign patient, always preserving a 20% validation size and about the same class distribution) I get wildly different validation accuracies. Below is a plot of a test I did, where I tried all permutations of the validation set for each patient with cancer. The dashed lines mark where a new patient with cancer is replaced in the validation set. It seems that it is which patient with cancer I put in the validation set that influences the validation accuracy heavily. [](https://i.stack.imgur.com/hZXrY.png) My question is, what does this tell me, and are there any popular methods for dealing with similar situations? My thinking is that I should train the model using the split in dashed group number 3 in the plot, since it has the highest validation accuracy without lowering training accuracy, but then again maybe those results are due to some unknown leak. EDIT: It should be noted that the images are labeled according to whether they came from a patient with cancer or not, not whether the cell itself actually is cancerous. Below is an example of what the pictures look like, with very little difference between the images as far as I can see with my eyes. [](https://i.stack.imgur.com/bfjvK.jpg)
Different validation sets give very different results. What can be the reason?
CC BY-SA 4.0
null
2023-05-26T20:03:15.270
2023-05-27T04:18:21.063
2023-05-26T21:05:40.917
150250
150250
[ "machine-learning", "deep-learning", "image-classification", "image-preprocessing" ]
Different validation splits will give different results because the data points will vary. How severe the change in results can be depends on how different the data points are. One way to reduce this impact is to use `CrossValidation` while training your model. Since you have a case of Binary Classification, you should go for `StratifiedCV` (a sketch is given below). This helps your model to capture most of the diversity of the dataset. Also, since you mention that the majority of the images are similar (as far as you can tell), you should use `image augmentation` techniques; `Keras` has helpful utilities for this. This will help your model become more robust to any diversity it might encounter when deployed. These 2 methods should go a long way towards solving your issue! Cheers!
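As a concrete sketch of the cross-validation idea — here using scikit-learn's `StratifiedGroupKFold` (scikit-learn ≥ 1.0), a variant that also keeps each patient's images inside a single fold while roughly preserving the class balance; `X`, `y` and `patient_ids` are placeholders for your arrays:

```python
import numpy as np
from sklearn.model_selection import StratifiedGroupKFold

cv = StratifiedGroupKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (tr_idx, va_idx) in enumerate(cv.split(X, y, groups=patient_ids)):
    # no patient appears in both partitions of a fold
    assert set(patient_ids[tr_idx]).isdisjoint(patient_ids[va_idx])
    print(f"fold {fold}: {len(tr_idx)} train / {len(va_idx)} val images")
```

Reporting the mean and spread of the metric over the folds gives a much better picture than any single patient-level split.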
Why would a validation set wear out slower than a test set?
It is difficult to say without access to the original author. However, I expect this refers to the ability of using each set to realise its purpose. A validation set's purpose is to select hyperparameters that perform the best according to some metric. The best measurement on the validation set should always have the highest expectation of being the best in reality. If you make very many measurements, then the absolute probability of the best measurement being the real best could be low, but the chances of a generally poorly performing set of hyperparameters winning overall do not increase as fast. You can be reasonably certain that you have picked "one of the best" plus "the one with highest probability of being the best" even though that might be e.g. just a 10% chance if you have run 100s of validations. A test set's purpose is to measure a metric without bias. If you use this for model comparison or selection, then this can be affected by maximisation bias - because there is uncertainty in the measurement, focusing on the relative values and picking a "best" almost certainly over-estimates the true value. This effect happens very quickly. If you measure metrics for two sets of hyperparameters and pick the best one, you should already expect that the value you got for the metric is an over-estimate. Note you still expect on average that you have picked the better option, but you cannot trust the measurement as much.
121797
1
121837
null
1
110
Suppose Prof. X goes to a road side tea-coffee shop everyday at 5pm just after his office. After reaching there he tosses a coin, and places his order tea or coffee. The shop owner Y has been observing this for one month. By watching some movies he has learnt a bit of probability. Y wants to predict what the professor will order everyday. I have 3 questions which I need to solve: (i) Please build a mathematical model for Y. Precisely describe and justify. (ii) Derive a solution for that model if required, and then (iii) write an algorithm how Y can predict what X will order.
How to predict what someone will order?
CC BY-SA 4.0
null
2023-05-27T11:35:39.607
2023-05-29T16:49:26.020
null
null
150257
[ "data-mining", "machine-learning-model", "prediction", "algorithms" ]
I think this is a problem that may be solved using distribution functions. I. The mathematical model for `Y` is a Bernoulli distribution. The Bernoulli distribution is a probability distribution that describes the outcome of a single trial of an experiment with two possible outcomes, such as a coin flip. In this case, the two possible outcomes are that the professor will order tea or coffee. The probability of the professor ordering tea is denoted by `p`, and the probability of the professor ordering coffee is denoted by `1-p`. The shop owner `Y` has been observing the professor for one month, and he has observed that the professor orders tea 60% of the time and coffee 40% of the time. This means that `p = 0.6` and `1-p = 0.4`. II. The following formula gives the probability mass function of the Bernoulli distribution: ``` P(X = x) = p^x (1-p)^(1-x) ``` where `x = 1` if the professor orders tea on a given day and `x = 0` if he orders coffee. The parameter `p` is estimated from the month of observations as the fraction of tea days: if tea was ordered on 18 of the 30 days, the maximum-likelihood estimate is `p = 18/30 = 0.6`. Plugging `x = 1` into the formula, the probability that the professor orders tea on any given day is simply ``` P(X = 1) = p^1 (1-p)^0 = p = 0.6 ``` III. The algorithm for `Y` to predict what the professor will order is as follows: - Generate a random number between 0 and 1. If the random number is less than p, then predict that the professor will order tea. Otherwise, predict that the professor will order coffee. For example, if the random number is 0.5, then Y would predict that the professor will order tea, because 0.5 is less than p = 0.6. The accuracy of this algorithm depends on the value of p. If p is close to 0 or 1, the algorithm will be very accurate; if p is close to 0.5, it will not be. With this sampling rule the expected accuracy is `p^2 + (1-p)^2`, which for p = 0.6 is 0.52, i.e. about 52 correct predictions for every 100 days. Always predicting the more likely drink (tea) would instead give about 60% accuracy. Hope it helps!
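A small Python sketch of the estimate-and-predict procedure above (the observation list is made up purely for illustration):

```python
import random

# one month of observations: 1 = tea, 0 = coffee (illustrative data, 60% tea)
observations = [1, 0, 1, 1, 0, 1, 1, 0, 1, 0] * 3   # 30 days

p_hat = sum(observations) / len(observations)        # MLE of p, here 0.6

def predict(p: float) -> str:
    """Sample a prediction from the fitted Bernoulli model."""
    return "tea" if random.random() < p else "coffee"

print(p_hat, predict(p_hat))
```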
How to predict customer's next purchase
Take a look at association rule learning ([https://en.wikipedia.org/wiki/Association_rule_learning](https://en.wikipedia.org/wiki/Association_rule_learning)). A really common algorithm is the Apriori agorithm. You could use the package apyori, it works great: [https://pypi.python.org/pypi/apyori/1.1.1](https://pypi.python.org/pypi/apyori/1.1.1)
121854
1
121855
null
0
22
I am new to ML and trying to solve the problem of text segmentation. I have a transcript of a news show and I want to split this transcript into parts by topic. I tried to google and asked chatgpt and found a lot of info, but I don't understand how to properly approach this task. It looks like a classic problem and I can't find the proper name for it. I am looking for help to find the proper name for this problem and how to approach it with existing tools. My initial thought was to use word embeddings -> sentence vectors with a rolling average to detect changes in topics, but this approach does not work. What are other ways to solve this problem?
Text segmentation problem
CC BY-SA 4.0
null
2023-05-30T16:17:28.100
2023-05-30T17:18:06.247
null
null
150337
[ "nlp", "scikit-learn", "word-embeddings", "text", "gensim" ]
The problem you are describing is not a classic NLP problem. There is a similar classic NLP problem called "topic modelling", which consists of discovering topics in a collection of text documents. Topics are defined by a list of words relevant to the topic itself. The most paradigmatic approach to this problem may be Latent Dirichlet Allocation (LDA). It is an unsupervised learning approach. Your problem, nevertheless, has somewhat also been approached from a machine learning perspective, at least partially. I can refer you to the article [Unsupervised Topic Segmentation of Meetings with BERT Embeddings](https://arxiv.org/pdf/2106.12978.pdf) by Meta. This is its abstract: > Topic segmentation of meetings is the task of dividing multi-person meeting transcripts into topic blocks. Supervised approaches to the problem have proven intractable due to the difficulties in collecting and accurately annotating large datasets. In this paper we show how previous unsupervised topic segmentation methods can be improved using pre-trained neural architectures. We introduce an unsupervised approach based on BERT embeddings that achieves a 15.5% reduction in error rate over existing unsupervised approaches applied to two popular datasets for meeting transcripts. The authors released their source code at [github](https://github.com/gdamaskinos/unsupervised_topic_segmentation). To understand its contents, you will need to have some background on [BERT](https://huggingface.co/blog/bert-101), an NLP neural network based on the [Transformer](https://arxiv.org/abs/1706.03762) architecture's encoder part. On [https://datascience.stackexchange.com/](https://datascience.stackexchange.com/) you can find plenty of specific questions and answers about it (and you can ask more if you don't find your specific doubts).
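If you want a quick baseline before diving into that code base, a rough sketch of the same idea is to embed each sentence, measure the similarity between consecutive sentences, and cut wherever the similarity drops below a threshold. This assumes the `sentence-transformers` package; the model name and threshold are illustrative choices, not recommendations:

```python
from sentence_transformers import SentenceTransformer
from sklearn.metrics.pairwise import cosine_similarity

model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

def segment(sentences, threshold=0.3):
    """Split a list of sentences at likely topic boundaries."""
    emb = model.encode(sentences)
    segments, current = [], [sentences[0]]
    for i in range(1, len(sentences)):
        sim = cosine_similarity(emb[i - 1 : i], emb[i : i + 1])[0, 0]
        if sim < threshold:          # low similarity -> likely topic change
            segments.append(current)
            current = []
        current.append(sentences[i])
    segments.append(current)
    return segments
```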
Text processing
Since you are going to use TF-IDF representations, you already have a feature matrix. To calculate cosine similairty between all vectors, you can use: ``` from sklearn.metrics.pairwise import cosine_similarity similarity = cosine_similarity(tfidfmat) #tfidfmat is your TF-IDF matrix ``` #Use numpy arrays To begin clustering, you can use K-means algorithm to begin with, and use cosine similairty as the distance metric. [Here's](https://www.google.co.in/url?sa=t&source=web&rct=j&url=http://scikit-learn.org/stable/auto_examples/text/document_clustering.html&ved=0ahUKEwiL546uzfPRAhUFS48KHaPUD_YQFggiMAA&usg=AFQjCNGEotteEEXQ0LYCgdYBkfueBqYdiw&sig2=V5p4Eo89BPexxI8oNVCfGA) an example from scikit-learn itself on clustering documents. Further things to try: If you find the above methods not working to your expectations, look into word2vec and doc2vec, and instead of using tfidf, which isa Bag of Words approach, use word vector representations. [Here](https://www.google.co.in/url?sa=t&source=web&rct=j&url=http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/&ved=0ahUKEwjMrvT1zfPRAhXLu48KHaICBR4QFghuMAk&usg=AFQjCNFRBSxWBWA8Qw1C5rn0pvyHEwjNqw&sig2=-8VAcUKRz8yOOxWpwRwhAg) is a good blog explaining the concept.
121890
1
121891
null
0
31
My objective is to experiment with various approaches for different algorithms, identify the best approach for each algorithm, and subsequently determine the best overall algorithm from among these top approaches. To accomplish this, I employed k-fold cross-validation to evaluate each approach. After conducting the evaluations, I selected the approach that yielded the most optimal metric. To simplify things, let's consider linear regression. I tried different approaches by changing techniques and steps. To assess their performance, I evaluated each approach using k-fold cross-validation. Let's say I found that approach 2 performed the best for linear regression. Without training the model with new data, I moved on to the next algorithm, which was ANN. Following a similar process, I evaluated different approaches for ANN using k-fold cross-validation. This time, approach 3 turned out to be the best. Finally, I compared approach 2 for linear regression with approach 3 for ANN and chose the superior approach. I then trained the model using the selected approach and model. Am I proceeding in the correct direction ?
Is this the best method for comparing different approaches and selecting the best model in machine learning?
CC BY-SA 4.0
null
2023-06-01T10:21:02.890
2023-06-01T17:27:06.770
null
null
150389
[ "machine-learning", "cross-validation", "model-selection" ]
Evaluation Metrics. For regression problems, metrics like MSE or RMSE (which is less sensitive to extreme values) are good defaults. For classification, you can evaluate against accuracy if classes are balanced, otherwise look at the AUC of the ROC or PR (precision-recall) curves. In addition, the f1-score is also quite common, but in some cases you may care more about the kind of errors made, and the confusion matrix gives you an overview of those. Basically, you pick one metric, e.g. RMSE (for regression) or AUROC (AUC of ROC for classification), compute it for all your models and rank them accordingly. These metrics can also be used for selecting the best NN across training epochs (you need to compute them on a validation set). Compare and select models. Training one model (of one kind) gives you only a point estimate of its overall performance, which is an approximation because the training and test data are limited. Moreover, there can be randomness in the model and/or training process that, at each run, may yield a different model with different performance. Especially if you don't have much data, K-fold cross-validation allows you to estimate the bias and variance of your model quite easily, i.e. the uncertainties related to the model and the data. For example, say $k=10$: you obtain $k$ models for each kind of model, evaluate them on the metric you care about, and so obtain a distribution of performance for each model class. You then aggregate the performance on your evaluation metric, obtaining both the average performance (e.g. the mean) and its standard deviation (i.e. the variability in model predictions). For example, say model-1 achieves the best average but its std is quite large, while model-2 is 1% lower but its std is almost zero. So, which model do you choose? When selecting the model you should consider both mean and std, or the overall distribution. To help yourself you can inspect a boxplot of the performance distribution of each class of models, so that you can visualize both the average performance and the associated variability (a sketch is given below). Alternatively, it is also possible to compute a $p$-value that gives you the probability that one class of models (e.g. SVM) is better than another (e.g. neural nets).
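A hedged sketch of that workflow with scikit-learn — the model choices, k and metric are illustrative, and `X`, `y` are placeholders for your regression data:

```python
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold, cross_val_score

models = {
    "linear": LinearRegression(),
    "ann": MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000, random_state=0),
}

cv = KFold(n_splits=10, shuffle=True, random_state=0)
results = {
    # negate because sklearn returns negative RMSE for "greater is better" scoring
    name: -cross_val_score(m, X, y, cv=cv, scoring="neg_root_mean_squared_error")
    for name, m in models.items()
}
for name, scores in results.items():
    print(f"{name}: RMSE {scores.mean():.3f} +/- {scores.std():.3f}")

plt.boxplot(list(results.values()), labels=list(results.keys()))  # mean and spread
plt.ylabel("RMSE")
plt.show()
```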
Determining which model result is better
Another way to approach the problem is to take all of the trained models and compare each of their performances on the same hold-out dataset. This is the most common way to evaluate machine learning models. Choosing the evaluation metric to use depends on the goal of the project. Most machine learning projects care about predictive ability. R² is not a useful metric for the predictive ability of a model. RMSE can be a useful metric of predictive ability. However, since the errors are squared is sensitive to the properties of the data. You mention that you are using different data. Those differences in data could impact comparing RMSE across different sources. Comparing different models on the same dataset would be better when using RMSE.
121906
1
121917
null
0
8
I am attempting to determine the most useful bands of a multiband image classification (i.e. Red, Green, Blue, Near Infrared, etc. used for classifying pixels) and wrote the following function to build a decision tree. It uses [sci-kit learn's Decision Tree Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier.feature_importances_) with entropy as the split criterion. Finally, it uses the [feature_importances_](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier.feature_importances_) function to calculate the importance of each band: ``` def make_tree(X_train, y_train): """prints a decision tree and an array of the helpfulness of each band""" dtc = DecisionTreeClassifier(criterion='entropy') dtc.fit(X_train, y_train) tree.plot_tree(dtc) plt.show() importances = dtc.feature_importances_ large_to_small_idx = np.argsort(importances)[::-1] for idx in large_to_small_idx: print(f"Band {idx + 1}: {importances[idx]}\n") ``` I assumed that since the splitting criterion on the decision tree was set to entropy that `feature_importances_` would also be calculated as some form of entropy information gain. However, in sci-kit learn's documentation it mentions how the feature importance is actually calculated: > The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance. Is this an issue or is the feature importance essentially still being calculated based on entropy? If this is not a good way to calculate feature importance based on entropy, is there a way to tweak `feature_importances_` or some other method I am missing to do this? Thanks for the help!
Calculating feature importance with Scikit-Learn's Decision Tree Classifier
CC BY-SA 4.0
null
2023-06-01T18:53:48.353
2023-06-02T10:51:08.863
2023-06-01T20:01:52.713
150412
150412
[ "scikit-learn", "decision-trees" ]
The criterion you select (entropy, in your case) is what the CART algorithm uses to build the DT itself, by greedily evaluating which split is best. The feature importances are then computed as the (normalized) total reduction of that same criterion brought by each feature, so with `criterion='entropy'` the values you get from `feature_importances_` are based on entropy reduction (information gain). "Gini importance" is just the generic name scikit-learn uses for this impurity-based importance; it does not mean the Gini index is used when you selected entropy. So there is nothing to tweak: this is by design, and it is an extra capability that DTs have. You can also estimate feature importance with Random Forests and Extra Trees, which should provide more stable results since they average the impurity reduction over an ensemble of trees. Either way, the importance is a measure of impurity reduction, which you can think of as a quantification of how much a feature improves the model's fit.
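As a quick illustration of the ensemble suggestion, a sketch that reuses the variables from your function (`X_train`, `y_train`) and keeps the entropy criterion:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rf = RandomForestClassifier(n_estimators=200, criterion="entropy", random_state=0)
rf.fit(X_train, y_train)

# same ranking loop as in the question, but on the ensemble's averaged importances
for idx in np.argsort(rf.feature_importances_)[::-1]:
    print(f"Band {idx + 1}: {rf.feature_importances_[idx]:.4f}")
```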
tree.DecisionTree.feature_importances_ Numbers correspond to how features?
You can take the column names from `X` and tie it up with the `feature_importances_` to understand them better. Here is an example - ``` from sklearn.datasets import load_iris from sklearn.tree import DecisionTreeClassifier import pandas as pd clf = DecisionTreeClassifier(random_state=0) iris = load_iris() iris_pd = pd.DataFrame(iris.data, columns=['sepal_length', 'sepal_width', 'petal_length', 'petal_width']) clf = clf.fit(iris_pd, iris.target) ``` I am taking the iris example, converting to a `pandas.DataFrame()` and fitting a simple `DecisionTreeClassifier`. Once the training is done, you can take the `columns` attribute of a pandas `df` and make a `dict` with the `feature_importances_` output. ``` print(dict(zip(iris_pd.columns, clf.feature_importances_))) ``` This will give you what you want - ``` {'sepal_length': 0.0, 'sepal_width': 0.013333333333333329, 'petal_length': 0.064055958132045052, 'petal_width': 0.92261070853462157} ``` Hope this helps!