Dataset schema (each record below lists these fields in this order):
- Id: string (length 1-6)
- PostTypeId: string (6 distinct values)
- AcceptedAnswerId: string (length 2-6)
- ParentId: string (length 1-6)
- Score: string (length 1-3)
- ViewCount: string (length 1-6)
- Body: string (length 0-32.5k)
- Title: string (length 15-150)
- ContentLicense: string (2 distinct values)
- FavoriteCount: string (2 distinct values)
- CreationDate: string (length 23)
- LastActivityDate: string (length 23)
- LastEditDate: string (length 23)
- LastEditorUserId: string (length 1-6)
- OwnerUserId: string (length 1-6)
- Tags: sequence of strings
121867
1
null
null
0
19
I have a project where I would like to make predictions using previously created model objects. The model objects were both built using scikit-learn, but on different library versions (0.22.1 and 1.2.0). My questions are as follows: - I have always been under the assumption that if you are going to make predictions with a model object, you should have an environment that matches the environment the object was built on. Is this truly the case? (I often get a "Trying to unpickle estimator from version ..." warning.) - What would be the safest way to score predictions in the same notebook/environment using these two models? I would prefer not to rebuild the models. Would converting the scikit-learn/joblib saved objects to ONNX be a way around this?
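A minimal sketch of the ONNX route asked about above, assuming the skl2onnx and onnxruntime packages are available; the toy model, feature count, and file name are placeholders, not part of the original question. Each model would be exported once in an environment matching its own scikit-learn version, after which scoring depends only on onnxruntime, so both models can be used side by side in one notebook.

```
import numpy as np
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType
import onnxruntime as rt

n_features = 10                                      # hypothetical feature count
X_train = np.random.rand(200, n_features).astype(np.float32)
y_train = (X_train[:, 0] > 0.5).astype(int)
model = LogisticRegression().fit(X_train, y_train)   # stand-in for your saved estimator

# Export once, in an environment that matches the model's scikit-learn version
onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, n_features]))]
)
with open("model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())

# Scoring needs only onnxruntime, so the scikit-learn version no longer matters here
sess = rt.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
preds = sess.run(None, {"input": X_train[:5]})[0]
print(preds)
```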
Using model objects created using different scikit-learn versions
CC BY-SA 4.0
null
2023-05-31T14:02:12.973
2023-05-31T14:02:12.973
null
null
150360
[ "python", "scikit-learn" ]
121868
2
null
121856
0
null
Use the following code before you fit the model: ``` from sklearn.preprocessing import LabelEncoder le = LabelEncoder() y_train = le.fit_transform(y_train) rf.fit(X_train, y_train) ``` (Also, please consider accepting an answer, with the tick mark ✓ next to it, if you find it correct.)
null
CC BY-SA 4.0
null
2023-05-31T14:34:58.147
2023-05-31T14:34:58.147
null
null
144743
null
121870
2
null
75428
0
null
Not a complete answer, but I tried this: ``` from sklearn.preprocessing import OneHotEncoder one_hot_encoder = OneHotEncoder(handle_unknown='ignore') results = one_hot_encoder.fit_transform(df) df_results = pd.DataFrame.sparse.from_spmatrix(results) df_results.columns = one_hot_encoder.get_feature_names(df.columns) df_results ``` The LSTM accepts it, but it doesn't predict because it seems to need one more transformation. I am at this stage while trying to solve a similar problem; I hope it helps a bit.
null
CC BY-SA 4.0
null
2023-05-31T15:45:00.080
2023-05-31T15:46:44.737
2023-05-31T15:46:44.737
150347
150347
null
121871
2
null
121857
0
null
Welcome to the forum @kodkod! In the context of an encoder-decoder architecture, I’ve used the following approach. Your problem is slightly different, so your mileage may vary. I had a pad_sentences method – it finds the largest sentence in each batch and pads the other sentences to the same length. I was doing this by manually appending pad tokens before embedding them, but PyTorch has a pad_sequence function which will stack a list of tensors and then pad them. generate_sentence_masks – this takes the encodings and the list of actual source lengths, and returns a tensor which contains 1s in the positions where there was an actual token, and 0s in positions that were padded. These sentence masks are then passed to the decode method along with the encoder hidden states, decoder initial state, and the padded targets. In the encode method, the padded input is embedded and then packed with pack_padded_sequence. The packed padded sequence is then run through the encoder LSTM to generate the hidden states. There are a few good tutorials on LSTMs, including one here that does sentiment analysis with LSTMs: [https://www.kaggle.com/code/arunmohan003/sentiment-analysis-using-lstm-pytorch](https://www.kaggle.com/code/arunmohan003/sentiment-analysis-using-lstm-pytorch) hth.
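A minimal sketch of the padding-plus-masking flow described above, using PyTorch's pad_sequence and pack_padded_sequence; the token ids, vocabulary size, and layer sizes below are made up purely for illustration.

```
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# Hypothetical batch of variable-length token-id sequences (0 reserved for padding)
seqs = [torch.tensor([5, 3, 8, 2]), torch.tensor([7, 1]), torch.tensor([4, 9, 6])]
lengths = torch.tensor([len(s) for s in seqs])

padded = pad_sequence(seqs, batch_first=True, padding_value=0)   # shape (batch, max_len)
mask = (padded != 0).long()                                      # 1 = real token, 0 = pad

emb = torch.nn.Embedding(num_embeddings=10, embedding_dim=16, padding_idx=0)
packed = pack_padded_sequence(emb(padded), lengths, batch_first=True, enforce_sorted=False)

lstm = torch.nn.LSTM(input_size=16, hidden_size=32, batch_first=True)
packed_out, (h, c) = lstm(packed)   # hidden states computed only over the real tokens
```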
null
CC BY-SA 4.0
null
2023-05-31T16:01:14.573
2023-05-31T16:01:14.573
null
null
146483
null
121872
1
null
null
0
10
Most implementations that I've seen of LDA seem to use simple word counts when giving the document-term frequency matrix. What would happen if we were to give the tf-idf matrix instead of a simple word count matrix?
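For reference, scikit-learn's LatentDirichletAllocation will accept any non-negative matrix, so a tf-idf matrix does run; the usual objection is that LDA's generative model is defined over word counts, so tf-idf weights break that assumption even though the code executes. A tiny sketch with made-up documents:

```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["the cat sat on the mat", "dogs and cats are pets", "stock prices fell sharply today"]

# Standard input: raw term counts, matching LDA's count-based generative model
counts = CountVectorizer().fit_transform(docs)
lda_counts = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

# Tf-idf is also non-negative, so .fit() runs, but the count assumption no longer holds
tfidf = TfidfVectorizer().fit_transform(docs)
lda_tfidf = LatentDirichletAllocation(n_components=2, random_state=0).fit(tfidf)
```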
Doing LDA (NLP task): Does it make sense to use tf-idf vectors?
CC BY-SA 4.0
null
2023-05-31T16:42:48.747
2023-05-31T16:42:48.747
null
null
51858
[ "tfidf", "lda" ]
121873
1
null
null
0
16
I am working on my thesis, which has two different research questions: - Evaluate transformer models while incorporating non-textual features - Evaluate the importance of data quality in transformer models So, I have two different datasets of varying quality. My approach: the validation set was created by selecting 10% of the crowdsourced dataset, while the test set consisted of 10% of the ground-truth dataset. The problem here is that the test set doesn't have many samples. Here is the distribution: - 6 samples for label 0 - 24 samples for label 1 - 74 samples for label 2 So basically the evaluation won't be very significant, but I don't think I can merge the validation and test sets, since I am assuming that the test data has better quality. Does anyone have an idea how I can tackle this problem? Thanks!
Problem with the size of test data?
CC BY-SA 4.0
null
2023-05-31T17:13:22.033
2023-05-31T17:13:41.827
2023-05-31T17:13:41.827
150366
150366
[ "nlp", "dataset", "machine-learning-model", "preprocessing" ]
121874
1
121876
null
0
24
I'm doing a research project and want to test for correlation between different data sets. For example, I want to test if there is a correlation between median house prices and the homeless population in the US by year. Here is some made-up data for the problem (year, median house price, homeless population): 2000: $260,000, 330,000; 2005: $270,000, 315,000; 2010: $285,000, 320,000; 2015: $330,000, 340,000; 2020: $400,000, 370,000. I then want to compute (r) to measure the correlation between these two data sets and compare that strength of correlation to other data sets (for example, median house price and rates of domestic violence in the US). Thank you for the help!
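As an illustration of the basic calculation, here is a short sketch using SciPy on the made-up numbers above; with only five yearly points the p-value is not very meaningful, so treat this purely as the mechanics of obtaining r.

```
from scipy.stats import pearsonr

house_price  = [260_000, 270_000, 285_000, 330_000, 400_000]   # 2000, 2005, 2010, 2015, 2020
homeless_pop = [330_000, 315_000, 320_000, 340_000, 370_000]

r, p_value = pearsonr(house_price, homeless_pop)
print(f"r = {r:.3f}, p = {p_value:.3f}")
```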
Need help! How do you test for a correlation between two data sets?
CC BY-SA 4.0
null
2023-05-31T17:45:02.453
2023-05-31T19:59:46.393
2023-05-31T17:51:55.337
150367
150367
[ "statistics" ]
121875
1
null
null
0
10
As far as my understanding goes, the model used for feature extraction in [DeepSort](https://github.com/nwojke/deep_sort) is specified as the first argument of the function `create_box_encoder` in the file [tools/generate_detections.py](https://github.com/nwojke/deep_sort/blob/master/tools/generate_detections.py): ``` def create_box_encoder(model_filename, input_name="images", output_name="features", batch_size=32): image_encoder = ImageEncoder(model_filename, input_name, output_name) image_shape = image_encoder.image_shape def encoder(image, boxes): image_patches = [] for box in boxes: patch = extract_image_patch(image, box, image_shape[:2]) if patch is None: print("WARNING: Failed to extract image patch: %s." % str(box)) patch = np.random.uniform( 0., 255., image_shape).astype(np.uint8) image_patches.append(patch) image_patches = np.asarray(image_patches) return image_encoder(image_patches, batch_size) return encoder ``` In the same file, the default value of the argument `model_filename` is specified under the `parse_args()` function to be `resources/networks/mars-small128.pb`, which appears to be a model for person re-identification. Can a model for re-identifying objects other than people (and from multiple classes, such as cars, birds, trucks, etc) be used instead in DeepSort? If so, does DeepSort provide any means for training such models? My initial understanding was that DeepSort would be able to track all classes recognized by a trained YOLO model. I didn't know that a stand-alone feature extractor was required.
Can DeepSort be made to track objects besides people?
CC BY-SA 4.0
null
2023-05-31T17:58:42.393
2023-05-31T17:58:42.393
null
null
144646
[ "deep-learning", "computer-vision", "feature-extraction", "object-detection", "yolo" ]
121876
2
null
121874
0
null
It depends on your specific problem statement. If you do not want to treat this as time series data, i.e. do not want to take the year into account, you would simply compare the correlation between home price and homeless population with the correlation between home price and domestic violence; whichever value is higher in magnitude (positively or negatively correlated) indicates the stronger relationship. ``` data_df['Home Price'].corr(data_df['Homeless pop']) data_df['Home Price'].corr(data_df['Domestic Violence rate']) ``` If you want to consider the time factor, then you would have to convert the date column into a datetime column and then consider three different time series: - Year and home price - Year and homeless pop - Year and domestic violence rate You can then use the Granger causality test for causality, or cross-correlation to see the correlation between the time series. You can refer to this post as well - [https://towardsdatascience.com/computing-cross-correlation-between-geophysical-time-series-488642be7bf0#:~:text=Cross%2Dcorrelation%20is%20an%20established,inference%20on%20the%20seismic%20data](https://towardsdatascience.com/computing-cross-correlation-between-geophysical-time-series-488642be7bf0#:%7E:text=Cross%2Dcorrelation%20is%20an%20established,inference%20on%20the%20seismic%20data).
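A hedged sketch of the Granger causality call mentioned above, using statsmodels. The numbers are the made-up figures from the question and are far too few for a real test, so this only shows the mechanics; note that column order matters, since the test asks whether the second column helps predict the first.

```
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

data = pd.DataFrame({
    "homeless_pop": [330_000, 315_000, 320_000, 340_000, 370_000],
    "home_price":   [260_000, 270_000, 285_000, 330_000, 400_000],
})

# Does home_price (2nd column) Granger-cause homeless_pop (1st column)?
grangercausalitytests(data[["homeless_pop", "home_price"]], maxlag=1)
```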
null
CC BY-SA 4.0
null
2023-05-31T18:44:01.147
2023-05-31T19:59:46.393
2023-05-31T19:59:46.393
144743
144743
null
121877
1
null
null
0
10
I am currently parsing a large data set of drug prescriptions in R. Because these have been collected manually (at least that's my guess), the data is extraordinarily messy and painful to deal with, as there's no consistency to the way it has been collated. My specific issue is that I need to extract the quantity of drugs which have been prescribed so that I can subsequently match them for downstream analysis. However, just to give you a taste of what these can look like: ``` quantity 28 capsules 28 84 28.000 3 millilitres 200 tablet 1 pack of 28 tablet(s) 3*1 mcg 2*91 tablets 2*200 grams 3*112 tablet - Tutti Frutti Tibolone 2.5mg tablets 3*28 tablet ``` Some of these prescriptions will be made for the same drug, just by different doctors, so there's absolutely no consistency to how they're entered. As I want to try and get the true quantity of the drugs prescribed, I have been TRYING to extract that based on a few rules in R: ``` ### Find prescriptions which are multiples (e.g. "2 packs of 10 pills" or "2*10 pills") gp_prescriptions$multiples_status = grepl(" of ", gp_prescriptions$quantity) | grepl("\\*", gp_prescriptions$quantity) ### Create simple column for extracting numbers gp_prescriptions$basic_column = as.numeric(gp_prescriptions$quantity) gp_prescriptions$simple_column = gsub("[[:alpha:]]+|[[:punct:]]", "", gp_prescriptions$quantity) gp_prescriptions$simple_column = gsub("(?<=\\d)\\s+(?=\\d)", "_", gp_prescriptions$simple_column, perl = TRUE) ``` Here my logic is as follows: if there is only a number in the column (e.g. `28`), this becomes the `basic_column`. If instead there are multiple numbers, I try to remove all letters and punctuation so that I get something like `1_28`. This has the unfortunate consequence of also removing `.`, though, so I get numbers like `28000` as a value for something which should be `28`, but I'm hoping the existence of the `basic_column` will allow me to overcome that. What I then want to do, based on the first line (which flags rows that contain "multiples"), is calculate the value of `2*91` (or whatever). However, I can't rely on these usually being the first 2 numbers in the column, so what I need to do is something like finding the two numbers on either side of " of " or "*" and then multiplying those. But I have absolutely no idea how to even get started with this. Does anyone have any suggestions or ideas for how I can deal with this? Or am I going about this in a ridiculous way in the first place?
Parsing multiple formats of values to give a true quantity with regex
CC BY-SA 4.0
null
2023-05-31T19:54:46.287
2023-05-31T19:55:55.797
2023-05-31T19:55:55.797
150372
150372
[ "r", "regex" ]
121878
1
null
null
0
21
I've trained a segmentation model in a Python 3.8 environment with `segmentation_models_pytorch` aka `smp`. When I saved it and loaded it in my prediction environment (Python 3.6 with `smp`), it worked with just ``` import torch model = torch.load(path.join('models', model_name)) ``` However, this conflicts with the `onnx` package (onnx requires a newer Python). I've created a new conda environment with Python 3.10 (and another with Python 3.11). Now torch refuses to load the model, with the error message `ModuleNotFoundError: No module named 'segmentation_models_pytorch.unet'`. What is the right way to install a torch that - supports torch.load for this model - knows about the unet architecture - lives in an environment that supports onnx model conversion - does not require some old and very specific version of Python (which causes conflicts with other packages)? ``` from os import path, environ import torch torch_model = torch.load(path.join('models', 'my_model.pth')) Traceback (most recent call last): File "D:\workspace\acne_prod\pytorch2onnx.py", line 3, in <module> torch_model = torch_load(path.join('models', 'test_model.pth')) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\sixty\anaconda3\envs\acne_prod_smp_onnx\Lib\site-packages\torch\serialization.py", line 809, in load return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\sixty\anaconda3\envs\acne_prod_smp_onnx\Lib\site-packages\torch\serialization.py", line 1172, in _load result = unpickler.load() ^^^^^^^^^^^^^^^^ File "C:\Users\sixty\anaconda3\envs\acne_prod_smp_onnx\Lib\site-packages\torch\serialization.py", line 1165, in find_class return super().find_class(mod_name, name) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ ModuleNotFoundError: No module named 'segmentation_models_pytorch.unet' ```
ModuleNotFoundError: No module named 'segmentation_models_pytorch.unet' in Python 3.10 and 3.11
CC BY-SA 4.0
null
2023-05-31T20:57:08.953
2023-06-01T19:10:18.313
2023-05-31T21:05:06.090
82027
82027
[ "pytorch" ]
121879
1
null
null
0
15
I have a dataset that contains 2 columns based on experimental data. Both columns contain lists of the same length. One column is the argument, where the list is the same for every sample. The corresponding values of the list in the target column differ from each other, as they obviously belong to different samples. I was wondering what I can do to cluster my dataset. What I did so far: I removed the argument column and split the target column so that each list item leads to a new column, giving me as many new columns as the length of the list. On that I did dimensionality reduction, with clustering afterwards. But this did not lead to good results. Does anyone have another idea of what to do, or a more suitable approach?
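A minimal sketch of the expand-then-reduce pipeline described above, on made-up data; the number of components and clusters are arbitrary placeholders. Scaling the expanded columns before the reduction step can make a noticeable difference.

```
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical frame: each row holds one measured curve as a list of equal length
df = pd.DataFrame({"target": [list(np.random.rand(50)) for _ in range(20)]})

X = np.vstack(df["target"].to_numpy())           # one column per list position
X = StandardScaler().fit_transform(X)            # put all positions on a comparable scale
X_red = PCA(n_components=5).fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_red)
print(labels)
```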
Clustering 2 lists
CC BY-SA 4.0
null
2023-05-31T22:07:15.497
2023-05-31T22:07:15.497
null
null
150374
[ "python", "clustering" ]
121880
1
null
null
0
13
I am currently dealing with imbalanced classes in my binary classification problem, where one class represents only 4% of the other class. To address this issue, here is the approach I have taken: - Splitting the lower class into training and validation sets. - Choosing the same number of elements from the other (majority) class as there are in the minority training set. - Creating a training set with an equal number of samples from both classes (50-50 ratio). - Allocating all the remaining elements to the validation set. For instance, if I have 100 elements, there would be 4 elements in the lower class. These elements could be divided into 3 for training and 1 for validation. This means that I would train the model using 3+3 elements, while all the remaining elements would be used for validation (94 elements, where 1 is from the lower class and 93 are from the other class). Now, the question is whether this approach is effective in addressing the problem. Perhaps it is better to maintain some level of class imbalance even in the training set, as the current method is resulting in a high number of false positives. Is this a suitable solution to tackle the problem at hand, or would it be more advantageous to preserve some level of class imbalance in the training data to mitigate the issue of false positives?
Imbalanced classes
CC BY-SA 4.0
null
2023-05-31T22:16:35.847
2023-05-31T22:16:35.847
null
null
150373
[ "classification", "class-imbalance", "graph-neural-network" ]
121881
2
null
121662
0
null
Can you keep track of your training loss too and plot both training and validation losses as a function of the epoch? Is the training loss also increasing with epochs?
null
CC BY-SA 4.0
null
2023-05-31T23:58:20.430
2023-05-31T23:58:20.430
null
null
150375
null
121882
1
null
null
0
22
I have a dataframe of about 200 features and 1M rows on which I can train a RidgeCV model and get an R2 of about 0.01. I'd like to scale up my training to 5M or 10M rows, but that won't fit in memory for me, so I'm looking for out-of-core techniques, and I read about partial_fit and SGDRegressor. I tried to set up the problem using the following params. RidgeCV with the following code 'trains' in about 20s: ``` # Control -- test with RidgeCV y_all=df_targets[hyper['target_name']].values x_all = (df_targets[features].values) regmodel = make_pipeline(StandardScaler(), RidgeCV(alphas=[0.01, 0.1, 1.0, 10.0])) regmodel.fit(x_all, y_all) level_2_r2_score = regmodel.score(x_all, y_all) level_2_r2_score ``` and gives an R2 of 0.010599955481100931. I tried to formulate the equivalent type of training using SGDRegressor with partial_fit and minibatches of 32, like so: ``` # Numpy version partial fit / train loop # Initialize SGDRegressor and StandardScaler sgd = SGDRegressor(verbose=0, max_iter=1, tol=1e-3, loss='squared_error', penalty='l2', alpha=0.1, learning_rate='optimal', fit_intercept=False ) scaler = StandardScaler() CHUNK_SIZE = 32 epoch_loops = 20000 r2s=[] # train test split df_targets df_train = df_targets.iloc[:-100000] df_validation = df_targets.iloc[-100000:] x_validation = df_validation[features].values y_validation=df_validation[hyper['target_name']].values scaler.fit(df_targets[features].values) x_validation_scaled = scaler.transform(x_validation) x_train = scaler.transform(df_train[features].values) y_train = df_train[hyper['target_name']].values print('x_train.shape = ', x_train.shape) print('y_train.shape = ', y_train.shape) print('x_test.shape = ', x_validation.shape) print('y_test.shape = ', y_validation.shape) for j in range(0, epoch_loops): # Shuffle the rows in x_train and y_train equally x_train, y_train = shuffle(x_train, y_train) # Iterate over chunks of the numpy array x_validation in CHUNK_SIZE steps for i in range(0, len(x_validation), CHUNK_SIZE): x_chunk = x_train[i:i+CHUNK_SIZE] y_chunk = y_train[i:i+CHUNK_SIZE] sgd.partial_fit(x_chunk, y_chunk) # EvalR2 after the epoch r2 = sgd.score(x_validation, y_validation) print('EPOC {} === > r2 = {} '.format(j, r2)) r2s.append(r2) ``` But 33 minutes later and 112 epochs, I'm still at an R2 of -6*10**22, i.e. just nowhere near what RidgeCV can do. My hope was to show I can get the same R2 using partial_fit on the 1M rows and then scale that up to 5 or 10M rows and see if I improve my R2... Is that not possible?
Why does SGDRegressor with partial_fit not converge to the same R2 as RidgeCV
CC BY-SA 4.0
null
2023-06-01T02:38:27.420
2023-06-01T03:42:11.340
2023-06-01T03:42:11.340
150377
150377
[ "scikit-learn", "ridge-regression", "sgd" ]
121883
2
null
115938
0
null
As far as I know, YOLOv7 does 2D multi-person pose estimation, whereas models like MediaPipe do single-person pose estimation. For 3D pose estimation, I am using the "3D-MPPE" model, since pretrained models are provided. It is a single-person 3D pose estimation model. 3D-MPPE has two inner models: RootNet and PoseNet. RootNet estimates the root depth, which is the z-axis value of the target (the relative distance from the camera to the target), and PoseNet estimates the x- and y-axis values of each target joint of the person. Below are links to the official 3D-MPPE repos (for both PoseNet and RootNet) and an article on actual usage of the 3D-MPPE model: [3D MPPE PoseNet github repo](https://github.com/mks0601/3DMPPE_POSENET_RELEASE) [3D MPPE RootNet github repo](https://github.com/mks0601/3DMPPE_ROOTNET_RELEASE) [Estimating 3D pose for athlete tracking using 2D videos and Amazon SageMaker Studio](https://aws.amazon.com/ko/blogs/machine-learning/estimating-3d-pose-for-athlete-tracking-using-2d-videos-and-amazon-sagemaker-studio/)
null
CC BY-SA 4.0
null
2023-06-01T04:44:47.473
2023-06-01T04:44:47.473
null
null
100314
null
121884
2
null
121878
0
null
`segmentation_models_pytorch` is a separate Python package from PyTorch and requires its own installation. `!pip install -U git+https://github.com/qubvel/segmentation_models.pytorch` works for me on Python 3.10.5 and later.
null
CC BY-SA 4.0
null
2023-06-01T06:03:54.250
2023-06-01T06:03:54.250
null
null
146483
null
121885
1
null
null
0
13
I have a linear model that goes as `0.1*x1 + 0.8*x2 + 3.4*x3 + 5.0*x4 + c`, and this linear model was generated using linear regression. The MAE is around 0.4, the MSE is around 0.6 and the R2 score is 85. The goal I want to achieve here is to optimize this function, i.e. to find the right values for x1, x2, x3 and x4, given constraints such as `30 < x1 < 45`, `55 < x2 < 60`, etc., so that the final value of the model would be 150, or `0.1*x1 + 0.8*x2 + 3.4*x3 + 5.0*x4 + c = 150`. I did a small amount of research, and it seems that one of the algorithms that does this kind of linear optimization is Simplex. However, I'm a total beginner when it comes to this, and the research I did showed me only methods for either minimization or maximization. What is the right term for this kind of problem? If anyone knows of an example similar to my case, could you share it? Also, if someone has solved problems like this in the past, would you be kind enough to suggest other methods for optimizing my linear problem that work well?
Simplex method for equality optimization
CC BY-SA 4.0
null
2023-06-01T08:29:04.927
2023-06-01T14:01:52.107
null
null
134419
[ "predictive-modeling", "statistics", "optimization" ]
121887
1
null
null
0
13
I'm pretty new to machine learning and am having trouble with creating my first TensorFlow convolutional neural network. I'm using datasets from [http://etlcdb.db.aist.go.jp/](http://etlcdb.db.aist.go.jp/) and trying to get my network to classify handwritten Japanese kanji. After training, I ended up with a testing accuracy of 0.93 and a validation accuracy of 0.94. However, for every testing input I pass into my network, I'm getting an output of all 0's, even with inputs from the original dataset. Where did I go wrong? I'm passing in images that have already been preprocessed and normalized. Thank you so much for your help. Below is the relevant code: ``` # creating dataset ---------------------------------------------------------- IMG_HEIGHT = 92 IMG_WIDTH = 92 BATCH_SIZE = 10 training_set = keras.preprocessing.image_dataset_from_directory( IMAGES_PATH, labels="inferred", label_mode="int", color_mode="grayscale", batch_size=BATCH_SIZE, image_size=(IMG_HEIGHT, IMG_WIDTH), shuffle=True, seed=420, validation_split=0.10, subset="training" ) validation_set = keras.preprocessing.image_dataset_from_directory( IMAGES_PATH, labels="inferred", label_mode="int", color_mode="grayscale", batch_size=BATCH_SIZE, image_size=(IMG_HEIGHT, IMG_WIDTH), shuffle=True, seed=420, validation_split=0.10, subset="validation" ) # my model ------------------------------------------------------------------ model = Sequential() model.add(Conv2D(16, (3, 3), 1, activation='relu', input_shape=(92, 92, 1))) model.add(BatchNormalization()) model.add(PReLU()) model.add(MaxPooling2D()) model.add(Dropout(0.1)) model.add(Conv2D(32, (3, 3), 1, activation='relu')) model.add(BatchNormalization()) model.add(PReLU()) model.add(MaxPooling2D()) model.add(Dropout(0.1)) model.add(Flatten()) model.add(Dense(6000, activation='relu')) model.add(Dropout(0.3)) model.add(Dense(3040, activation='sigmoid')) model.compile("adam", loss=tf.losses.SparseCategoricalCrossentropy(), metrics=['accuracy']) # prediction script -------------------------------------------------------- im = cv2.imread(IMAGE_PATH) im = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) im = np.atleast_3d(im) im = tf.image.resize(im, [92, 92]) im = np.expand_dims(im, 0) model = keras.models.load_model(MODEL_PATH) output = model.predict(im) print("guesses") for num in sorted(output[0]): if (num > 0.5): print(chr(int(class_keys[num], 16))) print("best guess") print(np.argmax(output[0])) print(chr(int(class_keys[np.argmax(output)], 16))) ```
CNN Model Outputting All Zero's
CC BY-SA 4.0
null
2023-06-01T08:48:07.437
2023-06-01T08:48:07.437
null
null
150379
[ "tensorflow", "ocr" ]
121888
2
null
18751
0
null
I cannot comment as this account is new, so I'll post this as an additional answer. When using `get_layer()` with `index = ...` instead of `layer_name = ...`, it should be noted that there is a discrepancy between using it as written by Perochkin and using it in the Python style of `model$get_layer(...)`: ``` model$get_layer(index = as.integer(5)) ``` returns the layer at the zero-based layer index, while ``` get_layer(model, index = as.integer(5)) ``` or ``` model %>% get_layer(index = as.integer(5)) ``` is 1-based, so these return different layers. I didn't find this information anywhere else, so I wanted to share it, as this cost me quite some time to figure out.
null
CC BY-SA 4.0
null
2023-06-01T08:48:57.443
2023-06-01T08:48:57.443
null
null
150384
null
121889
1
null
null
0
11
I am trying to do open-set classification by using a temperature-scaled softmax and then interpreting the output probabilities as a confidence metric. However, for complete outlier inputs, the temperature-scaled softmax output is 0 for every class, which would indicate low confidence, except that it is invariably 1 for the middle class, which I presume compensates for all the other 0s so that the total probability adds up to 1. How do I handle this situation? Or is there a reason why it's always the middle class that gets assigned a 1, i.e. is there a method to the madness which I can probably account for in my design?
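A small numeric illustration of why this happens: softmax only reports relative evidence and always sums to 1, so whichever class happens to have the largest logit absorbs essentially all the mass, no matter how unfamiliar the input is. The logits below are made up.

```
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max()               # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([12.0, 15.0, 11.0])    # hypothetical logits for an outlier input
print(softmax(logits, T=1.0))             # roughly [0.05, 0.94, 0.02]: one class dominates
print(softmax(logits, T=10.0))            # much flatter, roughly [0.31, 0.41, 0.28]
```

For open-set rejection, looking at the raw maximum logit, the entropy of the distribution, or a much larger temperature tends to be more informative than the argmax probability on its own.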
Why does softmax give probability of 1 for outlier class?
CC BY-SA 4.0
null
2023-06-01T08:54:16.797
2023-06-01T08:54:16.797
null
null
150386
[ "machine-learning", "neural-network", "tensorflow", "pytorch", "softmax" ]
121890
1
121891
null
0
31
My objective is to experiment with various approaches for different algorithms, identify the best approach for each algorithm, and subsequently determine the best overall algorithm from among these top approaches. To accomplish this, I employed k-fold cross-validation to evaluate each approach. After conducting the evaluations, I selected the approach that yielded the most optimal metric. To simplify things, let's consider linear regression. I tried different approaches by changing techniques and steps. To assess their performance, I evaluated each approach using k-fold cross-validation. Let's say I found that approach 2 performed the best for linear regression. Without training the model with new data, I moved on to the next algorithm, which was ANN. Following a similar process, I evaluated different approaches for ANN using k-fold cross-validation. This time, approach 3 turned out to be the best. Finally, I compared approach 2 for linear regression with approach 3 for ANN and chose the superior approach. I then trained the model using the selected approach and model. Am I proceeding in the correct direction ?
Is this the best method for comparing different approaches and selecting the best model in machine learning?
CC BY-SA 4.0
null
2023-06-01T10:21:02.890
2023-06-01T17:27:06.770
null
null
150389
[ "machine-learning", "cross-validation", "model-selection" ]
121891
2
null
121890
0
null
Evaluation metrics. For regression problems, metrics like MSE or RMSE (less sensitive to extreme values) are good defaults. For classification, you can evaluate against accuracy if the classes are balanced; otherwise look at the AUC of the ROC or PR (precision-recall) curves. In addition, the F1-score is also quite common, but in some other cases you may care more about the kinds of errors made, and then the confusion matrix gives you an overview of the errors your model(s) made. Basically, you pick one metric, e.g. RMSE (for regression) or AUROC (AUC of the ROC, for classification), compute it for all your models and rank them accordingly. These metrics can also be used for selecting the best NN across training epochs (indeed, you need to compute them on a validation set). Compare and select models. Training one model (of one kind) gives you only a point estimate of its overall performance, which is an approximation, because the training and test data are limited. Moreover, there can be randomness in the model and/or training process that, at each run, may yield a different model with different performance. Especially if you do not have much data, K-fold cross-validation allows you to estimate the bias and variance of your model quite easily, i.e. the uncertainties related to the model and the data. Say your $k=10$: you would obtain $k$ models of each kind, evaluate them on the metric you care about, and thus obtain a distribution of performance for each model class. You should then aggregate the performance on your evaluation metric, obtaining the average performance (e.g. by taking the mean) but also its standard deviation (i.e. the variability in model predictions). For example, say model-1 achieves the best average but its std is quite large, while model-2 is 1% lower but its std is almost zero. So, which model do you choose? When selecting the model you should consider both mean and std, or the overall distribution. To help yourself, you can inspect a boxplot of the performance distribution of each class of models, so that you can visualize both the average performance and its associated variability. Alternatively, it is also possible to compute a $p$-value that gives you the probability that one class of models (e.g. SVM) is better than another (e.g. neural nets).
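A hedged sketch of the aggregation step described above, on synthetic data: each candidate gets a distribution of fold scores, and both the mean and the standard deviation feed the comparison. The dataset, CV setup, and hyperparameters are placeholders.

```
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)

models = {
    "linear": LinearRegression(),
    "ann": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    # negate the sklearn convention so larger numbers mean larger error
    scores = -cross_val_score(model, X, y, cv=cv, scoring="neg_root_mean_squared_error")
    print(f"{name}: RMSE {scores.mean():.2f} +/- {scores.std():.2f}")
```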
null
CC BY-SA 4.0
null
2023-06-01T10:31:02.867
2023-06-01T17:27:06.770
2023-06-01T17:27:06.770
150390
150390
null
121892
1
null
null
0
18
Is it possible to obtain the LLaMA model alone, as open-source code, without using the Hugging Face API, so that it can be hosted on our own server?
LLaMA model without using the Hugging Face API
CC BY-SA 4.0
null
2023-06-01T12:42:58.400
2023-06-01T15:14:50.360
null
null
150395
[ "python", "nlp", "scikit-learn", "machine-learning-model" ]
121893
1
null
null
2
22
There are pre-trained models outputting [Image Feature Vectors](https://www.tensorflow.org/hub/common_signatures/images) like `https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2`. While from the name one can deduce the architecture (`EfficientNetV2`) and the training data set (`ImageNet-21K`), I'm interested in how the training process was done. Was it trained "classically" for classification with some dense layers at the end that were chopped off after training? Or was some other technique like [triplet loss](https://en.wikipedia.org/wiki/Triplet_loss) applied?
How are the TensorFlow models, outputting image-feature vectors, trained?
CC BY-SA 4.0
null
2023-06-01T12:47:41.963
2023-06-01T17:49:01.450
null
null
20909
[ "machine-learning", "tensorflow" ]
121894
1
null
null
1
25
My aim is to determine whether teacher salaries in all 50 of the United States between 2013 and 2023—adjusted for inflation and the cost of living—differ significantly. I would like to ask what the wisest approach might be to modify unadjusted teacher salary averages (there is one average for each state) to account for these effects. Afterwards, I would like to graph these modified salaries for a few of these states and examine whether changes in revenue receipts within all schools in a particular state lead to a significant difference in average salaries. I am open to your insight on how I might best adjust teachers’ salaries to account for these effects and on things I ought to consider when graphing the relationship I’ve described. Please bear in mind that I am referring to information from the National Education Association, which sources from public schools. Thank you!
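One common way to do the adjustment, shown here as a hedged sketch: deflate nominal salaries with a national price index (e.g. CPI-U) to a chosen base year, then divide by a state cost-of-living index such as the BEA's Regional Price Parities (US average = 100). All numbers below are placeholders purely to show the arithmetic, not actual CPI, RPP, or salary values.

```
import pandas as pd

df = pd.DataFrame({
    "state": ["A", "A"], "year": [2013, 2023],
    "nominal_salary": [48_000, 58_000],   # placeholder NEA-style averages
    "cpi": [233.0, 304.7],                # placeholder national CPI for each year
    "rpp": [95.0, 97.0],                  # placeholder state Regional Price Parity
})
base_cpi = 233.0                          # express everything in base-year dollars

df["real_salary"] = df["nominal_salary"] * (base_cpi / df["cpi"])   # inflation adjustment
df["adjusted_salary"] = df["real_salary"] / (df["rpp"] / 100.0)     # cost-of-living adjustment
print(df)
```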
Accounting for differences in average teacher salaries, adjusted for inflation and the cost of living in each state
CC BY-SA 4.0
null
2023-06-01T13:21:44.867
2023-06-01T15:56:49.677
2023-06-01T15:56:49.677
150398
150398
[ "dataset" ]
121895
1
null
null
0
19
How do I determine the best strategy for my machine learning model? For instance, let's consider a scenario where I am working with linear regression and want to compare three different approaches. The first approach involves using all features as inputs, the second entails manually selecting the most correlated feature as input, and the third involves applying Principal Component Analysis (PCA). Given these three approaches, is it appropriate to evaluate each one using k-fold cross-validation without retraining the model, and then compare the cross-validation results to determine the best approach, without using a test dataset?
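A hedged sketch of how the three approaches could be compared with cross-validation on synthetic data; wrapping the feature selection or PCA inside a Pipeline means those steps are re-fitted within each fold, which keeps the comparison fair. The dataset, k, and the number of selected features/components are all placeholders.

```
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=500, n_features=20, n_informative=5, noise=15.0, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)

approaches = {
    "all features":     make_pipeline(StandardScaler(), LinearRegression()),
    "top-k correlated": make_pipeline(StandardScaler(), SelectKBest(f_regression, k=5), LinearRegression()),
    "pca":              make_pipeline(StandardScaler(), PCA(n_components=5), LinearRegression()),
}
for name, pipe in approaches.items():
    scores = cross_val_score(pipe, X, y, cv=cv, scoring="r2")
    print(f"{name}: R2 {scores.mean():.3f} +/- {scores.std():.3f}")
```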
comparing different approaches in machine learning
CC BY-SA 4.0
null
2023-06-01T13:28:51.760
2023-06-02T14:34:39.003
2023-06-01T13:30:32.350
150389
150389
[ "machine-learning" ]
121896
1
null
null
0
16
Let's say that in a particular field the word `the` has a specific meaning, rather than just being a determiner. The common `the` and the field-specific `the` are used interchangeably in the corpus. Is there a way to handle this? Or is manually tagging the specific ones before pre-processing the only option? I've looked at the question [Detect if word is «common English» word or slang word](https://datascience.stackexchange.com/q/64136/119882) but it doesn't seem to answer this.
What to do when there is a jargon that is the same with a common word?
CC BY-SA 4.0
null
2023-06-01T13:49:09.683
2023-06-01T13:49:09.683
null
null
119882
[ "topic-model" ]
121897
2
null
121885
0
null
I think the term you are looking for is "[linear programming](https://en.wikipedia.org/wiki/Linear_programming)". The constraints you provided as an example are too broad, and they lead to an infinite number of solutions. However, you can play with [Wolfram Alpha's linear programming solver](https://www.wolframalpha.com/widgets/view.jsp?id=daa12bbf5e4daec7b363737d6d496120) to input your actual constraints to check what it gives you.
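A hedged sketch of the same kind of feasibility problem in SciPy's linprog; the intercept value and the bounds for x3 and x4 are made up, since the question only gives bounds for x1 and x2. With a zero objective, linprog simply returns one feasible point on the equality constraint; supplying a real objective would pick a specific one.

```
from scipy.optimize import linprog

coeffs = [0.1, 0.8, 3.4, 5.0]        # coefficients from the question's model
intercept = 2.0                      # hypothetical value of c
target = 150.0

res = linprog(
    c=[0, 0, 0, 0],                  # no objective: we only ask for a feasible point
    A_eq=[coeffs], b_eq=[target - intercept],
    bounds=[(30, 45), (55, 60), (0, 20), (0, 20)],   # last two bounds are placeholders
    method="highs",
)
print(res.x if res.success else "no feasible point under these bounds")
```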
null
CC BY-SA 4.0
null
2023-06-01T14:01:52.107
2023-06-01T14:01:52.107
null
null
14675
null
121898
2
null
121892
0
null
The license for Llama here [https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/LICENSE](https://huggingface.co/decapoda-research/llama-7b-hf/blob/main/LICENSE) is from Meta, and doesn't require HuggingFace API.
null
CC BY-SA 4.0
null
2023-06-01T15:14:50.360
2023-06-01T15:14:50.360
null
null
146483
null
121899
1
null
null
1
8
I've been given a task by work to extract relevant disease and medication information from patient history case notes. There are about 5000 case notes, and they are about a paragraph long; they contain information on diseases/medications, family members' diseases, hospital admissions/scans and health behaviours (smoking, drinking, etc.). While I have some knowledge of how NLP works, I'm a statistician, not a data scientist. I see from ChatGPT that automation of this task is possible, but I'm unsure if it's something someone in my position is able to do. I'm aware of clinical BERT, but I'm not sure how applicable it is to my problem. As far as I know, clinical BERT is a pretrained model for word embeddings, and I feel like this gets me some of the way towards something useful. I have some queries though. - Firstly, am I going down the right path in looking at something like clinical BERT? - How would a model that is pretrained on a corpus of text handle text that it has not formed embeddings for, e.g. spelling errors, abbreviations and synonyms? - If it can do the above, do I have to train it myself to recognise diseases and medications with NER fine-tuning? - Does fine-tuning involve providing a list of all diseases and medications I expect to see in my case notes? Or, once it is fine-tuned on a few examples, will it be able to identify unlisted diseases/medications? I realise these questions are pretty superficial; I just need to know whether it's worth going down this route, or whether the models that have the capacity to do what I need are beyond what I can realistically develop and I may as well get a head start on extracting this information manually. Any guidance would be really appreciated. Thanks!
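For a sense of scale, running a pretrained clinical/biomedical NER model over the notes can be only a few lines with the Hugging Face transformers pipeline. The checkpoint name below is a placeholder and must be replaced with a real disease/drug NER model; the sample note is invented.

```
from transformers import pipeline

# Placeholder checkpoint: substitute a clinical or biomedical NER model
# fine-tuned for disease and drug entities.
ner = pipeline("ner", model="some-org/clinical-ner-model", aggregation_strategy="simple")

note = "Pt has T2DM and HTN, started on metformin 500mg; mother had breast ca."
for entity in ner(note):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))
```

Such models tag entity spans from context, so they are not limited to a fixed list of diseases, though abbreviations and misspellings that never appear in the training data will still degrade recall.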
Are there any prebuilt models that I can apply to electronic health records
CC BY-SA 4.0
null
2023-06-01T16:21:59.533
2023-06-01T16:21:59.533
null
null
150406
[ "nlp" ]
121900
1
null
null
0
8
I'm currently designing a methodology for implementing a supervised classification ML algorithm and seeking guidance to ensure I'm heading in the right direction. The problem I'm addressing involves measuring customer satisfaction for a service using the Net Promoter Score (NPS) obtained through customer feedback. The service can be evaluated based on two aspects: functional and non-functional. I have a set of features that belong to each category, with the non-functional features being relatively fewer and more challenging to construct. Additionally, I need to capture both types of features with a temporal dependency. I aim to train a classifier to predict each customer's monthly NPS. To achieve this, I intend to use data encompassing the features from the day the questionnaire was filled and the previous 30 days. Every month, I receive a new batch of labelled clients. Initially, I considered creating an input data matrix, let's call it `W_nxm`, where each row corresponds to a customer, and the columns represent aggregated features (e.g., averages, sums) for the desired time frame. Simultaneously, I would use a vector `y_nx1` to store the known customer scores, which would serve as the training data for my model. [](https://i.stack.imgur.com/zey6t.png) However, I started contemplating whether it would be beneficial to incorporate the time dimension directly rather than aggregating it. This curiosity has led me to explore convolutional neural networks (CNNs) and their success in tasks like image processing (2D) and video analysis (3D). In video analysis, time propagates the third dimension through 2D images. In my case, the signals are 1D (features). As a potential solution, I suggest transforming each 1D feature signal into images using grayscale or scalogram representations within the desired timeframe. Consequently, the input to my network would become a 3D matrix that describes the behaviour of each client. [](https://i.stack.imgur.com/GTFEd.png) Since I'm contemplating a different approach to constructing my input data, where time is not propagated per image but instead my features are, I would greatly appreciate any feedback or suggestions on this formulation. I'm open to hearing your thoughts and insights. Thank you in advance for your help!
Seeking Feedback on Methodology for Implementing Supervised Classification ML Algorithm for Customer Satisfaction Prediction
CC BY-SA 4.0
null
2023-06-01T16:41:21.220
2023-06-01T16:41:21.220
null
null
150404
[ "machine-learning", "deep-learning", "neural-network", "classification", "supervised-learning" ]
121901
2
null
121893
0
null
I would exclude any triplet or margin loss, simply because they are too specific to metric and similarity learning among classes of entities: for example, the triplet loss was designed for face recognition, to measure the "distance" between two images, which is required to be low if they belong to the same identity. These image feature-vector models should be generally applicable to whatever downstream task (taking into account possible fine-tuning), so I'd say that these models are either: 1) trained with the method suggested by their own paper, 2) trained while also making use of modern practices, or even 3) trained by means of some self-supervised method so as to boost the representational power of the learned image vectors.
null
CC BY-SA 4.0
null
2023-06-01T17:49:01.450
2023-06-01T17:49:01.450
null
null
150390
null
121902
1
null
null
0
27
I have a problem. I have created a classification model for predicting data, and the two classes are highly imbalanced, so I dealt with it using the SMOTE+ENN technique. I applied SMOTE+ENN before splitting the data into training and test sets. The reason is that SMOTE generates synthetic data to balance the classes, and I thought that performing SMOTE+ENN before splitting the data would create a representative state for the data. What I want to ask is this: I performed SMOTE+ENN first and then split the data into training and test sets. Was my approach correct, given that it was based on ChatGPT's advice? Currently, I am conducting research for a journal article, and I am unable to modify the model. The only thing I can do is provide supporting research or reasoning as to why SMOTE+ENN is performed before splitting the training and test data. Can you please help me with some supporting arguments or rationales for this approach? Can I provide the following rationale: "Performing SMOTE+ENN before splitting the data can still be effective because it aims to create a more balanced situation in the dataset by generating synthetic data through SMOTE that resembles the original data but with different statistical values. This means that there will be new data points introduced. At the same time, ENN helps reduce the redundancy of samples close to the minority class. I have also set the parameter to increase the data by only 10% and decrease it by 10%, which is a minimal change. Therefore, the model's performance remains relatively unchanged, and the interpretation of the model evaluation only slightly varies."
I have created a classification model for predicting data, and the problem is that the two classes are highly imbalanced
CC BY-SA 4.0
null
2023-06-01T17:59:53.883
2023-06-02T04:28:04.470
2023-06-02T04:28:04.470
150408
150408
[ "machine-learning", "class-imbalance", "smote" ]
121903
1
null
null
0
14
I'm trying to train my ML model with svm.SVC from sklearn, but it is taking so much time that it won't even finish training once. This happens only when a kernel function is used. Currently I have selected 10 features for my model to train on; my model setting is ``` SupportVectorMachineModel=svm.SVC(C=1, kernel='poly', gamma=1) ``` A kernel function should take longer than usual, but this feels excessive, since my dataset only has around 360 samples, which is not much either. Any ideas or suggestions? My processor is a Core i5, 1.6 GHz (RAM: 16 GB).
SVM taking too much time to train
CC BY-SA 4.0
null
2023-06-01T18:09:37.550
2023-06-01T18:09:37.550
null
null
95811
[ "machine-learning", "python", "classification", "scikit-learn", "machine-learning-model" ]
121904
1
null
null
0
9
Trying to create a basic machine scoring model that takes in 4 parameters: - Number of maintenance events - Years of life left - Manufacturer support (bit: either yes or no) - Visual condition The weightings of the parameters are also undecided. However, we know [Years of life remaining] will be much higher than the others. [Maintenance events] will be next highest, followed by [Visual Condition], and then [MFG Support]. We have an idea of where a given machine should fall in terms of the score and what year to replace it. But the formulas I create never really reflect that, so the weightings and formula probably have to change in unknown ways. Additionally, the [Maintenance events] weighting will probably follow an S-curve. A few maintenance events aren't a big deal, but as they accumulate, the maintenance score should drop faster and faster. After some point, where there has just been a ton of maintenance, its score is already pretty bad and additional events won't drop the score much more. So it follows an S-curve. In terms of % for weighting, we were thinking about 60% for [Years of life left], 35% for [Maintenance], and 5% for [Condition], then maybe altering those to create a weighting for [Support status]. So we have it set to add up to 100% total. I am not sharing my formula because it is a bit random. Essentially, it's just calculating the score for each parameter based on the weighting (e.g. for [Remaining Years of life], the max score it can get is 60; if the machine is halfway through its life, it gets a score of 30). For [Maintenance], it is essentially calculated the same way but with an S-curve formula spat out by Excel. I'm really struggling to determine the correct weightings and how to create the formula. Is there a standard way to handle questions like this?
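A hedged sketch of one way to wire this up, using a logistic (S-shaped) factor for maintenance events and the rough 60/35/5 split above with 5% carved out for manufacturer support; the midpoint, steepness, and exact weights are assumptions to be tuned against machines whose "right" score is already known.

```
import numpy as np

def maintenance_factor(events, midpoint=8, steepness=0.8):
    # Logistic S-curve: close to 1.0 for few events, dropping fastest around `midpoint`,
    # flattening out near 0 once the maintenance history is already very bad.
    return 1.0 / (1.0 + np.exp(steepness * (events - midpoint)))

def machine_score(years_left, total_life, events, condition, mfg_support,
                  w_life=0.55, w_maint=0.35, w_cond=0.05, w_support=0.05):
    life_part = w_life * (years_left / total_life)          # linear in remaining life
    maint_part = w_maint * maintenance_factor(events)        # S-curve in event count
    cond_part = w_cond * condition                           # visual condition rated 0..1
    support_part = w_support * (1.0 if mfg_support else 0.0)
    return 100 * (life_part + maint_part + cond_part + support_part)

print(machine_score(years_left=5, total_life=10, events=3, condition=0.7, mfg_support=True))
```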
Help creating a SCORING model/formula.......am I on the right track?
CC BY-SA 4.0
null
2023-06-01T18:27:49.930
2023-06-01T18:27:49.930
null
null
150410
[ "predictive-modeling", "data-science-model", "excel", "scoring" ]
121905
2
null
121793
1
null
Can you try the following approach using OpenCV? The logic behind this is: - Convert the image to grayscale. - Apply a Gaussian blur to the image to smooth out the edges. - Threshold the image to create a binary image where the hot-spots are white and the rest of the image is black. - Dilate the binary image to fill in any small holes in the hot-spots. - Find the contours in the dilated image. - Iterate over the contours and skip any that are not large enough. - Fill in all of the remaining contours with black. ``` import cv2 def remove_hotspots(image): # Convert the image to grayscale gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY) # Apply a Gaussian blur to the image to smooth out the edges blur = cv2.GaussianBlur(gray, (5, 5), 0) # Threshold the image to create a binary image where the hot-spots are white and the rest of the image is black threshold = cv2.threshold(blur, 128, 255, cv2.THRESH_BINARY)[1] # Dilate the binary image to fill in any small holes in the hot-spots dilated = cv2.dilate(threshold, None, iterations=2) # Find the contours in the dilated image contours, hierarchy = cv2.findContours(dilated, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE) # Iterate over the contours and skip any that are not large enough for i in range(len(contours)): if cv2.contourArea(contours[i]) < 100: continue # Fill in the contour with black cv2.drawContours(image, contours, i, 0, -1) return image if __name__ == "__main__": # Load the image image = cv2.imread("image.jpg") # Remove the hot-spots (work on a copy so the original stays intact for display) new_image = remove_hotspots(image.copy()) # Display the original and new images cv2.imshow("Original", image) cv2.imshow("New", new_image) cv2.waitKey(0) cv2.destroyAllWindows() ```
null
CC BY-SA 4.0
null
2023-06-01T18:47:34.170
2023-06-01T18:47:34.170
null
null
92050
null
121906
1
121917
null
0
8
I am attempting to determine the most useful bands of a multiband image classification (i.e. Red, Green, Blue, Near Infrared, etc. used for classifying pixels) and wrote the following function to build a decision tree. It uses [sci-kit learn's Decision Tree Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier.feature_importances_) with entropy as the split criterion. Finally, it uses the [feature_importances_](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier.feature_importances_) function to calculate the importance of each band: ``` def make_tree(X_train, y_train): """prints a decision tree and an array of the helpfulness of each band""" dtc = DecisionTreeClassifier(criterion='entropy') dtc.fit(X_train, y_train) tree.plot_tree(dtc) plt.show() importances = dtc.feature_importances_ large_to_small_idx = np.argsort(importances)[::-1] for idx in large_to_small_idx: print(f"Band {idx + 1}: {importances[idx]}\n") ``` I assumed that since the splitting criterion on the decision tree was set to entropy that `feature_importances_` would also be calculated as some form of entropy information gain. However, in sci-kit learn's documentation it mentions how the feature importance is actually calculated: > The importance of a feature is computed as the (normalized) total reduction of the criterion brought by that feature. It is also known as the Gini importance. Is this an issue or is the feature importance essentially still being calculated based on entropy? If this is not a good way to calculate feature importance based on entropy, is there a way to tweak `feature_importances_` or some other method I am missing to do this? Thanks for the help!
Calculating feature importance with Scikit-Learn's Decision Tree Classifier
CC BY-SA 4.0
null
2023-06-01T18:53:48.353
2023-06-02T10:51:08.863
2023-06-01T20:01:52.713
150412
150412
[ "scikit-learn", "decision-trees" ]
121907
1
null
null
0
12
I have time series data. The dataset contains around 600,000 metrics. Each metric is published daily and has three values, let's say 'count', 'number of something', 'length of something'. It looks this way: ``` name cnt_2023_05_31 num_2023_05_31 len_2023_05_31 cnt_2023_06_01 num_2023_06_01 len_2023_06_01 m_1 100 1000 10000(bad) 99 1002 10003 .... 600.000 of such metrics ``` So for each day, each metric gets three values. I have historical data where bad metrics are labeled. For example, the length at 2023-05-31 was bad. I have a quite naive approach to catching issues at the moment: just a set of rules with hand-made thresholds. Rule example: ``` m_1 (metric name) length (values name) - should not increase or decrease in more than 1% comparing to previous value - or - should not increase or decrease in more than 42 comparing to previous value ``` Of course, it's hard to maintain such rules, since I have to periodically adjust the threshold values and percentages. My thoughts: - Can I periodically train some sort of ML model using the labeled data? I know exactly what bad data points look like. I'm pretty sure there is also a correlation between many metrics. Could that help? - Is there any statistical technique to automatically adjust the thresholds in the rules? Some sort of standard deviation calculated over a sliding window (not good at math :( sorry if it sounds odd)? I've read about people using z-scores. The ultimate business goals are: - zero false positives - zero rule-threshold maintenance.
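A minimal sketch of the rolling z-score idea mentioned above, on a made-up single metric; the same logic can be applied column-wise across all metrics, and the window size and z threshold are assumptions to tune.

```
import numpy as np
import pandas as pd

s = pd.Series(np.random.normal(1000, 5, 120))   # hypothetical daily values for one metric
s.iloc[100] = 1200                               # injected anomaly

window = 30
mean = s.rolling(window).mean().shift(1)         # shift so today is not part of its own baseline
std = s.rolling(window).std().shift(1)
z = (s - mean) / std

alerts = z.abs() > 4                             # threshold adapts to each metric's own history
print(s[alerts])
```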
Supervised time series anomaly detection
CC BY-SA 4.0
null
2023-06-01T19:08:16.560
2023-06-01T19:08:16.560
null
null
150411
[ "time-series", "statistics", "supervised-learning" ]
121908
2
null
121878
0
null
Here is the solution that worked for me: downgrade your SMP to the version you were training with, or retrain your model on a fresh SMP install if resources allow. This way you avoid legacy Python issues. Hence, this is NOT a PyTorch issue; it is due to changes in the SMP models that broke backward compatibility.
null
CC BY-SA 4.0
null
2023-06-01T19:10:18.313
2023-06-01T19:10:18.313
null
null
82027
null
121909
1
null
null
0
17
I'm new to machine learning, and I'm working with some international head-to-head sports competition data. I used relational data creation techniques in tidyverse to join several data sources to create an event-based dataset where each row is the outcome of a unique matchup between 2 teams and their measurable traits, with the obvious goal of finding the importance of those traits on the outcome. Note: in general, I'm trying a few different ways to organize the data to get hands-on experience with creating and analyzing data sets effectively, so if how I set it up isn't how you'd do it, don't go too hard on me. Specifically, I have a repeat in variables based on home versus away, so that I could try to get all the event data into one and only one observation (for example, "average_speed_home" and "average_speed_away"). I know and will try other data configurations, and any suggestions related to that would be nice, but it's not the main reason I'm asking for help. My main question is related to what I should do with the ID variables that I used to create the data set, in relation to data splitting for machine learning. I've read in a few posts that I should keep those variables for the data splitting because it could create bias if I don't. But the ID variables I have aren't really factors that I want to include when creating my models. Specifically, I used Home and Away variables for the teams in the matchup, so I could pivot wider and include all the data for each event in one observation, but those team differences are already shown in the other variables that are dedicated to either home or away for a certain trait, such as "average_speed_home" and "average_speed_away". The Home and Away variables now just say a nationality. Since several of the yearly competitions are included in the data set, I wouldn't want to analyze the nationality impact on the results, because the composition of the teams changes quite frequently and the results might be biased due to recency of success (and also all the actual skills and performance metrics are included as their own variables). Also, I'm unsure about whether to leave the match # ID variable as well, because each line is a unique value that is essentially just an observation count. Does the distribution of the data matter for data splitting when the data is event-based and standardized such that the only differences would be which actual national teams faced each other in the matchup? And if I'm only looking for the numerical values of certain skills and their impact on the outcome, should I worry about including the ID variables when data splitting? TL;DR: I don't want to include certain ID variables for the model creation, but I'm unsure if getting rid of those variables before the data split would create bias, especially in standardized event-based data. What's the general rule of thumb for when to get rid of seemingly unimportant ID variables that were only used to create the relational data set? When is it better to get rid of them: before the data split or after the data split? Thank you.
How to handle ID variables when splitting data for machine learning?
CC BY-SA 4.0
null
2023-06-01T19:16:57.140
2023-06-03T08:42:39.590
null
null
150409
[ "machine-learning", "dataset", "data-cleaning" ]
121910
1
null
null
0
6
So the output vector I'm training a ktrain model on is 2 dimensional (the outputs are ['Temp', 'Velo']. And ktrain is even able to compute these when I use predictor.predict (see attached image), but when I store it in a variable, it returns a completely different 1 dimensional array of the same length. [](https://i.stack.imgur.com/UALgx.png) Why is this happening? Where are those final 1 dimensional values even coming from? And how can I retrieve the actual 2 dimensional predicted output? Or will I necessarily have to train separate models for both vectors?
How do I get ktrain to return a multilabel output in tabular regression?
CC BY-SA 4.0
null
2023-06-01T20:14:05.537
2023-06-01T20:14:05.537
null
null
136334
[ "python", "neural-network", "regression", "jupyter", "multi-output" ]
121911
1
null
null
0
21
I am training ML regression models to predict financial returns in a high-frequency trading environment. I have one time series of intraday data for 40 years for one individual security at the moment. I have run a cross-validation (keeping the timestamps in mind when creating the folds). However, after I obtain the best model and get predictions on the test data, even though my overall gross returns are positive, I am trading too often, so trading costs eat all my profits, giving me negative returns. Thus, I need to find a way to select which days to trade based on the predicted returns. As I cannot apply the cross-sectional approach typical of low-frequency strategies (only taking a position on the top X% of securities), I need to find another solution. Instead of fixing an a-priori lower-bound threshold and trading only when the predicted returns are higher (in absolute terms), I would like to add this as an extra parameter, to focus only on the most profitable trades based on my predictions. I thought about the following approach: - Run cross-validation to select the best hyperparameter combination, thus obtaining the best model. - Retrain the model with the best hyperparameters found in step 1 for each of the k folds (defined in the same way as before), creating a 'manual' cross-validation approach described as follows: for each fold, obtain the validation predicted returns and, given a list of lower-bound thresholds, compute the Sharpe ratio (the scoring metric in my case) for each of the lower-bound thresholds. For each threshold, compute the mean of the validation Sharpe ratios across all k folds and pick the one with the highest value. I was wondering whether this reasoning is correct from an ML perspective, or whether I am significantly increasing the bias and the second validation should be run on another held-out dataset.
Is subsequent cross validation on the same dataset biased?
CC BY-SA 4.0
null
2023-06-01T22:33:11.567
2023-06-01T22:35:21.060
2023-06-01T22:35:21.060
150418
150418
[ "machine-learning", "cross-validation" ]
121912
1
null
null
0
14
I'm trying to determine a quantitative value for how much a target variable (inflation) changes when an indicator variable (interest rate) changes. The industry basically uses linear models such as VAR. Are there state-of-the-art approaches to capture such a relationship?
What are different ways to determine how an explanatory variable affect a target variable?
CC BY-SA 4.0
null
2023-06-02T00:26:14.930
2023-06-02T05:11:40.433
null
null
150417
[ "machine-learning", "time-series", "linear-models", "causalimpact" ]
121913
2
null
109326
0
null
What you have is a classic case of high dimensional categorical data. Basically you have a lot of categorical features in your input, along with the target variable having a lot of classes. In this case I would suggest not using `OneHotEncoder` as it will further increase your dimensionality and result in poor predictions. Also you mentioned that there are no ordinal features. Hence using `OrdinalEncoder` won't make sense too. There are a lot of different types of encoders, not just OneHot and Ordinal and depending on your type of features, you can choose which one suits the best. [category-encoders](https://contrib.scikit-learn.org/category_encoders/) has a list of various encoders you can check out. I would suggest `CatBoostEncoder` or `LeaveOneOutEncoder` as they usually perform best on high dimensional data but try out all and see which works. Cheers!
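For illustration, a minimal sketch of how this could look with the category-encoders package (the DataFrame columns and target below are made-up placeholders, not your actual data):
```
# Minimal sketch, assuming X is a pandas DataFrame of categorical features
# and y is the target series; the column names here are placeholders.
import pandas as pd
import category_encoders as ce

X = pd.DataFrame({
    "origin": ["DE", "FR", "DE", "US", "FR"],
    "product": ["a", "b", "a", "c", "b"],
})
y = pd.Series([1, 0, 1, 0, 1])

# Target-based encoders such as CatBoostEncoder need y during fitting.
encoder = ce.CatBoostEncoder(cols=["origin", "product"])
X_encoded = encoder.fit_transform(X, y)
print(X_encoded)
```
Note that target-based encoders should be fitted on the training split only and then reused to transform the validation/test data, otherwise target leakage creeps in.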
null
CC BY-SA 4.0
null
2023-06-02T05:00:36.993
2023-06-02T05:00:36.993
null
null
119921
null
121914
2
null
121912
1
null
Yes, there are multiple ways to check the relationship between input and target variables. Some of them are: 1.) Correlation-based techniques - you can use `Pearson`, `Spearman` or `Kendall` correlation depending on whether your variables are linearly or only monotonically/non-linearly related. 2.) Mutual information (e.g. scikit-learn's `mutual_info_regression` / `mutual_info_classif`) to check the dependency between two variables, including non-linear dependency. 3.) Feature importance computed from machine learning models. There are other techniques you can use too. Google is your best friend! Cheers!
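As a rough sketch of what those checks could look like with pandas/scikit-learn (the column names and data below are made up for illustration only):
```
# Illustrative sketch; "interest_rate" / "inflation" are placeholder names.
import pandas as pd
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
df = pd.DataFrame({"interest_rate": rng.normal(size=200)})
df["inflation"] = 0.5 * df["interest_rate"] + rng.normal(scale=0.1, size=200)

# 1) Correlation (Pearson for linear, Spearman/Kendall for monotonic relations)
print(df.corr(method="pearson"))
print(df.corr(method="spearman"))

# 2) Mutual information (also captures non-linear dependency)
mi = mutual_info_regression(df[["interest_rate"]], df["inflation"])
print("mutual information:", mi)

# 3) Model-based feature importance
rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(df[["interest_rate"]], df["inflation"])
print("feature importance:", rf.feature_importances_)
```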
null
CC BY-SA 4.0
null
2023-06-02T05:11:40.433
2023-06-02T05:11:40.433
null
null
119921
null
121915
1
null
null
0
6
Currently I'm working on an ML project and just need some information: is there any tool that can load audio files and generate spectrograms, with an option to annotate/label the spectrograms? I have thousands of audio files to label. I've found a tool (Audacity), but I need to load each audio file individually and it doesn't give min and max frequency after exporting labels. Any help would be highly appreciated. Thanks
Labelling spectrograms
CC BY-SA 4.0
null
2023-06-02T05:15:31.723
2023-06-02T05:15:31.723
null
null
150419
[ "machine-learning", "deep-learning", "preprocessing", "audio-recognition", "labelling" ]
121916
2
null
88454
0
null
I'm using [Paperspace Gradient](https://console.paperspace.com/signup?R=N5PXPJ5) to train my models. It provides all kinds of CPU and GPU machines, automated workflows and deployment with github integration. Setup might be confusing at first, but it is pretty well documented. It also has RAPIDS as a preconfigured template. [](https://i.stack.imgur.com/YYrB9.png)
null
CC BY-SA 4.0
null
2023-06-02T08:00:50.130
2023-06-02T08:00:50.130
null
null
150130
null
121917
2
null
121906
0
null
The entropy criterion is used by the CART algorithm to build the DT itself, by greedily evaluating which split is best at each node. So it's not directly related to feature importance, which, in the case of DTs, is computed as the reduction in impurity brought by a feature. This is not an error, it's by design: it is an extra capability that DTs have. You can also estimate feature importance with Random Forests and Extra Trees, which should provide more accurate results since they compute it from an ensemble of models. The way it's computed is still based on impurity reduction, which you can think of as a quantification of how much a feature improves the model's performance.
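For illustration, a small sketch of reading off the impurity-based importances from a single tree and from a Random Forest in scikit-learn (on a toy dataset, not the asker's data):
```
# Small illustration on a toy dataset (not the asker's data).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Both expose the impurity-reduction-based importances the same way;
# the ensemble estimate is usually more stable than a single tree's.
print("single tree  :", tree.feature_importances_)
print("random forest:", forest.feature_importances_)
```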
null
CC BY-SA 4.0
null
2023-06-02T10:51:08.863
2023-06-02T10:51:08.863
null
null
150390
null
121918
1
null
null
0
18
I have 188 different drilling datasets with two columns: temperature and the target column thermal conductivity. The drilling process involves drilling through four different materials stacked on top of each other. Each material has a unique thermal conductivity, so the target variable can only take one of 4 different values, given the four materials. My question is: what kind of machine learning problem is this? Is it a regression or a multi-class classification problem?
Is this time-series problem more a multi-classification or regression problem?
CC BY-SA 4.0
null
2023-06-02T11:01:26.920
2023-06-02T15:13:08.560
2023-06-02T15:13:08.560
29169
145940
[ "machine-learning", "deep-learning", "classification", "time-series", "regression" ]
121919
2
null
121817
0
null
Thank you for your answer, I solved the puzzle meanwhile, but it took me a while. The [website from Akshaj Verma](https://towardsdatascience.com/pytorch-basics-how-to-train-your-neural-net-intro-to-rnn-cb6ebc594677) has helped me a lot. The main problem was that I was confused with usage of batch_first, sequence length, batch size and the organisation of the tensors (data, h0 and the result). First, I show you my code and then I explain it. ``` class DC_Network(nn.Module): def __init__(self, input_size, hidden_size, num_layers, output_size, sequence_length, batch_size): super(DC_Network, self).__init__() self.input_size = input_size self.hidden_size = hidden_size self.num_layers = num_layers self.batch_size = batch_size self.sequence_length = sequence_length self.output_size = output_size self.rnn = nn.RNN(input_size=self.input_size, hidden_size=self.hidden_size, num_layers = self.num_layers, batch_first=True, nonlinearity='relu') self.fc = nn.Linear(self.hidden_size * self.sequence_length, self.output_size ) def forward(self, x): inputs = x.unfold(dimension = 0,size = self.sequence_length, step = 1) # 2 if list(inputs.shape)[-1] == 4: inputs = torch.swapaxes(inputs,1,2) h0 = torch.ones(self.num_layers , self.batch_size, self.hidden_size) out, _ = self.rnn(inputs, h0) out = out.reshape(self.batch_size, self.hidden_size * self.sequence_length) # 5 out = self.fc(out) return out def train_model(self, train_data, train_labels, num_epochs, learning_rate=0.01): criterion = nn.MSELoss() optimizer = optim.Adam(self.parameters(), lr=learning_rate) for epoch in range(num_epochs): total_loss = 0 optimizer.zero_grad() output = self(train_data) loss = criterion(output, train_labels) total_loss += loss.item() loss.backward(retain_graph=True) optimizer.step() if (epoch + 1) % 10 == 0: print(f"Epoch [{epoch + 1}/{num_epochs}], Loss: {total_loss}") ``` ### Understanding sequence and batch_size For a RNN a sequence of the input values needs to be defined. The sequence can be an ordered list or different timestamps of the input data. A complete sequence is required to generate an output. The sequence length depends on the system. In PyTorch (and I think also in other neural network frameworks) a sequence for RNN is equivalent to one batch. (A batch is a subset of the data). The batch size defines how many batches are available in the input data or with other words in how many batches the input data can be divided (-which means for RNN in how many sequences the input data can be divided. ``` [a1, a2, a3], # 1. sequence of input a = 1. batch [a4, a5, a6], # 3. sequence of input a = 2. batch [a7, a8, a9], # 3. sequence of input a = 3. batch ] batch_size = len(batches) # 3 ``` The RNN creates for every element in a sequence 1 neuron (because every element needs to be stored with its previous value. In addition, the hidden_size can be independently defined, which multiplies every neuron. Therefore, the output size of the RNN is sequence_length * hidden_size and that's also the reason why a linear layer (fully connected, fc) is needed to reduce the system output to the actual size (output_size). ### Restructuring of the input_data In the next step, it is necessary to reorganise the input_data into batches/sequences. In the following example I have two inputs a and b. These inputs are organized as 2-dimensional tensor x. ``` x = [[ a1, b1], [ a2, b2], [ a3, b3], [ a4, b4], [ a5, b5], [ a6, b6], [ a7, b7], [ a8, b8], [ a9, b9], ... ] ``` This 2-dimensional tensor need to be reorganized in batches or sequences. 
This separation into batches creates an additional dimension in the tensor x, which brings us to the nn.RNN parameter batch_first=True. The batch or sequence must be identified by the first index of the reorganized 3-dimensional input data. The reorganization can be done with view or reshape (in PyTorch) or with unfold to create overlapping sequences. ``` inputs = x.view(self.batch_size, self.sequence_length, self.input_size) inputs = [ [[a1, b1], [a2, b2], [a3, b3] ], [[a4, b4], [a5, b5], [a6, b6] ], [[a7, b7], [a8, b8], [a9, b9] ] ] inputs = x.unfold(dimension = 0,size = self.sequence_length, step = 1) inputs = [ [[a1, b1], [a2, b2], [a3, b3] ], [[a2, b2], [a3, b3], [a4, b4] ], [[a3, b3], [a4, b4], [a5, b5] ], ... ] ``` If you use unfold with more than one input, it is necessary to reorganize the shape of the inputs, because the row and column of the sequence are swapped. ``` inputs = torch.swapaxes(inputs,1,2) ``` ### Initial hidden state The hidden state of an RNN stores the result of the last calculation. Therefore, an initial value is needed for the first calculation. What I didn't know is that in PyTorch a hidden state has to be created for every batch. This confused me completely, because normally (or in my understanding of RNNs) there is only one initial state for all batches. Hence, I initialized the hidden states for all batches equally. In addition, the hidden state is also needed for every layer. ``` h0 = torch.ones(self.num_layers , self.batch_size, self.hidden_size) ``` ### Calculation of the output With the correctly organized inputs and hidden state h0, the calculation of the outputs can easily be done. ``` out, _ = self.rnn(inputs, h0) ``` The _ is the placeholder for the new hidden state values, which I don't use in my case. For every epoch I initialize the hidden state anew. However, as [Akshaj Verma](https://towardsdatascience.com/pytorch-basics-how-to-train-your-neural-net-intro-to-rnn-cb6ebc594677) mentioned on his site, the out tensor's organisation is independent of the parameter batch_first, [see issue 4145](https://github.com/pytorch/pytorch/issues/4145). Therefore, the out shape has to be reorganized before it can be used in the linear layer. ``` out = out.reshape(self.batch_size, self.hidden_size * self.sequence_length) ``` Now, with this implementation, I was able to successfully learn the dynamic behavior of a DC motor.
null
CC BY-SA 4.0
null
2023-06-02T11:06:46.070
2023-06-02T11:06:46.070
null
null
150212
null
121920
2
null
121760
0
null
I asked a second question on this topic ([RNN with PyTorch - I don't understand the initial parameters](https://datascience.stackexchange.com/questions/121817/rnn-with-pytorch-i-dont-understand-the-initial-parameters/121919#121919)), and the answer there addresses these questions as well.
null
CC BY-SA 4.0
null
2023-06-02T11:13:31.630
2023-06-02T11:13:31.630
null
null
150212
null
121921
1
null
null
0
28
I have created a classification model, and the problem is that the two classes are highly imbalanced. So I dealt with it using the SMOTE+ENN technique, and I applied SMOTE+ENN before splitting the data into training and test sets. The reason is that SMOTE generates synthetic data to balance the classes, and I thought that performing SMOTE+ENN before splitting the data would create a representative state for the data. Currently, I am conducting research for a journal article, and I am unable to modify the model. The only thing I can do is provide supporting research or reasoning as to why SMOTE+ENN is performed before splitting the training and test data. Can you please help me with some supporting arguments or rationales for this approach? Example: can I provide the following rationale? "Performing SMOTE+ENN before splitting the data can still be effective because it aims to create a more balanced situation in the dataset by generating synthetic data through SMOTE that resembles the original data but with different statistical values. This means that new data points will be introduced. At the same time, ENN helps reduce the redundancy of samples close to the minority class. I have also set the parameters to increase the data by only 10% and decrease it by 10%, which is a minimal change. Therefore, the model's performance remains relatively unchanged, and the interpretation of the model evaluation only varies slightly."
Is there any rationale for performing SMOTE-ENN before train-test-split?
CC BY-SA 4.0
null
2023-06-02T11:43:01.193
2023-06-02T15:29:26.997
2023-06-02T12:42:58.843
52291
150408
[ "machine-learning", "dataset", "class-imbalance", "smote", "imbalanced-data" ]
121922
1
null
null
0
14
I have a dataset of weekly shortage data with categorical features such as origin and product type, and on the numerical side I have demand quantity. The weekly shortage data is given per product type and per origin. I don't know whether to approach this as a time series model or a regression model; kindly help.
Forecasting Model on Cross sectional data
CC BY-SA 4.0
null
2023-06-02T12:26:28.783
2023-06-02T13:50:13.003
null
null
150430
[ "time-series" ]
121923
2
null
121921
0
null
You should not apply SMOTE-ENN before splitting. It has two big issues: - Adding synthetic data in the test set will change the distribution of the data and the metrics you measure will not be representative of the true distribution. - It will introduce a data leak. The fact that SMOTE-ENN will create data based on the entire dataset means that the training data of the model includes information about the test data. So I would not try to rationalize it and try to fix the issue instead.
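To make the correct order concrete, here is a minimal sketch with imbalanced-learn (the data is synthetic and stands in for your real X, y); SMOTE-ENN only ever sees the training partition:
```
# Minimal sketch; make_classification stands in for your real X, y.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from imblearn.combine import SMOTEENN

X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

# 1) Split first: the test set stays untouched.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, test_size=0.2, random_state=0
)

# 2) Resample the training partition only.
X_res, y_res = SMOTEENN(random_state=0).fit_resample(X_train, y_train)

# 3) Fit on the resampled training data, evaluate on the original test data.
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print(classification_report(y_test, clf.predict(X_test)))
```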
null
CC BY-SA 4.0
null
2023-06-02T12:41:33.390
2023-06-02T12:50:52.793
2023-06-02T12:50:52.793
52291
52291
null
121924
2
null
121922
0
null
It seems like you do not have any information about the specific dates, so this won't be a time series problem; it would be a regression problem. If you had the date information for each week in the dataset, then it would have been a time series problem.
null
CC BY-SA 4.0
null
2023-06-02T13:50:13.003
2023-06-02T13:50:13.003
null
null
144743
null
121925
1
null
null
0
10
I am wondering how to approach a project, where I would like to increase the number of output classes of an already trained network. I have very good reason to believe that the model has already learnt the relevant information to be able to predict this new class, that is why I would aim only for fine-tuning (also, I have a lot less data for this class than for the rest and I do not have the hardware to train from scratch). The model that I want to use is a transformer, where the decoder's final layers are 2 fully connected layers and a layernorm. To my understanding, new classes to a network can be added by freezing the model except for the final layer(s), increase the output dimension and fine-tune only this part of the network. Is this a reasonable approach? If yes, do you usually take the weights for these layers and just increase the size of the weight matrix with some random weights (or just try both, and see which one gives better results)?
Can I add a new output class to a decoder and train only the final layer?
CC BY-SA 4.0
null
2023-06-02T14:25:31.930
2023-06-02T14:33:58.510
null
null
33095
[ "classification", "transformer", "transfer-learning", "finetuning" ]
121926
2
null
121925
0
null
Yes, we call this [transfer learning or fine tuning](https://www.tensorflow.org/tutorials/images/transfer_learning).
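As a rough PyTorch sketch of what that could look like for the final classification layer (the layer sizes are placeholders, not your actual decoder): build a new output layer with one extra unit, copy the old weights into it, leave the new row randomly initialised, freeze the rest of the model, and fine-tune only the head.
```
# Rough sketch; hidden_dim and old_num_classes are placeholders.
import torch
import torch.nn as nn

hidden_dim, old_num_classes = 512, 10
old_head = nn.Linear(hidden_dim, old_num_classes)          # the trained output layer

# New head with one extra class; old rows copied, extra row stays randomly initialised.
new_head = nn.Linear(hidden_dim, old_num_classes + 1)
with torch.no_grad():
    new_head.weight[:old_num_classes] = old_head.weight
    new_head.bias[:old_num_classes] = old_head.bias

# Freeze the backbone (here just indicated; 'model' would be your frozen decoder/encoder)
# for p in model.parameters():
#     p.requires_grad = False

# Fine-tune only the new head.
optimizer = torch.optim.Adam(new_head.parameters(), lr=1e-4)
```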
null
CC BY-SA 4.0
null
2023-06-02T14:33:58.510
2023-06-02T14:33:58.510
null
null
113067
null
121927
2
null
121895
0
null
If you want to compare your different models it is essential to have appropriate evaluation techniques and to perform the same method on all models to make them comparable. In your scenario, the approach you mentioned evaluating using the [k-fold cross-validation](https://databasecamp.de/en/ml/cross-validations) is definitely appropriate. However, keep in mind that it would be even better to still have a separate test set (not part of the [cross-validation](https://databasecamp.de/en/ml/cross-validations)) to have a final evaluation. A slightly optimized approach would be: - If possible, split your data set into a training, validation, and test set. - Train your three different models (linear regression with all features, linear regression with correlated features and linear regression with PCA features). - Evaluate the performance on the k-fold cross-validation of the validation set. - Evaluate the performance on the new, unseen test set. That way, you have two ways of comparing the approaches.
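A compact sketch of steps 2-3 with scikit-learn (the data and the "correlated features" subset below are placeholders, just to show the mechanics of comparing the three candidates with the same k-fold scheme):
```
# Compact sketch; X, y and the feature subsets are placeholders.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split, cross_val_score

X, y = make_regression(n_samples=500, n_features=20, noise=10, random_state=0)
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

candidates = {
    "all features": (LinearRegression(), X_trainval),
    "selected features (placeholder subset)": (LinearRegression(), X_trainval[:, :5]),
    "PCA(5) features": (make_pipeline(PCA(n_components=5), LinearRegression()), X_trainval),
}

for name, (model, X_sub) in candidates.items():
    scores = cross_val_score(model, X_sub, y_trainval, cv=5, scoring="r2")
    print(f"{name}: CV R^2 = {scores.mean():.3f} +/- {scores.std():.3f}")

# The chosen model would then get one final evaluation on the held-out test set.
```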
null
CC BY-SA 4.0
null
2023-06-02T14:34:39.003
2023-06-02T14:34:39.003
null
null
130460
null
121928
1
null
null
0
8
While trying to export data from a pandas dataframe to a CSV, we noticed that carriage returns (inserted in the middle of a line or at the end of a line) produce outputs that are different from newlines. In particular, strings that are usually wrapped in quotes in the standard CSV output when they contain a \n no longer receive the quotes if a carriage return (\r) is used instead. What is the standard practice for dealing with \n and \r when exporting from a pandas dataframe to CSV and when importing from a CSV into a dataframe? Is there a special way to set line terminators that accommodates the differences between \n and \r? ``` import pandas as pd df = pd.DataFrame(data = [[1,"Hi\rHello"], [2, "example"]], columns = ["id", "msg"]) df.to_csv("df.csv", index = False) ``` The above example produces an output without quotes. This is a problem when reading the CSV, as we will have a new line in the file that will be treated as a new observation, independent of the previous lines. The example below, using \n instead, produces an output with quotes, which does not cause this problem. ``` import pandas as pd df = pd.DataFrame(data = [[1,"Hi\nHello"], [2, "example"]], columns = ["id", "msg"]) df.to_csv("df.csv", index = False) ```
what is the best practice to handle carriage returns when exporting data from pandas dataframe to CSV?
CC BY-SA 4.0
null
2023-06-02T14:46:09.397
2023-06-02T15:09:39.127
2023-06-02T15:09:39.127
29169
150437
[ "pandas", "dataframe", "data-formats" ]
121929
2
null
90312
0
null
First of all, the paper only implies that under certain conditions, neural networks and kernel functions exhibit an equivalence. This is because the [neural networks](https://databasecamp.de/en/ml/artificial-neural-networks) with their hidden layers converge during the training process to some optimal weight values for each [neuron](https://databasecamp.de/en/ml/perceptrons). These values align with a kernel matrix which means that their representations can be approximated by a kernel function. Take for example [support vector machines (SVMs)](https://databasecamp.de/en/ml/svm-explained) where the boundary between classes is determined by the similarity/kernel function between the inputs. A [neural network](https://databasecamp.de/en/ml/artificial-neural-networks) would learn this through complex non-linear transformations using their [activation function](https://databasecamp.de/en/ml/softmax-function).
null
CC BY-SA 4.0
null
2023-06-02T14:48:47.507
2023-06-02T14:48:47.507
null
null
130460
null
121930
1
null
null
1
18
Why does my model show metrics like this? While my model was training, recall and precision were equal to zero. I am trying to do binary classification of mushrooms [edible, poisonous]. I have a CNN model with some dropout and batch normalization. The dataset has ~7000 images, with an equal ratio of both classes. Python version is 3.10 Tensorflow version is 2.12 [](https://i.stack.imgur.com/7Qe0x.png)
precision and recall is zero
CC BY-SA 4.0
null
2023-06-02T14:57:44.823
2023-06-02T15:23:19.183
null
null
150438
[ "tensorflow", "convolutional-neural-network", "binary-classification" ]
121931
2
null
121930
0
null
It seems like your model is not generalizing properly. During training, its performance on the training set increases as it should, but it fails to capture general properties in your data that would make it able to predict the data in your validation dataset. It also seems to classify almost everything in your validation dataset into one of the classes, e.g. "everything is edible". This can have various reasons; the most obvious are that your model is not suited for this task or that you don't have enough data. See Wikipedia for a more in-depth explanation: [https://en.wikipedia.org/wiki/Overfitting](https://en.wikipedia.org/wiki/Overfitting)
null
CC BY-SA 4.0
null
2023-06-02T15:23:19.183
2023-06-02T15:23:19.183
null
null
141828
null
121932
2
null
121921
0
null
Unfortunately, we cannot really find a suitable reasoning, because the process is faulty. This is a common misconception in imbalanced data, however. Resampling methods should only be applied to the training partition, the test set must remain untouched and unseen until final validation. [Take a look at this paper](https://miriamspsantos.github.io/pdf-files/IEEE-CIM-Version.pdf), it explains the issue thoroughly and inclusively evaluates the effects of doing the split before and after (including using SMOTE-ENN).[](https://i.stack.imgur.com/yBC7i.png)
null
CC BY-SA 4.0
null
2023-06-02T15:29:26.997
2023-06-02T15:29:26.997
null
null
150440
null
121933
1
null
null
0
13
I'm currently learning Convolutional Neural Networks and am stuck on trying to figure out how to compute gradients in a layer that uses transposed convolution. Also, how do I calculate the gradients if I use padding=1 and stride=2? Thanks to this article "https://hideyukiinada.github.io/cnn_backprop_strides2.html" I was able to figure out how to calculate gradients in normal convolution and all that remains for me is to figure out how to calculate them in transposed convolution.
How to backpropagate transposed convolution?
CC BY-SA 4.0
null
2023-06-02T15:37:44.290
2023-06-02T15:37:44.290
null
null
150441
[ "cnn", "backpropagation" ]
121935
1
null
null
0
12
Jensen Shanon divergence is mainly used to determine the divergence between two probability distributions. Can it be used to calculate the difference between the deep feature vectors of two images? I have used the image deep feature vectors directly as a probability distribution to check the similarity between two images and the results are good. Next, I also normalized the feature vectors by dividing them by the sum of the vector, to convert them to some form of probability distribution. The results were the same as earlier. But I am unable to get the intuition behind the "good" performance. Can JSD be used as a comparison metric of deep features?
Intuition behind Jensen Shannon divergence between deep features of two images?
CC BY-SA 4.0
null
2023-06-02T17:45:57.020
2023-06-02T17:45:57.020
null
null
150445
[ "deep-learning", "probability", "distribution", "features" ]
121936
1
null
null
0
13
My eyes are on fire. I checked the documentation; there is only an option to disable all outputs, and I just need to clean up these dirty yellow cells. Thanks for any help. [](https://i.stack.imgur.com/ClMIs.png)
How to change this humorous output style in pycaret+colab?
CC BY-SA 4.0
null
2023-06-02T17:49:28.870
2023-06-02T17:49:28.870
null
null
150282
[ "colab", "pycaret" ]
121937
1
null
null
0
18
My task is to create a QA model. I give it a context and a question that it should answer. The answer is usually one word, so a very simplified input would be e.g. Context: "Max eats a banana. Now Max and Tim are going to the gym. While Max eats a banana on the way, Tim eats some chocolate." Question: "What does Max eat?" Answer: "(a) banana(.)" I use a pretrained model that predicts start and end tokens. To be more precise, it assigns a value to each of the tokens. Assuming each word is tokenized into one token, I would have 2 vectors of length ~30, representing how likely it is that the answer to the question starts or ends at token x. This is how the predictions for the start tokens could look: X = [10, 3, -4, 1, ..., 9, ...] where 10 and 9 sit at the indices of the 2 relevant occurrences of "Max". The training data is also labeled with those start and end indices (on char level, but I can easily transform it to token level). I thought of a comparison vector like Y = [10, -10, -10, -10, ..., 10, ...], basically assigning all valid start indices the value 10 and all the rest -10. This does not perform well, though. I tried many different loss functions on this setup but everything fails. Any ideas on how to get this running? Am I maybe on the completely wrong track here with what I'm doing?
Which loss function should I use if multiple values are correct?
CC BY-SA 4.0
null
2023-06-02T19:57:42.090
2023-06-02T19:57:42.090
null
null
150447
[ "machine-learning", "classification", "nlp", "loss-function", "model-evaluations" ]
121938
1
null
null
0
7
I'm working with a really high-cardinality feature as one of the inputs to my model, and I'm using a hash-encoded feature embedding rather than one-hot encoding. However, this method ignores the frequency of each category in each sample. As an analogy: imagine representing a document as an embedding of a list of topics. Each topic has a score representing how strongly it appears in the document. The possible list of topics is large. An option for a smaller cardinality would be a multi-hot list where each index represents a category and the value represents a score: ``` [0 0 0.1 0 0.2 0 ...] 0 1 2 3 4 5 ... ``` However, the dataset has a really high cardinality (~100M). I'm looking for ways to use a list of both categories and corresponding values to create a low-dimensional embedding.
High-Cardinality Categorical Feature with frequency score
CC BY-SA 4.0
null
2023-06-02T20:38:56.530
2023-06-02T23:43:43.237
2023-06-02T23:43:43.237
13386
13386
[ "word-embeddings", "embeddings" ]
121939
1
null
null
0
4
I am trying to figure out the best approach for my prediction task. I have a dataset with four variables: year ranging from 2010 to 2022, categorical variables $A$ and $B$, and numeric target variable T. I have numeric data that describes each category in $A$ and $B$, and can be used as embeddings for these instead of the raw categories. Not all categories in $A$ and $B$ occur every year, in fact most combinations occur over only one to two years. The average of my target $T$ seems to show a strong increasing trend. The goal of my problem is to predict target $T$ for future years for a new data sample. The question is: how can I capture the global trends in $T$ over time while predicting using $A$ and $B$? Time agnostic models like random forests and boosting would capture the dependencies between $A$,$B$ and $T$ but are not known to capture time trends well. On the other hand, since most $A$x$B$ combinations have data for only one year, I am not sure how I would use time sequence based methods like ARIMA or LSTM. What approach should I take to my problem? Any help would be greatly appreciated! PS: My test set may contain unseen categories for A and B, so use of the numeric embeddings is a must. (Cross posted from stats.stackexchange :) )
Trying to capture global time dependences for prediction with few time steps
CC BY-SA 4.0
null
2023-06-02T20:50:46.387
2023-06-02T20:50:46.387
null
null
84451
[ "time-series", "regression", "forecasting" ]
121940
1
null
null
0
4
I'm working on implementing a Gaussian Mixture Model (GMM) for three-way data (i.e., a set of matrices) in R. The GMM is being estimated using the Expectation-Maximization (EM) algorithm. However, I'm encountering an issue during the Expectation (E) step of the algorithm. In the E step, I'm calculating the posterior probabilities (responsibilities) for each data matrix belonging to each component of the mixture model. These are calculated as the product of the matrix-normal density of the data matrix given the parameters of the component and the mixing weight of the component, divided by the sum of these products over all components. The issue is that when I sum the posterior probabilities over all components for each data matrix (using colSums(comp.post)), I'm not getting a result of 1 for all data matrices, as I would expect. Here's the relevant portion of the code: ``` #' Expectation Step of the EM Algorithm for Gaussian Mixture Model #' with three-way data (a 3D array where each matrix is a data point). #' Calculate the posterior probabilities (soft labels) that each component #' has to each data matrix using matrix-normal distribution. #' #' @param Y Three-dimensional array containing the data, where each matrix represents a data point. #' @param M Array containing the mean matrix of each component, where each matrix represents a component. #' @param Phi List containing the row covariance matrix of each component. #' @param Omega List containing the column covariance matrix of each component. #' @param alpha Vector containing the mixing weights of each component. #' @return Named list containing the loglik and posterior.df for each data matrix e_step <- function(Y, M, Phi, Omega, alpha) { # Number of components in the mixture n_clusters <- length(alpha) # Number of matrices in the dataset n_matrices <- dim(Y)[3] # Calculate the log of the matrix-normal density for each matrix in Y for each component # and add the log of the mixing weight of the component log_comp.prod <- array(dim = c(n_clusters, n_matrices)) for (i in 1:n_clusters) { for (j in 1:n_matrices) { log_comp.prod[i, j] <- matrixNormal::dmatnorm(Y[,,j], M[,,i], Phi[,,i], Omega[,,i]) + log(alpha[i]) } } # Subtract the max to avoid overflow when exponentiating log_comp.prod <- log_comp.prod - max(log_comp.prod) # Calculate the log of the total density for each matrix in Y log_sum.of.comps <- log(colSums(exp(log_comp.prod))) # Subtract the log of the total density from the log of the density for each component # to get the log of the posterior probabilities (responsibilities) log_comp.post <- log_comp.prod - log_sum.of.comps # Exponentiate to get back to the original scale comp.post <- exp(log_comp.post) # Calculate the log-likelihood as the sum of the log of the total densities loglik <- sum(log_sum.of.comps) return(list("loglik" = loglik, "posterior.df" = comp.post)) } ``` Here what I got: ``` colSums(comp.post) [1] 3.739240e+01 6.498986e-02 9.844537e+00 3.475485e+00 9.047767e+03 2.196267e-01 [7] 2.079427e+01 1.165800e+02 1.405744e-01 1.353819e+01 2.433372e+00 2.051357e+00 [13] 4.472772e+00 3.597247e-02 3.210629e-01 8.761967e-01 4.396359e+01 3.265571e+02 [19] 1.247715e+02 3.616610e-02 4.361902e-01 2.035783e-02 5.585075e+01 3.328536e+00 [25] 5.880054e+00 2.166311e-01 5.388875e+02 3.931191e+01 1.642435e+00 5.129309e-01 ``` I've tried different ways to compute the matrix-normal density, including using the matrixNormal::dmatnorm() function with log = FALSE and log = TRUE, and writing a custom function to compute the density. 
However, none of these approaches have resolved the issue. I use this formula for my calculations: [](https://i.stack.imgur.com/UFooN.png) I'm not sure what's causing this issue or how to fix it. Any insights or suggestions would be greatly appreciated. Thank you!
Expectation Step in Gaussian Mixture Model for Matrix Data Not Producing Proper Posterior Probabilities
CC BY-SA 4.0
null
2023-06-02T20:56:46.153
2023-06-02T20:56:46.153
null
null
148053
[ "machine-learning", "r", "clustering", "gaussian", "gmm" ]
121941
1
null
null
0
6
I am doing some time series prediction. I am using the historical data and temperature to predict the energy for the next three days. The way I create the training data is the same as for an LSTM, i.e. I'm using a sliding window through the time series data. Overall, the prediction is not bad. But at the first hours of the day the prediction is not as good as at the other hours of the day. I provide a photo of the prediction and the real value, and I also mark the time I'm talking about. Maybe my question is dumb, but do you have any idea how I can improve this error at the first hours of the day? Could any other features play a role here in solving this? [](https://i.stack.imgur.com/YZylC.png)
Time series prediction is not working at the first step of test data
CC BY-SA 4.0
null
2023-06-02T21:06:30.953
2023-06-02T21:06:30.953
null
null
91060
[ "machine-learning", "deep-learning", "time-series" ]
121942
1
null
null
0
3
I am using LSTM to classify the origin of people's names. The input data is not balanced over target classes, so I used oversampling to balance it. [](https://i.stack.imgur.com/DUvg6.png) Now, I defined a simple LSTM model as follows: ``` LSTM_with_Embedding( (embedding): Embedding(32, 10, padding_idx=31) (lstm): LSTM(10, 32, batch_first=True) (linear): Linear(in_features=32, out_features=18, bias=True) ) ``` First, I trained this model using the unbalanced data, and got the following results: [](https://i.stack.imgur.com/oQayl.png) Clearly, I am in the overfitting zone. Before I fix this overfitting, I used the same model, but now trained with the balanced training dataset, I kept the test-set unbalanced only to represent the actual world. And the performance is: [](https://i.stack.imgur.com/DQg86.png) How can the performance be so good? Why not overfit now?
LSTM, seq to classification, why training on balanced data set yields such a good result?
CC BY-SA 4.0
null
2023-06-02T22:27:30.977
2023-06-02T22:27:30.977
null
null
124968
[ "neural-network", "lstm", "overfitting", "oversampling" ]
121943
1
null
null
0
21
I trained NLP models. This is a subset (200 instances) of my data set of 10,000 instances:[This the link of the dataset on pastebin](https://pastebin.com/FThmWXeE) I compare an LSTM model with a glove model and a BERT model. I expected a good performance with BERT. I can't get past 20% accuracy with BERT at all. I wonder what I'm missing in its implementation. ``` !pip list #tensorflow 2.12.0 !python --version #python 3.10.11 import json import tensorflow as tf import numpy as np from hyperopt import Trials, STATUS_OK, tpe from sklearn.model_selection import train_test_split from keras.layers import Input from sklearn.metrics import accuracy_score import pandas as pd # Reading of file f = open ('sampled_data.json', "r") data = json.loads(f.read()) ``` # Data preprocessing ``` X=[x["title"].lower() for x in data] y=[x["categories"][0].lower() for x in data] X_train,X_test,y_train,y_test= train_test_split(X,y, test_size=0.2, random_state=42) ``` ### target preprocessing. To consider the category unknown if not seen in test set ``` cat_to_id={'<UNK>':'0'} for cat in y_train: if cat not in cat_to_id: cat_to_id[cat]=len(cat_to_id) #MAPPING WITH RESPECT TO THE TRAINING SET id_to_cat={v:k for k,v in cat_to_id.items()} def preprocess_Y(Y,cat_to_id): res=[] for ex in Y: if ex not in cat_to_id.keys(): res.append(cat_to_id['<UNK>']) else: res.append(cat_to_id[ex]) return np.array(res) y_train_id=preprocess_Y(y_train,cat_to_id) y_test_id=preprocess_Y(y_test,cat_to_id) y_test_id=y_test_id.astype(float) # Tokenization of of features tokenizer=tf.keras.preprocessing.text.Tokenizer(num_words=10000) tokenizer.fit_on_texts(X_train) # TEXT TO SEQUENCE X_train_seq=tokenizer.texts_to_sequences(X_train) X_test_seq=tokenizer.texts_to_sequences(X_test) #PADDING pad_sequences function transform in array max_len=max([len(length) for length in X_train_seq]) X_train_pad= tf.keras.preprocessing.sequence.pad_sequences(X_train_seq,maxlen=max_len, truncating='post') X_test_pad= tf.keras.preprocessing.sequence.pad_sequences(X_test_seq,maxlen=max_len, truncating='post') ####### RECCURRENT NEURAL NETWORK############### vocab_size=len(tokenizer.word_index) Embed_dim=300 dropout=0.2 dense_size=128 num_cat=len(cat_to_id) batch_size=16 epochs=15 ### CREER LE MODELE model_rnn=tf.keras.models.Sequential() # Add an embedding layer model_rnn.add(tf.keras.layers.Embedding(input_dim=vocab_size, output_dim=Embed_dim, input_length=max_len)) # Add an LSTM layer model_rnn.add(tf.keras.layers.LSTM(units=128)) model_rnn.add(tf.keras.layers.Dropout(0.4)) # Dense + activation model_rnn.add(tf.keras.layers.Dense(units=dense_size,activation='relu')) #Classifieur + activation model_rnn.add(tf.keras.layers.Dense(units=num_cat,activation='softmax')) print(model_rnn.summary()) model_rnn.compile(loss= 'sparse_categorical_crossentropy', optimizer='adam', metrics='accuracy') model_rnn.fit(X_train_pad,y_train_id, batch_size=batch_size, epochs=epochs) model_rnn.evaluate(X_test_pad, y_test_id) Epoch 1/15 9/9 [==============================] - 5s 166ms/step - loss: 3.6699 - accuracy: 0.1643 Epoch 2/15 9/9 [==============================] - 1s 128ms/step - loss: 3.3861 - accuracy: 0.2286 Epoch 3/15 9/9 [==============================] - 1s 157ms/step - loss: 3.1313 - accuracy: 0.2357 Epoch 4/15 9/9 [==============================] - 1s 88ms/step - loss: 3.0774 - accuracy: 0.2286 Epoch 5/15 9/9 [==============================] - 1s 127ms/step - loss: 3.0358 - accuracy: 0.2286 Epoch 6/15 9/9 [==============================] - 0s 27ms/step - loss: 2.9461 - 
accuracy: 0.2286 Epoch 7/15 9/9 [==============================] - 0s 27ms/step - loss: 2.7970 - accuracy: 0.2357 Epoch 8/15 9/9 [==============================] - 1s 75ms/step - loss: 2.5048 - accuracy: 0.2429 Epoch 9/15 9/9 [==============================] - 1s 86ms/step - loss: 2.2543 - accuracy: 0.3357 Epoch 10/15 9/9 [==============================] - 1s 47ms/step - loss: 1.9985 - accuracy: 0.4357 Epoch 11/15 9/9 [==============================] - 0s 39ms/step - loss: 1.7728 - accuracy: 0.4929 Epoch 12/15 9/9 [==============================] - 1s 41ms/step - loss: 1.5552 - accuracy: 0.5929 Epoch 13/15 9/9 [==============================] - 0s 11ms/step - loss: 1.3320 - accuracy: 0.5929 Epoch 14/15 9/9 [==============================] - 0s 11ms/step - loss: 1.1506 - accuracy: 0.6786 Epoch 15/15 9/9 [==============================] - 0s 42ms/step - loss: 0.9498 - accuracy: 0.7714 2/2 [==============================] - 1s 13ms/step - loss: 6.6335 - accuracy: 0.2000 ############# MODEL WITH GLOVE################### embeddings_index = {} f = open('glove.6B.300d.txt', encoding='utf-8') for line in f: values = line.split() word = values[0] coefs = np.asarray(values[1:], dtype='float32') embeddings_index[word] = coefs f.close() # Create embedding matrix word_index=tokenizer.word_index num_words = len(word_index) + 1 embedding_dim = 300 embedding_matrix = np.zeros((num_words, embedding_dim)) for word, i in word_index.items(): embedding_vector = embeddings_index.get(word) if embedding_vector is not None: embedding_matrix[i] = embedding_vector num_words=len(tokenizer.word_index)+1 embedding_dim=300 max_len=max([len(length) for length in X_train_seq]) dense_size=128 num_cat=len(cat_to_id) batch_size=16 epochs=7 num_classes=len(cat_to_id) # Create the model model_glove = tf.keras.models.Sequential() model_glove.add(tf.keras.layers.Embedding(input_dim=num_words, output_dim=embedding_dim, input_length=max_len, weights=[embedding_matrix] )) #model_glove.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(units=128))) model_glove.add(tf.keras.layers.LSTM(units=128)) model_rnn.add(tf.keras.layers.Dropout(0.2)) model_glove.add(tf.keras.layers.Dense(num_classes, activation='softmax')) # Compile the model model_glove.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy']) # Train the model model_glove.fit(X_train_pad,y_train_id , epochs=epochs, batch_size=batch_size) model_glove.evaluate(X_test_pad, y_test_id) Epoch 1/7 9/9 [==============================] - 4s 169ms/step - loss: 3.5065 - accuracy: 0.1714 Epoch 2/7 9/9 [==============================] - 1s 148ms/step - loss: 2.9357 - accuracy: 0.2357 Epoch 3/7 9/9 [==============================] - 1s 152ms/step - loss: 2.5611 - accuracy: 0.2929 Epoch 4/7 9/9 [==============================] - 1s 108ms/step - loss: 2.1017 - accuracy: 0.4286 Epoch 5/7 9/9 [==============================] - 1s 116ms/step - loss: 1.5988 - accuracy: 0.6071 Epoch 6/7 9/9 [==============================] - 1s 88ms/step - loss: 1.0982 - accuracy: 0.7571 Epoch 7/7 9/9 [==============================] - 1s 67ms/step - loss: 0.7189 - accuracy: 0.8786 2/2 [==============================] - 1s 11ms/step - loss: 3.7847 - accuracy: 0.1833 ########### MODEL WITH BERT################## pip install tensorflow keras transformers from transformers import BertTokenizer tokenizer = BertTokenizer.from_pretrained('bert-base-uncased') # from tensorflow.keras.preprocessing.sequence import pad_sequences max_sequence_length=100 # Tokenization and adding special 
toens X_train_encoded = [tokenizer.encode(X_train, add_special_tokens=True) for text in X_train] # Padding input_ids = pad_sequences(X_train_encoded, maxlen=max_sequence_length, padding='post', truncating='post') num_classes=len(cat_to_id) inputs = tf.keras.Input(shape=(max_sequence_length,), dtype=tf.int32) import tensorflow as tf from transformers import BertTokenizer, TFBertModel from tensorflow.keras.layers import Dense from tensorflow.keras.preprocessing.sequence import pad_sequences # Define and compile the model bert_model = TFBertModel.from_pretrained('bert-base-uncased') inputs = tf.keras.Input(shape=(max_sequence_length,), dtype=tf.int32) outputs = bert_model(inputs)[1] outputs = Dense(num_classes, activation='softmax')(outputs) model = tf.keras.Model(inputs=inputs, outputs=outputs) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # Train the model model.fit(x=input_ids, y=y_train_id, epochs=20, batch_size=64) # For prediction, preprocess the input in the same way tokenized_inputs_test = [tokenizer.tokenize(text) for text in X_test] input_ids_test = [tokenizer.convert_tokens_to_ids(tokens) for tokens in tokenized_inputs_test] input_ids_test = pad_sequences(input_ids_test, maxlen=max_sequence_length, padding='post', truncating='post') # Evaluate the model loss, accuracy = model.evaluate(x=input_ids_test, y=y_test_id) Epoch 1/20 3/3 [==============================] - 3s 947ms/step - loss: 3.2514 - accuracy: 0.2062 Epoch 2/20 3/3 [==============================] - 3s 953ms/step - loss: 3.2550 - accuracy: 0.2062 Epoch 3/20 3/3 [==============================] - 3s 950ms/step - loss: 3.2695 - accuracy: 0.2062 Epoch 4/20 3/3 [==============================] - 3s 957ms/step - loss: 3.2598 - accuracy: 0.2062 Epoch 5/20 3/3 [==============================] - 3s 958ms/step - loss: 3.2604 - accuracy: 0.2062 Epoch 6/20 3/3 [==============================] - 3s 953ms/step - loss: 3.2649 - accuracy: 0.2062 Epoch 7/20 3/3 [==============================] - 3s 948ms/step - loss: 3.2507 - accuracy: 0.2062 Epoch 8/20 3/3 [==============================] - 3s 940ms/step - loss: 3.2564 - accuracy: 0.2062 Epoch 9/20 3/3 [==============================] - 3s 932ms/step - loss: 3.2727 - accuracy: 0.2062 Epoch 10/20 3/3 [==============================] - 3s 944ms/step - loss: 3.2611 - accuracy: 0.2062 Epoch 11/20 3/3 [==============================] - 3s 930ms/step - loss: 3.2527 - accuracy: 0.2062 Epoch 12/20 3/3 [==============================] - 3s 923ms/step - loss: 3.2578 - accuracy: 0.2062 Epoch 13/20 3/3 [==============================] - 3s 921ms/step - loss: 3.2626 - accuracy: 0.2062 Epoch 14/20 3/3 [==============================] - 3s 935ms/step - loss: 3.2546 - accuracy: 0.2062 Epoch 15/20 3/3 [==============================] - 3s 922ms/step - loss: 3.2617 - accuracy: 0.2062 Epoch 16/20 3/3 [==============================] - 3s 918ms/step - loss: 3.2577 - accuracy: 0.2062 Epoch 17/20 3/3 [==============================] - 3s 922ms/step - loss: 3.2602 - accuracy: 0.2062 Epoch 18/20 3/3 [==============================] - 3s 921ms/step - loss: 3.2617 - accuracy: 0.2062 Epoch 19/20 3/3 [==============================] - 3s 929ms/step - loss: 3.2513 - accuracy: 0.2062 Epoch 20/20 3/3 [==============================] - 3s 919ms/step - loss: 3.2497 - accuracy: 0.2062 ```
I can't get good performance from BERT
CC BY-SA 4.0
null
2023-06-02T23:09:53.687
2023-06-02T23:09:53.687
null
null
110309
[ "keras", "nlp", "word-embeddings", "bert", "recurrent-neural-network" ]
121944
1
null
null
0
7
I am watching a tutorial on using mel spectrograms to classify an audio file's genre via a CNN. My question is: why apply local min-max normalization to each individual mel spectrogram? What I mean by local is that the min and max values are calculated from the individual mel spectrogram and then min-max normalization is applied; thus, you have to get the min and max for each mel spectrogram and apply min-max normalization based on its own min and max. Why apply this local min-max normalization instead of taking the whole training set's min and max first and then applying the normalization? Also, why not do standardization (z-score normalization)?
Why apply min-max normalization to each individual mel spectrogram for a training set?
CC BY-SA 4.0
null
2023-06-02T23:15:48.627
2023-06-02T23:17:07.103
2023-06-02T23:17:07.103
150455
150455
[ "deep-learning", "classification", "cnn", "normalization" ]
121945
1
null
null
0
2
In the video [Training Latent Dirichlet Allocation: Gibbs Sampling](https://www.youtube.com/watch?v=BaM1uiCpj_E&t=452s), the Goal section at 7:32, the video says that: > Goal: Color each word with blue, green, red Each article is as monochromatic as possible Each word is as monochromatic as possible ![1](https://i.stack.imgur.com/BfTJjm.png) I wonder if this is correct for LDA only, or for all topic models in general?
Is coloring each document and word as monochromatically as possible the goal of LDA specifically, or of all topic models in general?
CC BY-SA 4.0
null
2023-06-03T04:19:49.010
2023-06-03T04:31:21.090
2023-06-03T04:31:21.090
119882
119882
[ "topic-model", "lda" ]
121946
1
null
null
0
9
Definition: I have conducted research on EEG signal classification, specifically focusing on distinguishing between two different classes using raw EEG signals. Data availability poses a significant challenge in the EEG domain, which necessitates the implementation of data augmentation techniques. In my case, I have applied additive Gaussian noise with zero mean and varying standard deviations (σ∈{0.1,0.01,0.001}) to the raw EEG signals for data augmentation. Additionally, I have considered the magnification factor (m∈{1,2,3}) for the additive noise. By augmenting my training data using different combinations of m and σ, I have observed an improvement in test set accuracy in most cases. Question: Considering the training data as X_train, the augmented data as X_train_aug, and the test data as X_test, I would like to determine if there exists a mathematical relationship between (X_train, X_test) and (X_train_aug, X_test) that can explain the observed improvement. Are there any criteria available for measuring the relationship between these variables that can help elucidate the results? Thanks in advance.
Investigating the Impact of Additive Gaussian Noise on EEG Signal Classification: Analyzing the Relationship between Augmented and Original Data
CC BY-SA 4.0
null
2023-06-03T08:08:25.830
2023-06-03T12:10:35.443
null
null
145713
[ "deep-learning", "time-series", "data-augmentation", "gaussian", "noise" ]
121947
2
null
121909
0
null
Unique IDs (e.g. a match ID, when each record is a match) are unneccessary for either data splitting or model creation, so there's no harm removing them. There may be a benefit in removing them, to help prevent a learning algorithm "detecting" a spurious correlation. For non-unique IDs (e.g. ones you have added to facilitate data source joins), if you include both the ID variable for a source and the attributes from that source, there will be a high degree of correlation between the key and that group of attributes. This could cause problems if you use a learning algorithm that assumes independence between the variables, such as linear/logistic regression. As for keeping the ID variables for data splitting - you haven't included links to the posts you refer to so I'm not sure what point they are trying to make. Just including the ID variables in your data when creating the train/test splits isn't going to achieve anything, so I assume they are discussing using ID variables as part of the splitting strategy. Two possible strategies are stratified splitting (where the proportion of records with each value of the stratification variable is the same in the training and test sets) and group splitting (where records with the same value of the grouping variable are either all in the test set or all in the training set). If you want to use one of these splitting strategies you may want to use one of your ID variables for this. However, I suspect in most cases there would be a "real" attribute you could use instead. This would work just as well and would have the advantage of making it easier to explain, if you later need to explain your splitting strategy.
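For reference, both splitting strategies are available in scikit-learn; here is a short sketch (the column names are illustrative, not taken from the asker's data):
```
# Short sketch; 'match_id' and 'nationality_home' are illustrative column names.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split, GroupShuffleSplit

df = pd.DataFrame({
    "match_id": np.arange(100),
    "nationality_home": np.random.default_rng(0).choice(["A", "B", "C"], size=100),
    "avg_speed_home": np.random.default_rng(1).normal(size=100),
    "outcome": np.random.default_rng(2).integers(0, 2, size=100),
})

# Stratified split: the proportions of the stratification variable are preserved.
train_df, test_df = train_test_split(
    df, test_size=0.2, stratify=df["outcome"], random_state=0
)

# Group split: all rows sharing a group value land on the same side of the split.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(df, groups=df["nationality_home"]))

# Either way, ID columns can simply be dropped before fitting the model:
X_train = train_df.drop(columns=["match_id", "nationality_home", "outcome"])
```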
null
CC BY-SA 4.0
null
2023-06-03T08:42:39.590
2023-06-03T08:42:39.590
null
null
135707
null
121948
2
null
121859
0
null
If you are using Conv1D, you will need a data structure that can be mapped onto a 3D tensor (such as a 3D numpy array). If you have some non-spatiotemporal variables that you want to process using dense layers, you will need these to be in a 2D structure. Say you have n simulations, each with m time steps, then you need to have two data structures. One will be an $n\times 1$ structure that holds the initial conditions. The other will be an $n\times m\times 9$ structure (assuming just the 9 spatiotemporal variables shown in your example) that holds the spatiotemporal data. You can't use the Sequential API when you have multiple inputs, so you'll need to use the [Functional API](https://keras.io/guides/functional_api/). Here's a minimal example of using the functional API to create a model with two inputs: ``` import tensorflow as tf import tensorflow.keras as keras from IPython.display import Image n = 10 # Simulations m = 5 # Time steps v = 9 # Spatiotemporal variables i = 1 # Initial conditions variables # Build the spatiotemporal branch input1 = keras.Input((m, v), name="Conv_Input") x1 = keras.layers.Conv1D(filters=8, kernel_size=3)(input1) # Build the "initial conditions" branch input2 = keras.Input(i, name="Dense_Input") x2 = keras.layers.Dense(units=8)(input2) # Merge the two branches x1 = keras.layers.Flatten()(x1) full = keras.layers.Concatenate()((x1, x2)) full = keras.layers.Dense(units=1, activation='sigmoid')(full) # Build the full model and display the model structure model = keras.Model(inputs=(input1, input2), outputs=full) keras.utils.plot_model(model, show_shapes=True, show_layer_names=True, to_file='model.png') Image('model.png') ``` This code produces the following model: [](https://i.stack.imgur.com/jBSPv.png)
null
CC BY-SA 4.0
null
2023-06-03T10:12:12.507
2023-06-03T10:12:12.507
null
null
135707
null
121949
2
null
121946
0
null
Well, the improvement you observe may simply be attributed to having more training data, allowing you to better exploit the capacity of your model: meaning better accuracy and wider generalization. Another (or additional) option is that the added noise regularizes your model, making it more robust (and better on the test set). If you increase the noise too much, you should observe decreasing performance. Also, Gaussian noise has the nice property of being zero-centered (i.e. its mean is zero), so it cancels out on average.
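If it helps to reason about the setup, the augmentation itself boils down to something like the following sketch (shapes and names are made up; X_train stands in for your raw EEG trials):
```
# Sketch with made-up shapes: X_train is (n_trials, n_channels, n_samples).
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 32, 256))    # placeholder EEG-like data
y_train = rng.integers(0, 2, size=100)

def augment_with_noise(X, y, sigma=0.01, m=2, rng=rng):
    """Return X plus m noisy copies per trial (zero-mean Gaussian, std sigma)."""
    noisy = [X + rng.normal(scale=sigma, size=X.shape) for _ in range(m)]
    X_aug = np.concatenate([X] + noisy, axis=0)
    y_aug = np.concatenate([y] * (m + 1), axis=0)
    return X_aug, y_aug

X_aug, y_aug = augment_with_noise(X_train, y_train, sigma=0.01, m=2)
print(X_aug.shape, y_aug.shape)   # (300, 32, 256) (300,)
```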
null
CC BY-SA 4.0
null
2023-06-03T12:10:35.443
2023-06-03T12:10:35.443
null
null
150390
null
121950
1
null
null
0
9
I'm training an LSTM model. I'm confused about the validation loss of the model. Which value better represents the validation loss of the model? Is it the last value I obtain in the following loop, or should I calculate the mean of all the history values? This is my model ``` for epoch in range(n_epochs): lstm.train() outputs = lstm.forward(X_train) # forward pass optimiser.zero_grad() # calculate the gradient, manually setting to 0 # obtain the loss function loss = loss_fn(outputs, y_train) loss.backward() # calculates the loss of the loss function optimiser.step() # improve from loss, i.e backprop train_loss_history.append(loss.item()) # test loss lstm.eval() test_preds = lstm(X_test) MSEtest_loss = loss_fn(test_preds, y_test) val_loss_history.append(MSEtest_loss.item()) if epoch % 100 == 0: print("Epoch: %d, train loss: %1.5f, val MSE loss: %1.5f " % (epoch, loss.item(), MSEtest_loss.item(), )) ``` Now, does the last value of `MSEtest_loss.item()` represent the validation loss of the model, or should I use `val_loss_history` to represent the validation loss of the model?
How to select the validation loss value in this model to be compared with other models?
CC BY-SA 4.0
null
2023-06-03T13:18:58.793
2023-06-03T13:18:58.793
null
null
150465
[ "lstm", "loss-function", "validation", "mse" ]
121951
2
null
121793
1
null
[Based on this answer's algorithm](https://softwareengineering.stackexchange.com/questions/445842/how-to-remove-the-hotspots-from-given-image-by-using-python-and-opencv/445849#445849) I have written my code which is working as intended. Here's a breakdown of the coding steps: - Otsu's thresholding is applied to the grayscale image1 using cv2.threshold() with the cv2.THRESH_BINARY + cv2.THRESH_OTSU flag to obtain image2. - Erosion is performed on image2 using cv2.erode() with a square kernel to obtain image3. - The threshold distance K is defined. - A circular mask is created using cv2.getStructuringElement() with cv2.MORPH_ELLIPSE shape and dimensions (2 * K, 2 * K). - Nested loops iterate over the pixels of image1. - If the pixel value in image2 is greater than 0 (indicating a hot-spot), the neighborhood around the current pixel in image3 is extracted. - If the sum of pixel values in the neighborhood is greater than 0, indicating the presence of an illuminated pixel in image3 within the K distance, the corresponding pixel in the final image is set to 0 (black). - The final image is displayed using cv2.imshow(). Note that in this implementation, a square region with side 2 * K is used instead of a circular mask. The choice between a circular or square region can be adjusted by modifying the mask variable in the code. Feel free to adjust the threshold distance K and experiment with different shapes and sizes of the neighborhood mask to suit your specific requirements. ``` import cv2 import numpy as np # Load the image image1 = cv2.imread('orange.jpg', cv2.IMREAD_GRAYSCALE) original_image = image1 # Otsu's thresholding _, image2 = cv2.threshold(image1, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU) # Erosion kernel = np.ones((5, 5), np.uint8) image3 = cv2.erode(image2, kernel, iterations=1) # Define the threshold distance K K = 2 # Create the circular mask mask = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * K, 2 * K)) # Iterate over image1 pixels and generate the final image final_image = np.copy(image1) for y in range(image1.shape[0]): for x in range(image1.shape[1]): if image2[y, x] > 0: # Check if any illuminated pixel exists within K distance in image3 neighborhood = image3[max(y - K, 0):min(y + K + 1, image3.shape[0]), max(x - K, 0):min(x + K + 1, image3.shape[1])] if np.sum(neighborhood) > 0: final_image[y, x] = 0 # Display the original and final image cv2.imshow('Original', original_image) cv2.imshow('Final Image', final_image) cv2.waitKey(0) cv2.destroyAllWindows() ```
null
CC BY-SA 4.0
null
2023-06-03T14:06:33.883
2023-06-03T14:13:59.957
2023-06-03T14:13:59.957
150257
150257
null
121952
1
null
null
0
11
I'm using Python and I want to compare the output of two machine-translation (ish) systems. Most of the tools seem to be focused on sentence-by-sentence evaluation. Either I get memory blow-ups with pyter, or weird results with sacrebleu. Fundamentally I just want: `$ calculate_ter file1.txt reference.txt ` Or `>>> calculate_ter(file1_str, reference_str)` I do not really care specifically about sentence boundaries or line boundaries. I just want to know how different these two texts are.
How to evaluate machine translations of long documents?
CC BY-SA 4.0
null
2023-06-03T18:15:38.003
2023-06-03T21:39:50.570
2023-06-03T18:31:49.373
150472
150472
[ "nlp", "model-evaluations", "machine-translation" ]
121953
2
null
121952
0
null
Partial answer: I think I can implement `calculate_ter` like this:

```
from sacrebleu.metrics import TER

def evaluate_ter(hypothesis, reference):
    """Evaluate TER between a hypothesis string and a reference string."""
    # Each whole document is passed as a single segment
    ter_score = TER().corpus_score([hypothesis], [[reference]])
    return ter_score
```

But it does not scale up to documents of multiple megabytes. Is there a better implementation that does?
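A possible workaround, offered as an untested sketch rather than part of the original post: TER's alignment cost grows quickly with segment length, so splitting both documents into lines and scoring them as a corpus keeps memory bounded. This assumes the two files are line-aligned.

```python
from sacrebleu.metrics import TER

def calculate_ter(hyp_path, ref_path):
    # Read both documents as line-aligned segments so each TER alignment stays small
    with open(hyp_path, encoding="utf-8") as f:
        hyps = [line.strip() for line in f]
    with open(ref_path, encoding="utf-8") as f:
        refs = [line.strip() for line in f]
    if len(hyps) != len(refs):
        raise ValueError("hypothesis and reference must have the same number of lines")
    return TER().corpus_score(hyps, [refs]).score
```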
null
CC BY-SA 4.0
null
2023-06-03T19:02:07.257
2023-06-03T21:39:50.570
2023-06-03T21:39:50.570
150472
150472
null
121954
1
null
null
0
9
I have a dataset where I want to predict the attendance of members. I need to choose between applying regression, classification, and clustering. I'm unsure between regression and classification, and I'm ruling out clustering (please let me know if I should not). A rough overview of the dataset:

The dataset contains:
- attendance: column comprising 0 and 1.
- category: the activity members signed up for (sports, games, etc.).
- days_before: number of days in advance that members signed up for the activity.
- time: time of the event (of a specific category): AM or PM.
- weight: weight of the member.
- months_of_membership: number of months of membership for a given member.

I'm thinking of applying binomial regression. For example, this could be one model: `attendance` ~ `category` + `days_before` + `time` + `months_of_membership` + `weight`. However, I see that I can also apply classification. For example, I can build a decision tree to predict whether new sign-ups will attend. I want to know:
- What am I missing? How do I decide which ML technique to apply?
- Is there a cheat-sheet I can look up to understand when to apply which technique?
Understanding which machine learning technique I should use
CC BY-SA 4.0
null
2023-06-03T19:03:48.977
2023-06-03T20:03:44.633
null
null
150473
[ "machine-learning", "machine-learning-model" ]
121955
2
null
121954
0
null
I think you should apply classification, given that the target variable `attendance` is binary with values of `0` and `1`. Among the three machine learning tasks you mentioned (classification, regression, and clustering), there is a simple rule that tells you when to apply each one. If the target variable you are trying to predict is categorical, i.e., it can only take on a pre-defined finite set of values (as in your case: attendance can be 0 or 1, but never 0.5, 15, or "red"), then you should apply classification. If the target variable is numerical, i.e., it can take on a (theoretically) infinite set of values, for example house prices, then you should use regression. If there is no target variable at all, then you use clustering, which groups your data by similarity and is a somewhat different kind of machine learning task from the previous two. Note that, despite its name, the binomial (logistic) regression you mention is itself a classification method, so it is a reasonable choice here alongside a decision tree.
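A minimal sketch of what such a classification baseline could look like in scikit-learn, assuming the data sits in a pandas DataFrame `df` with the column names from the question (illustrative only, not part of the original answer):

```python
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder
from sklearn.tree import DecisionTreeClassifier

X = df[["category", "days_before", "time", "weight", "months_of_membership"]]
y = df["attendance"]

pre = ColumnTransformer(
    [("cat", OneHotEncoder(handle_unknown="ignore"), ["category", "time"])],
    remainder="passthrough",            # numeric columns pass through unchanged
)
model = Pipeline([("pre", pre), ("clf", DecisionTreeClassifier(max_depth=5))])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```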
null
CC BY-SA 4.0
null
2023-06-03T20:03:44.633
2023-06-03T20:03:44.633
null
null
142205
null
121956
2
null
117269
0
null
From my understanding, mAP@0.5 is the mean average precision at an IoU threshold of 0.5, not at a confidence threshold. IoU (intersection over union) determines whether a detection counts as a true or false positive, based on how much the predicted box overlaps the ground-truth box, while the confidence threshold determines which predicted boxes are kept as detections in the first place.
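A small illustrative sketch (not from the original answer) of how IoU between two axis-aligned boxes in (x1, y1, x2, y2) format is typically computed:

```python
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # ~0.143, so this pair fails a 0.5 IoU threshold
```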
null
CC BY-SA 4.0
null
2023-06-03T20:51:03.847
2023-06-03T20:51:03.847
null
null
150475
null
121957
1
null
null
0
9
My task is to create a model for question answering (QA). I have only ~200 samples from a specific domain of questions. Using a pretrained model like DeBERTa without any further changes results in F1 scores of ~35%. To improve on this, I tried to fine-tune the model on my additional data, but this very quickly results in an unusable model with scores of 1-3%. Freezing all or most layers except the last just delays the decrease. I also tried OpenDelta's BitFit, LoRA and Adapter delta models, which modify the original model in a small way and freeze all other parameters; unfortunately, this didn't help performance either. Any ideas on what else I could try?
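One common culprit with very small fine-tuning sets is an aggressive learning rate combined with too many update steps. A sketch of a more conservative Hugging Face setup that is often tried in this situation; the model name, dataset variables and hyperparameter values are illustrative assumptions, and argument names may differ slightly across transformers versions:

```python
from transformers import AutoModelForQuestionAnswering, TrainingArguments, Trainer

model = AutoModelForQuestionAnswering.from_pretrained("microsoft/deberta-v3-base")

args = TrainingArguments(
    output_dir="qa-finetune",
    learning_rate=1e-5,              # small LR so the pretrained weights are not destroyed
    num_train_epochs=2,              # ~200 samples overfit very quickly
    per_device_train_batch_size=8,
    warmup_ratio=0.1,
    weight_decay=0.01,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,     # keep the checkpoint with the best eval loss
)

# train_ds / eval_ds are placeholders for your tokenized QA datasets
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```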
Fine-Tuning / Transfer learning results in worse performance
CC BY-SA 4.0
null
2023-06-03T22:05:44.407
2023-06-03T22:05:44.407
null
null
150447
[ "machine-learning", "transfer-learning", "finetuning", "domain-adaptation" ]