Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags | Answer | SimilarQuestion | SimilarQuestionAnswer |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
114383
|
1
|
114392
| null |
1
|
315
|
Every example I come across of any kind of iterative learning with Random Forest/XGBoost/LightGBM just keeps growing the number of estimators for new batches of data by `n_tree`/`n_estimators`/`num_boost_rounds` every time `.fit()` gets applied [...]. Most of them seem to rely on iterative learning for training on very large datasets that can't be loaded into memory at once.
However, I want to implement a continuous learning pipeline (with LightGBM; Python) that takes newly available data on a daily basis in order to update an existing model (without the need to retrain on the whole [growing] dataset; stateful). The approach mentioned above would imply that my model's tree count grows indefinitely.
Is it possible to train tree-based algorithms so that the estimators (split thresholds) themselves get updated/adjusted in contrast to only adding estimators?
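To make the behaviour concrete, here is a minimal sketch (assuming LightGBM's Python API with `init_model`; the data is just random) of how continued training only adds trees:
```
import numpy as np
import lightgbm as lgb

params = {"objective": "regression", "verbosity": -1}
X1, y1 = np.random.rand(500, 10), np.random.rand(500)
X2, y2 = np.random.rand(500, 10), np.random.rand(500)  # "new" daily batch

booster = lgb.train(params, lgb.Dataset(X1, y1), num_boost_round=50)
print(booster.num_trees())  # 50

# Continued training on the new batch only ADDS trees; existing splits stay untouched
booster = lgb.train(params, lgb.Dataset(X2, y2), num_boost_round=50, init_model=booster)
print(booster.num_trees())  # 100
```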
|
Online Learning/Continual Learning for tree-based Algorithms
|
CC BY-SA 4.0
| null |
2022-09-13T10:15:54.677
|
2022-09-13T13:44:49.520
|
2022-09-13T11:26:00.360
|
126530
|
126530
|
[
"machine-learning",
"decision-trees",
"xgboost",
"lightgbm",
"online-learning"
] |
This is a really good question for which I will give you a theoretical result; in particular, I am not aware of any specific implementation in any programming language.
The concept of incremental learning with decision trees started in 1986 to enhance the ID3 learning algorithm to learn continually/incrementally (recall that ID3 deals only with categorical input features/variables); the resulting procedure is called [ID4](https://www.aaai.org/Papers/AAAI/1986/AAAI86-083.pdf).
Some years later, Utgoff et al. proposed other two approaches ([ID5](https://people.cs.umass.edu/%7Eutgoff/papers/mlj-id5r.pdf) and [ITI](https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.4.786&rep=rep1&type=pdf)) to overcome some of the shortcomings of the ID4 approach.
There is also a nice Wikipedia [page](https://en.wikipedia.org/wiki/Incremental_decision_tree) that you might read.
EDIT
There are also works on [online boosting and bagging](http://proceedings.mlr.press/r3/oza01a/oza01a.pdf), which I was not aware of either; your question taught me something new, thanks for that (+1)!
|
Distinguish Multi-Task vs Single-incremental Task in Continual Learning
|
TL;DR/Summary: The classes ($y_1$, $y_2$, $y_3$ below) in multi-task can be anything (it may be that $y_1 \cap y_2 = \emptyset$, $y_1 \cap y_3 = \emptyset$, and so on). In single-incremental-task we take the labels (and data) from a common set, i.e. $y[:2] \subseteq y[:4] \subseteq y$ by the definition of subsetting.
It is just a question of the interpretation of the definition. The definitions look very similar to each other but the devil's in the details.
Assuming that the model can distinguish the upper bound number of classes: for example, it is an ANN with $N$ neurons in the output layer and the number of classes ($k$) in the task with the most classes satisfies $k \le N$ (multi-task), or the total number of classes ($k$) satisfies $k \le N$ (single-incremental-task); we can say that:
### Multi Task
Here we will train the model on different tasks over time; this setting is sometimes loosely compared to reinforcement learning. In semi-python pseudo code (where `.train` already includes things like cross validation):
```
model = Whatever(...)
X1 = [[1, 0],
      [2, 2],
      [3, 0]]
y1 = [0, 1, 0]
model.train(X1, y1)
X2 = [[4, 4],
      [5, 5],
      [6, 0]]
y2 = [1, 2, 0]
model.train(X2, y2)
X3 = [[7, 0],
      [8, 8],
      [9, 9]]
y3 = [0, 1, 1]
model.train(X3, y3)
score = model.score(X3, y3)
```
Here the tasks may or may not be related; often they are only slightly related (e.g. identifying different types of objects in each training round).
### Single Incremental Task
This is also training the model several times, here we have a single task in `X` but do not feed the entire dataset at once. In semi-python pseudo code:
```
model = Whatever(...)
X = [[1, 0],
     [2, 2],
     [3, 0],
     [4, 4],
     [5, 5],
     [6, 0]]
y = [0, 1, 0, 3, 2, 0]
model.train(X[:2, :], y[:2])
score1 = model.score(X[:2, :], y[:2])
model.train(X[:4, :], y[:4])
score2 = model.score(X[:4, :], y[:4])
model.train(X, y)
score3 = model.score(X, y)
```
Here the task is one but it may be a big one. One place where this technique is used is to build a [learning curve](https://scikit-learn.org/stable/modules/learning_curve.html#learning-curve), which is one way of evaluating if we have enough data to understand the variation of the task.
---
Extra note: in the multi-task case we said that $y_1 \cap y_2 \cap y_3 = \emptyset$ could be (and most likely is) the case. One example would be: $y_1$ are different models of cars and $y_2$ are different models of ships. And the question is: does understanding different models of cars help with differentiating different models of ships?
(P.S. `y`s will always be enumerated from 0 up to the number of classes, i.e. the numeric values of `y` will always be the same but their class meaning does not need to be).
|
114415
|
1
|
114416
| null |
8
|
490
|
I have customer demographic data that include columns like: age, the first half of the postcode, occupation (there is a defined list of possible occupations), and more. Each month I get a new batch of 1000 rows of this type of data (which is not labelled) and I need to put this into my trained model to predict what item (out of 5 items) each person in the new batch data set is most likely going to buy (a multiclass classification problem).
Each time I receive this data, I compare the summary statistics between the old and new data, and investigate any changes in the distribution of the categorical variables using hypothesis testing. If my tests show that my new batch of data had vastly different summary stats, or distributions to my training set e.g.
- The new batch targeted people under 25 only, whereas my training set contains all age groups.
- The new batch targeted people from a specific area of the UK, whereas my training set contains all possible locations in the UK.
Would I need to:
- Make any changes to my training set, or my overall workflow, to adjust for this?
- As far as I know, this is data drift. Am I correct in saying that?
- If the batch data coming in was labelled, so we knew what items these people bought, and there was a sizable difference in the proportion of each product sold, what could I do to quantify this instead of naively adding this new data to the training set and retraining my model?
Thanks
|
How to Combat Data Drift
|
CC BY-SA 4.0
| null |
2022-09-14T11:16:13.590
|
2022-09-14T12:49:44.130
| null | null |
140415
|
[
"classification",
"feature-engineering",
"data-science-model",
"mlops",
"data-drift"
] |
As you suggest, that situation could end up with your monitoring system indicating data drift. To evaluate this scenario, let's classify some types of data drift we could have:
- feature drift: the distribution of the input features (comparing training datasets vs prediction datasets) changes enough (with respect to a defined threshold) to raise an alert
- target drift: the distribution of the label values changes when comparing training vs prediction distributions
- concept drift: the relation between the input features and the target values changes; it can arise when the label is redefined (for instance, the business rules for deciding which clients are active or inactive with some products: if the labeling rules change, the same input feature values could be assigned to different target values before vs after the redefinition).
These drifts can be monitored via hypothesis testing as you say (e.g. the [Kolmogorov-Smirnov test](https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html), the [Population Stability Index](https://www.listendata.com/2015/05/population-stability-index.html), etc.), where you define the warning thresholds.
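For example, a two-sample Kolmogorov-Smirnov test on a single feature could look like this minimal sketch (hypothetical data; the 0.05 threshold is an arbitrary choice you would tune):
```
import numpy as np
from scipy.stats import ks_2samp

train_feature = np.random.normal(40, 10, size=5000)    # e.g. ages seen at training time
incoming_feature = np.random.normal(23, 2, size=1000)  # e.g. a new batch targeting under-25s

stat, p_value = ks_2samp(train_feature, incoming_feature)
if p_value < 0.05:
    print(f"Possible feature drift (KS statistic={stat:.3f}, p={p_value:.3g})")
```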
The point is: what is the goal of having this drift monitoring? In general, the advice is to retrain your models when this type of drift occurs: it might or might not improve your model performance, but you make sure your model stays updated with fresh data. Another goal is of course to keep track of the statistics of your incoming data.
Nevertheless, in this scenario of a subset of client ages, although the model was trained with a more "complete" dataset, you are making inference on a subset of the whole population used for training, so your model could still be valid enough (unless this new scenario becomes the usual one, in which case a more specific model could be trained on this new kind of data).
|
What techniques are used to analyze data drift?
|
It depends on what type of data we are talking about: tabular, image, text...
This is part of my PhD, so I am completely biased; I will suggest Explanation Shift (I would love some feedback). It works well on tabular data.
- Package: skshift https://skshift.readthedocs.io/
- Paper: https://arxiv.org/pdf/2303.08081.pdf
In the related work section one can find other approaches.
The main idea behind "Explanation Shift" is to see how distribution shift impacts the model behaviour. To do this, we compare how the explanations (Shapley values) look on the test set versus on the supposedly Out-Of-Distribution data.
The issue is that in the absence of labels for the OOD data (y_ood), one cannot estimate the performance of the model. One either needs some samples of y_ood or a characterization of the type of shift. Since you can't calculate performance metrics, the second best option is to understand how the model has changed.
There is also the well-known library Alibi Detect ([https://github.com/SeldonIO/alibi-detect](https://github.com/SeldonIO/alibi-detect)), which offers other methods :)
|
114424
|
1
|
114426
| null |
1
|
33
|
I am trying to train a CNN, using the MNIST dataset (which I perform data augmentation on), to classify numbers on a sudoku grid from 0-9.
While mostly successful, my network seems to get confused between 3s and 8s, and 1s and 7s because of how similar they look. This is unacceptable, however, since incorrect classification will make solving the sudoku problem impossible.
I am using the ResNet50 pre-trained model as my convolutional base.
Is there any way to more harshly penalise mis-classification of 3s and 8s, or 1s and 7s during the training process? I did find a link which seems to propose a solution - [https://github.com/keras-team/keras/issues/2115](https://github.com/keras-team/keras/issues/2115) - but I am quite new to TensorFlow/Keras, and don't quite understand the code given.
I would very much appreciate if anyone could either explain the proposed solution on the link above, or suggest improvements to my network/code (which I have added below).
THE NETWORK:
```
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
(train_images, train_labels), (test_images, test_labels) = tf.keras.datasets.mnist.load_data()
# To make the model work with grayscale images, we need to make them APPEAR to be RGB. The easiest way is to repeat the image array 3 times on a new dimension.
# Because we will have the same image over all 3 channels, the performance of the model should be the same as it was on RGB images.
train_images, test_images = np.repeat(train_images[..., np.newaxis], 3, -1), np.repeat(test_images[...,np.newaxis], 3, -1)
# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0
# Resize the input shape because ResNet50 can take the input image having height, width as multiples of 32 and 3 as channel width
train_images, test_images = tf.image.resize(train_images, [32,32]), tf.image.resize(test_images, [32,32])
assert train_images.shape == (60000, 32, 32, 3) # Images resized from 28x28 to 32x32, 3 channels
assert test_images.shape == (10000, 32, 32, 3)
assert train_labels.shape == (60000,) # Labels - numbers from 0 to 9
assert test_labels.shape == (10000,) # Labels - numbers from 0 to 9
# Data augmentation parameters
rotation_range_val = 10 # rotation
width_shift_val = 0.1 # horizontal shift
height_shift_val = 0.1 # vertical shift
zoom_range_val=[0.8,1.3] # zoom
train_datagen = tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range = rotation_range_val,
width_shift_range = width_shift_val,
height_shift_range = height_shift_val,
zoom_range = zoom_range_val,
)
train_datagen.fit(train_images)
# Pretrained convolutional base
base_model = tf.keras.applications.ResNet50(input_shape = (32,32,3), include_top=False, weights='imagenet')
# Freezing the base
base_model.trainable = False
# Adding dense layers on top
classifier_model = tf.keras.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation='relu'), # hidden layer
tf.keras.layers.Dense(10) # output layer
])
# Combine base and classifier
model = tf.keras.Sequential([
base_model,
classifier_model
])
# Compile the model
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
# Fits the model on batches with real-time data augmentation
history = model.fit(train_datagen.flow(train_images, train_labels, batch_size=64), epochs=20, verbose=2) # If unspecified, batch size defaults to 32.
plt.plot(history.history['accuracy'], label='accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')
plt.show()
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
model.save('C:\Programming 2022-23\OpenCV\Sudoku Solver\ResNet50_model_with_augmentation')
```
HOW I AM USING THE NETWORK TO CLASSIFY:
```
import cv2 as cv
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
import tensorflow_datasets as tfds
tfds.disable_progress_bar()
localised_grid = np.load("C:\Programming 2022-23\OpenCV\Sudoku Solver\Localised sudoku grid.npy")
# Visualising the gridlines
output_visualise = np.copy(localised_grid)
for i in range (0,500,50):
cv.line(output_visualise, (i,0), (i,450),(255,0,0), thickness = 3) # Drawing vertical lines
cv.line(output_visualise, (0,i), (450,i),(255,0,0), thickness = 3) # Drawing horizontal lines
#cv.imshow("Gridlines", output_visualise)
# MNIST dataset contains black and white images, so we threshold the images
localised_grid_gray = cv.cvtColor(localised_grid, cv.COLOR_BGR2GRAY)
adaptive_thresh = cv.adaptiveThreshold(localised_grid_gray, 255, cv.ADAPTIVE_THRESH_GAUSSIAN_C, cv.THRESH_BINARY_INV, 11, 2)
for i in range (0,2):
adaptive_thresh = cv.medianBlur(adaptive_thresh, 3)
cv.imshow("Thresholded", adaptive_thresh)
sudoku_grid_stored = np.empty((9,9,46,46)).astype("uint8")
for y in range(0,9):
for x in range(0,9):
cropped_digit = adaptive_thresh[50*y+2:(50*y)+48,50*x+2:(50*x)+48].astype("uint8") # Crop slightly to remove any sudoku grid outlines that may exist
sudoku_grid_stored[y][x] = cropped_digit
# cv.imshow("7", sudoku_grid_stored[2][0]) # 7 lies in 3rd row, 1st column (zero indexed)
# Inputs to CNN must have three colour channels
sudoku_grid_stored = np.repeat(sudoku_grid_stored[..., np.newaxis], 3, -1)
# Inputs to CNN must be normalised between 0 and 1, since this is how it was trained
sudoku_grid_stored = sudoku_grid_stored/255.0
# Inputs to CNN must be 32x32, since this is how it was trained
sudoku_grid_resized= np.empty((9,9,32,32,3))
for j in range(0,9):
for i in range (0,9):
sudoku_grid_resized[j][i] = cv.resize(sudoku_grid_stored[j][i], (32,32))
# Load trained CNN
CNN_model = tf.keras.models.load_model('C:\Programming 2022-23\OpenCV\Sudoku Solver\ResNet50_model_with_augmentation')
test = sudoku_grid_resized[0][0]
test = np.expand_dims(test,axis=0)
prediction = np.argmax(CNN_model.predict(test))
print(prediction)
```
RESULT OF THRESHOLDING (I.E. INPUTS TO CNN) - for example, the top left value of 3 is being read as 8.
[](https://i.stack.imgur.com/lX3J8.png)
|
How to stop my CNN getting confused between 3s and 8s, and 1s and 7s?
|
CC BY-SA 4.0
|
0
|
2022-09-14T18:02:46.603
|
2022-09-14T20:26:31.000
| null | null |
140437
|
[
"classification",
"keras",
"tensorflow",
"convolutional-neural-network",
"mnist"
] |
> Is there any way to more harshly penalise mis-classification of 3s and 8s, or 1s and 7s during the training process?
Yes, by setting [weights](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/losses/weighted_sparse_categorical_crossentropy.py) for the loss function, but this would likely lead to lower scores on the other digits.
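As a minimal sketch (reusing the variable names from your training script; the 2.0 factor is an arbitrary value to tune), the simplest way to up-weight the confusable digits is Keras' `class_weight` argument rather than a fully custom loss:
```
# Up-weight the digits that are being confused; all other digits keep weight 1.0
class_weight = {digit: 1.0 for digit in range(10)}
for digit in (1, 3, 7, 8):
    class_weight[digit] = 2.0

history = model.fit(
    train_datagen.flow(train_images, train_labels, batch_size=64),
    epochs=20,
    class_weight=class_weight,
    verbose=2,
)
```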
Firstly, I note you have just train and test sets. Standard practice is to split into train, validation and test (usually by splitting the training set into train and validation); then you can perform cross validation.
Additionally, you should stratify (have the same representation of classes) in each dataset.
As to your main question, several approaches:
- [More augmentation](https://albumentations.ai/docs/getting_started/image_augmentation/), including elastic distortion, brightness and cutouts
- [Cross validation](https://scikit-learn.org/stable/modules/cross_validation.html)
- Use early stopping (potentially your model is overfitting the training data)
- Ensemble more neural networks
- Data exploration: see which images are failing in the validation set and look for patterns
- [Test time augmentation](https://machinelearningmastery.com/how-to-use-test-time-augmentation-to-improve-model-performance-for-image-classification/): perform augmentation on your validation (and test) images and take an average
|
Problem with CNN
|
The default target size in `flow_from_directory` is 256x256 (height x width), so your data is resized to 256x256 while being read, whereas you specified `input_shape=(700, 460, 3)` in the layer:
```
ImageDataGenerator.flow_from_directory(
    directory,
    target_size=(256, 256),   # <-- default
    color_mode="rgb",
    classes=None,
    class_mode="categorical",
    batch_size=32,
    shuffle=True,
    seed=None,
    save_to_dir=None,
    save_prefix="",
    save_format="png",
    follow_links=False,
    subset=None,
    interpolation="nearest",
)
```
|
114455
|
1
|
114486
| null |
7
|
2278
|
I want to do a data science project. I want to use price history to predict future prices.
I want to use `correlation(y, y_pred)` as my loss function, but I found it hard to calculate its first and second derivatives.
Has anyone used correlation as loss function, and is it good?
|
Is Pearson correlation a good loss function?
|
CC BY-SA 4.0
| null |
2022-09-16T02:23:38.123
|
2023-03-23T20:45:01.830
|
2022-09-17T16:00:54.833
|
25180
|
140484
|
[
"machine-learning",
"loss-function"
] |
I think Dave's answer points out the most pressing issues:
- translational invariance
- Absolute scale invariance
In Tensorflow we can define our correlation function:
```
class CorrLoss(tf.keras.losses.Loss):
    def call(self, y_true, y_pred):
        res_true = y_true - tf.reduce_mean(y_true)
        res_pred = y_pred - tf.reduce_mean(y_pred)
        cov = tf.reduce_mean(res_true * res_pred)
        var_true = tf.reduce_mean(res_true**2)
        var_pred = tf.reduce_mean(res_pred**2)
        sigma_true = tf.sqrt(var_true)
        sigma_pred = tf.sqrt(var_pred)
        return -cov / (sigma_true * sigma_pred)
```
And quickly whip up a simple linear model:
```
model = tf.keras.Sequential([
    layers.Dense(input_shape=[1,], units=1)
])
```
And a data set that is learnable by this model:
```
x = tf.random.normal((1000,))
y = 5 * x + 10 + tf.random.normal((1000,))
```
Training with our choice of loss function, model, and data, we can visually understand that correlation alone is not sufficient. As Dave describes, least squares is often effective.
[](https://i.stack.imgur.com/7Aqx4.png)
Mostly for my own amusement, I considered if maximizing $\mathbb{E}[Y \hat Y]$ would fare any better than maximizing Pearson's correlation.
Here is the custom loss function:
```
class ProdLoss(tf.keras.losses.Loss):
    def call(self, y_true, y_pred):
        return -tf.reduce_mean(y_true * y_pred)
```
The following is a close success over the consistently-horrible choice of correlation:
[](https://i.stack.imgur.com/PHzu9.png)
And often it would look better, but it wasn't reliable! It would also often look like this:
[](https://i.stack.imgur.com/RcHeE.png)
Interestingly, the product moment will tend to ignore the true values by making the predicted values extreme. I noticed this by taking the same problem and increasing the number of epochs to $10^4$.
[](https://i.stack.imgur.com/HV7sz.png)
Thus the correlation and mixed moment are unreliable loss functions for achieving $Y \approx \hat Y$.
|
Lower loss always better for Probabilistic loss functions?
|
If $0.5$ is the threshold for declaring a class (perhaps more sensible in a binary classification than your problem, yes), there is no incentive for accuracy to regard a $1$ as a $0.95$ instead of a $0.51$.
Meanwhile your cross-entropy loss function sees that the correct answer is $1$ and wants to get the probability as close to $1$ as it can. Accuracy, however, doesn't care if the predicted probability is $0.51$ or $0.95$, so accuracy does not change as you move the predicted probability closer and closer to the observed value, even though the loss function decreases by getting closer and closer to the observation (as you would expect loss to do...consider how square loss behaves in a linear regression).
|
114479
|
1
|
114482
| null |
0
|
29
|
When discussing big data, it is sometimes mentioned that data modeling can be done with a tool like MapReduce, while data processing may be performed by Apache Spark. What is the difference between data modeling tasks and data processing tasks? Thanks in advance.
|
What is the difference between Data Modeling and Data Processing?
|
CC BY-SA 4.0
| null |
2022-09-17T04:59:43.763
|
2022-09-17T14:19:00.993
| null | null |
134529
|
[
"data-analysis",
"apache-spark",
"apache-hadoop"
] |
- Data modeling means representing the data, usually with a somewhat compact model. Modeling implies simplifications: this can lead to a good model which reliably represents the patterns in the data or a terrible model which simplifies too much or not enough.
- Data processing is applying any kind of process to the data.
For the record, map-reduce is relevant only as a technique for processing large data efficiently.
|
Processing data in the right manner in data science
|
The goals of all these methodological guidelines is to avoid [data leakage](https://en.wikipedia.org/wiki/Leakage_(machine_learning)).
Example: let's imagine we want to classify short messages (e.g. tweets). When inspecting the data we find various kinds of smileys: `:-)`, `:|`, `:-/`... At preprocessing stage we replace all smileys found in the data with a special token like `<smiley>` (or something more specific).
- If the detection/replacement is done on the whole data, every occurrence of a smiley in the test set is replaced with `<smiley>`.
- If the detection/replacement is done on the training set only, even after preprocessing there might be a few smileys left in the test set, because some uncommon ones didn't appear in the training set.
In the first case there is data leakage: we fixed some issues in the test set even though this wouldn't have been possible with actual fresh data (here the variants of smiley that were not seen in the training set). In the second case the test set is "imperfect", i.e. it's exactly as if it was made of "fresh" unseen data, therefore the evaluation will be more realistic.
This example shows why it's always safer to separate the data first, design the preprocessing steps on the training data, and then apply exactly the same preprocessing steps to the test data.
In practice there can be cases where it's more convenient to apply some general preprocessing to the whole data. The decision depends on the task and the data: sometimes the risk of data leakage is so small that it can be neglected. However it's crucial to keep in mind that even the design of the preprocessing can be a source of data leakage.
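As a concrete illustration of the "fit on the training data, apply to both" rule, here is a minimal sketch (hypothetical data) with a scikit-learn scaler; the same pattern applies to any preprocessing step:
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = np.random.rand(100, 5), np.random.randint(0, 2, 100)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

scaler = StandardScaler().fit(X_train)      # statistics come from the training data only
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)    # the test set is only transformed, never fitted on
```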
|
114507
|
1
|
114563
| null |
0
|
108
|
In a machine learning model one of the features is unemployment:
```
Month Unemployment
May-2022 3.6%
Jun-2022 3.7%
Jul-2022 3.8%
Aug-2022 3.9%
```
What I need is to use the trend over the last three months as an additional feature; in this case it went up 0.1% each month, so would the trend be 0.3%? Note that I'm not looking to calculate the moving average.
|
How to calculate a trend to use as a feature in a machine learning model?
|
CC BY-SA 4.0
| null |
2022-09-18T21:00:10.573
|
2022-09-20T17:29:54.873
| null | null |
76801
|
[
"machine-learning",
"machine-learning-model"
] |
You have several choices:
- The trend can be calculated as $\frac{last - first}{first}$, e.g. $\frac{3.9 - 3.6}{3.6}$
- You can perform a linear regression including the 4 points and use the slope as trend.
- Or any variant, e.g. average difference: $\frac{(3.7-3.6)+(3.8-3.7)+(3.9-3.8)}{3}$.
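As a minimal sketch (column and index names are assumptions), all three options can be computed with pandas/numpy and added as features:
```
import numpy as np
import pandas as pd

df = pd.DataFrame({"unemployment": [3.6, 3.7, 3.8, 3.9]},
                  index=["May-2022", "Jun-2022", "Jul-2022", "Aug-2022"])

last, first = df["unemployment"].iloc[-1], df["unemployment"].iloc[0]
relative_trend = (last - first) / first                            # option 1
slope = np.polyfit(np.arange(len(df)), df["unemployment"], 1)[0]   # option 2: regression slope
avg_diff = df["unemployment"].diff().mean()                        # option 3: average difference
```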
|
Which method to use to remove trend from time series?
|
Detrend does a least squares fit (linear or constant) and subtracts this from your data points. You can look this up in the [docs](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.detrend.html).
Simply taking the difference between consecutive data points will in general lead to other results.
In general the regression based detrending seems to be more reasonable. You could also think about using [random sample consensus (RANSAC)](https://en.wikipedia.org/wiki/Random_sample_consensus) to be more robust to outliers.
|
114534
|
1
|
114535
| null |
1
|
46
|
I am training a CNN model with about 20,000 images in two classes, 10,000 images each. The image sizes vary between 50x50 and 1000x500 pixels. I am resizing all images to the average size of all images, which is 350x150 pixels, and then training a CNN with this architecture:
```
import cv2
import os
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import Flatten
from keras.constraints import maxnorm
from keras.optimizers import SGD
from keras.layers.convolutional import Convolution2D
from keras.layers.convolutional import MaxPooling2D
from keras.utils import np_utils
from sklearn.model_selection import train_test_split
import random
import matplotlib.pyplot as plt
data = []
labels = []
imagePaths = sorted(list(my_images))
random.seed(42)
random.shuffle(imagePaths)
# loop over the images
for imagePath in imagePaths:
    image = cv2.imread(imagePath)
    image = cv2.resize(image, (350, 150))
    data.append(image)
    # extract the class label from the image path and update the
    # labels list
    label = imagePath.split(os.path.sep)[-2].split('/')[-1]
    if label == 'pos':
        label = 1
    elif label == 'neg':
        label = 0
    labels.append(label)
# scale the raw pixel intensities to the range [0, 1]
data = np.array(data, dtype="float") / 255.0
labels = np.array(labels)
# partition the data into training and testing splits using 75% of
# the data for training and the remaining 25% for testing
(trainX, testX, trainY, testY) = train_test_split(data, labels, test_size=0.25, random_state=42)
unique, counts = np.unique(trainY, return_counts=True)
print(dict(zip(unique, counts)))
y_train = np_utils.to_categorical(trainY)
y_test = np_utils.to_categorical(testY)
num_classes = 2
# # # Create the model
model = Sequential()
model.add(Convolution2D(32, 3, 3, input_shape=(150, 350, 3), activation='relu', border_mode='same'))
model.add(Dropout(0.2))
model.add(Convolution2D(32, 3, 3, activation='relu', border_mode='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Convolution2D(64, 3, 3, activation='relu', border_mode='same'))
model.add(Dropout(0.2))
model.add(Convolution2D(64, 3, 3, activation='relu', border_mode='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dropout(0.2))
model.add(Dense(1024, activation='relu', W_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(512, activation='relu', W_constraint=maxnorm(3)))
model.add(Dropout(0.2))
model.add(Dense(num_classes, activation='softmax'))
# Compile model
epochs = 25
lrate = 0.01
decay = lrate / epochs
sgd = SGD(lr=lrate, momentum=0.9, decay=decay, nesterov=False)
model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
```
I am getting an accuracy of 95%, which is really good, and I am using it as the production model. However, I am wondering whether I can improve the accuracy, since the number of images is quite high and the classification problem looks separable:
[example images](https://drive.google.com/file/d/1veBOUord7ikjP70nxsoU8UK689JdgBYP/view?usp=sharing)
Is there any chance to improve the model and to squeeze out a bit more from the prediction?
|
Improve CNN classification accuracy
|
CC BY-SA 4.0
| null |
2022-09-20T05:58:04.213
|
2022-09-20T07:20:20.503
| null | null |
140617
|
[
"python",
"keras",
"tensorflow",
"cnn",
"image-classification"
] |
95% is very good; I'm not sure whether improving that result might not alter the behaviour in production. Keeping an error margin might be helpful to avoid overfitting, but that may not apply in your case.
Nevertheless, here are some tips to improve your model even more:
- Apply the AdamW algorithm instead of SGD. AdamW is a variant of Adam with decoupled weight decay; I've already improved models by 20% using this optimizer (a short sketch follows after this list).
[https://www.fast.ai/posts/2018-07-02-adam-weight-decay.html](https://www.fast.ai/posts/2018-07-02-adam-weight-decay.html)
- Fine-tune your hyperparameters & structure using a genetic algorithm. This solution requires a lot of patience, as it explores many different model configurations, but you will eventually reach better results. You could therefore rent a powerful GPU in the cloud for a few hours to do this task (= a few dollars).
[https://sainivedh.medium.com/optimization-of-cnn-architecture-using-genetic-algorithm-for-image-classification-5c48f25dac9c](https://sainivedh.medium.com/optimization-of-cnn-architecture-using-genetic-algorithm-for-image-classification-5c48f25dac9c)
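As a minimal sketch of the first suggestion (reusing `model` from your script; assumes a recent TensorFlow where `tf.keras.optimizers.AdamW` is available, older versions expose it via `tensorflow_addons`; the learning rate and weight decay values are assumptions to tune):
```
import tensorflow as tf

optimizer = tf.keras.optimizers.AdamW(learning_rate=1e-3, weight_decay=1e-4)
model.compile(loss='categorical_crossentropy',
              optimizer=optimizer,
              metrics=['accuracy'])
```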
|
Improving the results of CNN
|
This is a bit strange... one problem may be that you do not have many training samples. Do you use a pretrained model? If not, using a pretrained model can potentially improve classification accuracy (especially with limited training samples).
[https://keras.io/applications/](https://keras.io/applications/)
-Edit- This is a good sample code: [https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb](https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb)
Adjusted for multiclass:
```
import keras
from keras.applications import VGG16
conv_base = VGG16(weights='imagenet',
include_top=False,
input_shape=(150, 150, 3))
import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator
base_dir = 'C:/kerasimages'
train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'val')
test_dir = os.path.join(base_dir, 'test')
datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20
def extract_features(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(
        directory,
        target_size=(150, 150),
        batch_size=batch_size,
        class_mode='binary')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            # Note that since generators yield data indefinitely in a loop,
            # we must `break` after every image has been seen once.
            break
    return features, labels
train_features, train_labels = extract_features(train_dir, 2000)
validation_features, validation_labels = extract_features(validation_dir, 1000)
test_features, test_labels = extract_features(test_dir, 1000)
from keras.utils import to_categorical
print(train_labels)
print(train_labels.shape)
train_labels = to_categorical(train_labels)
print(train_labels)
print(train_labels.shape)
validation_labels = to_categorical(validation_labels)
test_labels = to_categorical(test_labels)
train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features = np.reshape(test_features, (1000, 4 * 4 * 512))
from keras import models
from keras import layers
from keras import optimizers
model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
# NUMBER OF CLASSES
model.add(layers.Dense(3, activation='softmax'))
model.summary()
conv_base.trainable = False
from keras.preprocessing.image import ImageDataGenerator
train_datagen = ImageDataGenerator(
rescale=1./255,
rotation_range=40,
width_shift_range=0.2,
height_shift_range=0.2,
shear_range=0.2,
zoom_range=0.2,
horizontal_flip=True,
fill_mode='nearest')
# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
# This is the target directory
train_dir,
# All images will be resized to 150x150
target_size=(150, 150),
batch_size=20,
# Since we use categorical_crossentropy loss, we need categorical labels
class_mode='categorical')
validation_generator = test_datagen.flow_from_directory(
validation_dir,
target_size=(150, 150),
batch_size=20,
class_mode='categorical')
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.RMSprop(lr=2e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=30,
validation_data=validation_generator,
validation_steps=50,
verbose=2)
#######################################
# Fine tuning
#conv_base.summary()
conv_base.trainable = True
set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False
model.compile(loss='categorical_crossentropy',
optimizer=optimizers.RMSprop(lr=1e-5),
metrics=['acc'])
history = model.fit_generator(
train_generator,
steps_per_epoch=100,
epochs=100,
validation_data=validation_generator,
validation_steps=50)
model.save('my_model_multiclass.hdf5')
```
|
114582
|
1
|
114583
| null |
0
|
30
|
Sorry if this is a stupid question, but I can't seem to find any explanation of it online. If supervised machine learning only works on labeled datasets, can you not use it to predict a value for unlabelled data after the model is already trained?
And if that is true, how could you possibly use those models in real life scenarios? For example you write and train a classifier to predict the age group of the user, can you in any way use that created model for actual prediction of an unlabelled entry?
And if not, what is the point of building this kind of model?
Thank you!
|
I am struggling to understand the point of supervised ML models in real world scenarios
|
CC BY-SA 4.0
| null |
2022-09-21T13:07:29.897
|
2022-09-21T13:22:19.630
| null | null |
140679
|
[
"machine-learning",
"supervised-learning"
] |
Supervised means that the training stage is supervised and requires labels. It does not mean that you need labels during inference.
Here is a small example using a Random Forest classifier with scikit-learn ([source](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html)):
```
>>> from sklearn.ensemble import RandomForestClassifier
>>> from sklearn.datasets import make_classification
>>> X, y = make_classification(n_samples=1000, n_features=4,
... n_informative=2, n_redundant=0,
... random_state=0, shuffle=False)
>>> clf = RandomForestClassifier(max_depth=2, random_state=0)
>>> clf.fit(X, y)
RandomForestClassifier(...)
>>> print(clf.predict([[0, 0, 0, 0]]))
[1]
```
As you can see, training the model takes in labels (`clf.fit(X, y)` where `y` are the labels), but during inference the model runs the prediction for an unseen datapoint with no label (`print(clf.predict([[0, 0, 0, 0]]))`, where `[[0, 0, 0, 0]]` is a new datapoint which is classified as belonging to class `1`).
In contrast, unsupervised ML does not require labels during training. [This blog post](https://machinelearningmastery.com/supervised-and-unsupervised-machine-learning-algorithms/) provides some further explanations and examples.
|
How does real world machine learning production systems run?
|
There are many things to consider to have a model in production. The main ones your are asking about are:
- Functionality
- Architecture
# Functionality
For your model to be used in production from a web server, you can host an API which exposes your model.
For example, you have a Flask Python server running, where you map an endpoint (e.g. `GET http://<your_host>/prediction/image.jpg`) to the `predict()` function of your model.
Then you mentioned making it a continuous online-learner. Most classifiers will improve with more data if that data is annotated (i.e. labeled), but for that, you need to manually annotate them and re-feed them to your system and retrain your model. If you could automatically confidently label new data, you wouldn't need to improve your system. So, I would say, some manual labor would be required (labeling), but the rest can be automated. You can add more end-points to your web server, where you can upload more training data, and the system re-trains your model, takes care of versioning and re-loads the latest trained model.
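As a minimal sketch of such an endpoint with Flask (the model file name and the JSON payload format are assumptions; joblib stands in for however you persist the model):
```
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("latest_model.pkl")   # hypothetical path to the persisted model

@app.route("/prediction", methods=["POST"])
def predict():
    payload = request.get_json()                       # e.g. {"features": [0.1, 2.3, ...]}
    prediction = model.predict([payload["features"]])
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```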
# Architecture
## Storage
You mention `pickle` files and you are afraid that they are too large on disk. However, nowadays, with cloud solutions, this is often not a problem.
You can use blob-storage solutions and prices are often very low (e.g. [https://azure.microsoft.com/en-us/services/storage/blobs/](https://azure.microsoft.com/en-us/services/storage/blobs/) costs about 0.002 euros/GB/month).
You can, of course, keep many pickles there, for versioning (recommended). However, if you want to minimize costs, you could only store the latest model.
Further, if your API is used often, you don't want to keep reloading your model every time. It would be better to have it always available in RAM. It is not expensive, again, to host a server with a lot of RAM in the cloud.
## Layout
An architecture layout you can have is:
```
+----------------+          +--------------+
|                |          |              |
|  ADMIN SERVER  | -------> | BLOB STORAGE |
|                |          |              |
+----------------+          +--------------+
        |                          ^
        |                          |
        |              +-----------+-----------+
        |              |                       |
        |    +------------------+    +----------------+
        |    |                  |    |                |
        |    |  PREDICT SERVER  |    | PREDICT SERVER |
        |    |                  |    |                |
        |    +------------------+    +----------------+
        |              ^                      ^
        |              |                      |
        |              +----------+-----------+
        |                         |
        |               +------------------+
        |               |                  |
        +-------------> |      QUEUE       |
                        |                  |
                        +------------------+
```
Here, the `ADMIN SERVER` takes care of all the functionalities of re-training the model and uploading new models to the storage and publishing jobs to the queue for the `PREDICT SERVERS` to fetch the latest models from `BLOB STORAGE`.
The `BLOB STORAGE` holds the models.
The `PREDICT SERVER`s expose your `predict()` function, so your model is accessible to other systems. Here, the models are kept in RAM for faster predictions. Depending on the usage of your model, you might want to have $\geq1$ server for predictions. Since your model is persisted on `BLOB STORAGE` and not on your local hard disk, this is possible: they can all fetch the latest model.
The `QUEUE` is how the `ADMIN SERVER` can communicate with all `PREDICT SERVER`s.
|
114599
|
1
|
114683
| null |
0
|
104
|
As part of my master's thesis, I am using different ML models for prediction and classification. The problem is that I am confused about whether I should use only the result for a fixed random_state (say 10) or a different random_state each time (for example, use 3 different random_state values and take the mean of the results).
|
same random_state or mean of the different random_state?
|
CC BY-SA 4.0
| null |
2022-09-21T22:52:51.417
|
2022-09-28T14:50:45.857
| null | null |
138575
|
[
"machine-learning",
"classification",
"regression",
"data-science-model"
] |
Yes, ideally, you should run experiments with different random seeds.
Explanation
The reason why it is recommended to use a fixed random seed is reproducibility, i.e. you don't want to get different results every time you train a model. However, fixing the random seed does not solve the problem that the results of any non-deterministic model will depend on the chosen random seed. It only ensures that you (or, in this case, also your thesis supervisor) are able to reproduce the results.
But as the authors of [Why Comparing Single Performance Scores Does Not Allow to Draw Conclusions About Machine Learning Approaches](https://arxiv.org/abs/1803.09578) write:
>
[...] there is a high risk that a statistical significance in this type of evaluation is not due to a superior learning approach. Instead, there is a high risk that the difference is due to chance
because
>
Non-deterministic approaches like neural networks can produce models with varying performances and comparing performances based on single models does not allow drawing conclusions about the underlying learning approaches.
and therefore to
>
[...] not submit only a single model, but multiple models trained with different random seed values. Those submissions should not be treated individually. Instead the mean and the standard deviation of test scores should be reported.
Considerations regarding k-fold CV
This does not apply if you perform k-fold cross-validation, since the random number generator will progress from fold to fold and, therefore, the models in each fold will be based on different random numbers.
Practical considerations
Having said that, I would also check what validation strategies have been taught in your MSc specifically and what your thesis supervisor(s) think of this (e.g. check papers they have published). Moreover, since this is a master thesis for which you have limited time available you might need to be pragmatic too. If your model takes a week to be trained and there is a larger number of models to be trained then you might need to cut down on the number of experiments per model. If that is the case, I would highlight this as a limitation in your thesis.
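For illustration, here is a minimal sketch (hypothetical data and model, not from the thesis setup) of reporting the mean and standard deviation over several random seeds:
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)

scores = []
for seed in (10, 20, 30, 40, 50):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=seed)
    clf = RandomForestClassifier(random_state=seed).fit(X_tr, y_tr)
    scores.append(accuracy_score(y_te, clf.predict(X_te)))

# Report mean and standard deviation instead of a single-seed score
print(f"accuracy: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```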
|
Why would one crossvalidate the random state number?
|
I personally think that the general idea of optimising your model with different random seeds is not a good idea. There are many other, more important, aspects of the modelling process that you can worry about, tweak and compare before spending time on the effects of random initialisation.
That being said, if you just want to test the effect of random initialisation of model weights on a final validation metric, this could be an approach to do so; it is kind of the reverse argument to my point above. If you can show for different random seeds (ceteris paribus: with all other parameters equal) that the final model performs differently, it shows that there is either some inconsistency in the model or even a bug in the code. I would not expect a well-validated model to give hugely differing results when run with a different random seed, so if it does, it tells me something weird is going on!
|
114628
|
1
|
114766
| null |
0
|
119
|
If I have an image of apple then how can I find the height of an apple using Deep learning?
The photo of an apple is taken from the top view and I want to detect the height of that apple. How to do it?
Are there any papers regarding this?
I have searched a lot, but the solutions only give me length and breadth, not height. So how do I determine the height?
|
Given an image how to find height of an object?
|
CC BY-SA 4.0
| null |
2022-09-23T06:44:53.137
|
2022-09-29T13:47:13.410
|
2022-09-24T23:51:57.383
|
43000
|
140748
|
[
"machine-learning",
"deep-learning",
"computer-vision",
"object-detection",
"3d-object-detection"
] |
Put the apple next to a penny and capture a top-down view. Then, by using the known ground truth (since you know the dimensions of the penny), you can estimate the distance to the ground and the distance to the top of the apple. After that you should be able to infer the height of the apple.
If you know the dimensions of the plate with certainty then yes.
|
Calculate image width
|
`cv2.selectROI()` returns a 2D rectangle with `(x, y, width, height)` (see [the Rect constructor](https://docs.opencv.org/4.0.1/d2/d44/classcv_1_1Rect__.html#a5a41149f4b012b9f323b5913454375a1) - that object is created from [selectROI()](https://docs.opencv.org/4.0.1/d7/dfc/group__highgui.html#ga8daf4730d3adf7035b6de9be4c469af5)).
So, if you want to measure the apparent width in pixels, you need `marker[2]`.
|
114636
|
1
|
114641
| null |
0
|
63
|
Assuming I have this dataset:
| Label | % Total |
|---|---|
| 0 | 18.53% |
| 1 | 8.18% |
| 2 | 26.22% |
| 3 | 16.46% |
| 4 | 8.62% |
| 5 | 9.58% |
| 6 | 5.88% |
| 7 | 6.53% |
Could I say I have a class imbalance problem?
Is it mandatory in this case to fix the problem by trying all the various techniques (resampling, data augmentation, changing the performance metric, etc.)?
Is there a mathematical formula or rule of thumb to quantify the severity of the imbalance, i.e. to understand whether there is a class imbalance problem at all?
I think we have to evaluate case by case; the techniques for handling imbalanced data might not work at all, and there isn't a general rule. Any ideas?
|
How can I say if I have a class imbalance issue in my data?
|
CC BY-SA 4.0
| null |
2022-09-23T11:09:29.017
|
2022-09-24T23:46:27.010
|
2022-09-24T23:46:27.010
|
43000
|
116375
|
[
"machine-learning",
"class-imbalance"
] |
'Imbalance problem' is a mix-up of several loosely related issues, mainly these two:
- It's hard to generalize when there are too few samples of a certain class, especially with lots of dimensions. However, methods like resampling won't help much in this case: in an oversimplified way, that means trying to combat model variance by shifting its bias. There's little you can do aside from gathering more data unless, perhaps, you are only interested in certain class-specific metrics of those few rare classes. Your class distribution does not seem that bad - your model will generalize alright with enough samples regardless of the class ratio.
- Logistic functions underestimate the probability of the rare cases. That's basically just bias; resampling / reweighing / threshold selection have mostly the same effect. The latter is the easiest as it does not require retraining. However, this is, strictly speaking, a decision-making step, which should not be mixed with the evaluation stage (there could be more than one decision threshold for different actions, etc.).
So, the 'ideal' way would be: don't resample at all, evaluate using 'proper' (class independent and threshold independent) metrics, such as logloss, and thus work directly with scores/probabilities (calibrate if needed) up until the decision stage.
In DS context however, you often still need 'intuitive' metrics (based upon confusion matrix), which are threshold sensitive and often class specific. Even then, anything more complex than selecting a threshold upon the precision/recall curve is usually excessive.
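To make the 'proper metrics' point concrete, here is a minimal sketch (labels and predicted probabilities are randomly generated stand-ins) of checking the class proportions and evaluating with log loss instead of a threshold-based metric:
```
import numpy as np
from sklearn.metrics import log_loss

proportions = [.1853, .0818, .2622, .1646, .0862, .0958, .0588, .0653]
y_true = np.random.choice(8, size=10000, p=proportions)

class_freq = np.bincount(y_true, minlength=8) / len(y_true)
print("class proportions:", np.round(class_freq, 3))
print("imbalance ratio (majority/minority):", class_freq.max() / class_freq.min())

y_prob = np.random.dirichlet(np.ones(8), size=10000)   # stand-in for model probabilities
print("log loss:", log_loss(y_true, y_prob, labels=list(range(8))))
```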
|
Handling data imbalance and class number for classification
|
People talk a lot about data imbalance, but in general I think you don't need to worry about it unless your data is really imbalanced (like <1% of one label). 50/200 is fine. If you build a logistic regression model on that dataset, the model will be biased towards the majority class - but if you gave me no information about an input to classify, the prior probability is that the new input is a member of the majority class anyway.
The question you want to be able to answer is whether you are differentiating classes fine - so if you do have a minority class, do NOT use 'accuracy' as a metric. Use something like area under the ROC curve (commonly called AUC) instead.
If your data is really super imbalanced, you can either over-sample the minority class or use something called 'SMOTE', for "Synthetic Minority Over-Sampling Technique", which is a more advanced version of the same thing. Some algorithms also let you set higher weights on minority classes, which essentially incentivizes the model to pay attention to the minority class by making minority-class errors cost more.
To learn to differentiate between lots of classes, I think (a) you will need to have a ton of examples to learn from and (b) a model that's expressive enough to capture class differences (like deep neural network, or boosted decision tree), and (c) use softmax output. If those still don't work, you might try a 'model-free' approach like K-nearest-neighbors, which matches each input to the most similar labeled data. For kNN to work however, you need to have a very reasonable distance metric.
|
114649
|
1
|
114759
| null |
0
|
267
|
I am referring to the documentation [here](https://alkaline-ml.com/pmdarima/modules/generated/pmdarima.arima.CHTest.html#pmdarima.arima.CHTest), but it does not give many examples on how to actually perform the test. I have a pandas dataframe with two columns:
- Column 1 is first day of every week,
- Column 2 is demand, and this data goes back over 150 weeks.
How would I perform a CH test to see if there is any seasonality in my data?
|
How does one perform a Canova-Hansen test in Python?
|
CC BY-SA 4.0
| null |
2022-09-23T21:35:10.683
|
2022-09-28T15:44:04.013
| null | null |
140442
|
[
"time-series",
"pandas",
"forecasting",
"forecast"
] |
In general this could be achieved with the snippet below, where you have to replace x with your observations.
However, I'm not sure whether the Canova-Hansen test is suitable for weekly observations (m=52), since these sorts of tests are usually designed with monthly or quarterly time series in mind.
It might therefore be better to aggregate your observations to a monthly level (m=12).
```
from pmdarima.arima import CHTest
import numpy as np
x=np.random.normal(size=1000)
CHTest(m=52).estimate_seasonal_differencing_term(x)
```
|
How to automate ANOVA in Python
|
I am not sure ANOVA is the best and easiest way to find the correlation between these categorical features and your target. You may see [this great post](https://medium.com/@outside2SDs/an-overview-of-correlation-measures-between-categorical-and-continuous-variables-4c7f85610365) where they propose many other methods along with ANOVA. If you persist in using the ANOVA test or the Kruskal-Wallis H test, you need to know how it works to give you that notion of correlation (variation of variance among groups of categorical values). It is nicely explained in that post:
> ANOVA estimates the variance of the continuous variable that can be explained through the categorical variable. One need to group the continuous variable using the categorical variable, measure the variance in each group and comparing it to the overall variance of the continuous variable. If the variance after grouping falls down significantly, it means that the categorical variable can explain most of the variance of the continuous variable and so the two variables likely have a strong association. If the variables have no correlation, then the variance in the groups is expected to be similar to the original variance.
Once you understand how it works, implementing and automating it is not difficult. In fact, scipy and statsmodels have ANOVA. Check this [post](https://pythonfordatascience.org/anova-python/) out, where they demonstrate in detail how to perform an ANOVA test on an actual dataset and estimate the correlation between a categorical variable and a continuous target. It is just a matter of putting these pieces together and changing them a bit to make it work for your own dataframe.
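For reference, here is a minimal one-way ANOVA sketch with scipy (the dataframe and column names are made up); it tests whether the continuous target varies across the groups of one categorical feature:
```
import pandas as pd
from scipy.stats import f_oneway

df = pd.DataFrame({
    "occupation": ["teacher", "nurse", "teacher", "driver", "nurse", "driver"] * 50,
    "target":     [1.2, 3.4, 1.0, 2.2, 3.1, 2.5] * 50,
})

groups = [group["target"].values for _, group in df.groupby("occupation")]
stat, p_value = f_oneway(*groups)
print(f"F={stat:.2f}, p={p_value:.3g}")
```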
|
114670
|
1
|
114672
| null |
0
|
62
|
I have a classification model (BERT) that classifies sentences as either questions or normal sentences. But whenever a sentence contains the word "how", the model chooses the "question" class.
How can I solve this issue? (I have a very big dataset.)
|
One word changes everything NLP
|
CC BY-SA 4.0
| null |
2022-09-24T20:25:10.130
|
2022-09-25T09:50:05.723
| null | null |
133184
|
[
"deep-learning",
"nlp",
"transformer",
"bert"
] |
Very likely, the majority of the sentences which contain "how" in your training data are labelled as questions. It's probably a problem of representativity of the training set, because otherwise the problem wouldn't be this specific. But note that your training data likely contains other issues as well; possibly there are errors in the labels.
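One way to verify this is to look at the label distribution of the training sentences that contain "how"; a minimal sketch (the file name and column names are assumptions):
```
import pandas as pd

train_df = pd.read_csv("train.csv")   # hypothetical file with 'text' and 'label' columns
has_how = train_df["text"].str.contains(r"\bhow\b", case=False, regex=True)
print(train_df.loc[has_how, "label"].value_counts(normalize=True))
```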
|
NLP : variations of a text without modifying it's meaning
|
Text summarization can be divided into two categories 1. Extractive Summarization and 2. Abstractive Summarization
- Extractive Summarization: These methods rely on extracting several parts, such as phrases and sentences, from a piece of text and stack them together to create a summary. Therefore, identifying the right sentences for summarization is of utmost importance in an extractive method.
- Abstractive Summarization: Abstractive methods select words based on semantic understanding, even those words did not appear in the source documents. It aims at producing important material in a new way. They interpret and examine the text using advanced natural language techniques to generate a new shorter text that conveys the most critical information from the original text.
What you are looking for is abstractive summarisation. Since you are working in R there is a nice library called [lexRank](https://cran.r-project.org/web/packages/lexRankr/lexRankr.pdf) taking an example from [here](https://adamspannbauer.github.io/2017/12/17/summarizing-web-articles-with-r/) would look something like
```
#load needed packages
library(xml2)
library(rvest)
library(lexRankr)
#url to scrape
monsanto_url = "https://www.theguardian.com/environment/2017/sep/28/monsanto-banned-from-european-parliament"
#read page html
page = xml2::read_html(monsanto_url)
#extract text from page html using selector
page_text = rvest::html_text(rvest::html_nodes(page, ".js-article__body p"))
#perform lexrank for top 3 sentences
top_3 = lexRankr::lexRank(page_text,
#only 1 article; repeat same docid for all of input vector
docId = rep(1, length(page_text)),
#return 3 sentences to mimick /u/autotldr's output
n = 3,
continuous = TRUE)
#reorder the top 3 sentences to be in order of appearance in article
order_of_appearance = order(as.integer(gsub("_","",top_3$sentenceId)))
#extract sentences in order of appearance
ordered_top_3 = top_3[order_of_appearance, "sentence"]
> ordered_top_3
[1] "Monsanto lobbyists have been banned from entering the European parliament after the multinational refused to attend a parliamentary hearing into allegations of regulatory interference."
[2] "Monsanto officials will now be unable to meet MEPs, attend committee meetings or use digital resources on parliament premises in Brussels or Strasbourg."
[3] "A Monsanto letter to MEPs seen by the Guardian said that the European parliament was not “an appropriate forum” for discussion on the issues involved."
```
EDIT: How I like to think about abstractive summarisation:
Using an encoder-decoder architecture (extended with transformers) for seq2seq problems, you essentially get an embedding of your text, where the same sentences can be embedded differently in different contexts, giving the same/similar output.
|
114694
|
1
|
114695
| null |
2
|
225
|
The question is pretty simple.
In stacking, the predictions of level 0 models are being used as features to train a level 1 model.
However, the predictions of what data? Intuitively it makes more sense to predict the test set and use those results to train the final classifier.
I am not sure whether this results in data leakage. I don't think it does (since the final classifier only has the information that the initial ones do, i.e. only from the train data; it doesn't know whether those predictions are good or not).
Is this reasoning correct?
|
Stacking: Use predictions of train or test to create features for level 1 classifier
|
CC BY-SA 4.0
| null |
2022-09-26T10:25:07.817
|
2022-09-27T10:48:07.203
| null | null |
79520
|
[
"machine-learning",
"classification",
"data-leakage",
"stacking"
] |
I'm not sure if there's any standard about this, but I usually proceed by splitting the training set into two parts A and B:
- A is used as training set for level 0 models
- B is used as test set for the level 0 models and as training set for the level 1 model.
As usual, the final test set made of fresh instances is used to evaluate the final model, made of stacking the level 0 models and level 1 model.
[added] You're right that there would be data leakage if one were using the same data for training and testing the level 0 models. This would be especially bad, because it means that the level 1 model would expect 'very good' level 0 predictions (since they have been seen during training), and obviously the 'production' level 0 predictions would not be as good and therefore the level 1 model would be completely overfit.
One can also use nested cross-validation to the same effect.
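As a minimal sketch of this A/B split idea with scikit-learn (the models, split sizes and synthetic data below are arbitrary assumptions for illustration, not part of the original setup):
```
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
# final test set of fresh instances, kept aside for the stacked model
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# split the remaining training data into part A (level 0) and part B (level 1)
X_a, X_b, y_a, y_b = train_test_split(X_train, y_train, test_size=0.5, random_state=0)

# level 0 models are trained on A only
level0 = [RandomForestClassifier(random_state=0), SVC(probability=True, random_state=0)]
for m in level0:
    m.fit(X_a, y_a)

# their predictions on B (unseen by them) become the features of the level 1 model
meta_b = np.column_stack([m.predict_proba(X_b)[:, 1] for m in level0])
level1 = LogisticRegression().fit(meta_b, y_b)

# at test time the same two-stage pipeline is applied to the fresh test set
meta_test = np.column_stack([m.predict_proba(X_test)[:, 1] for m in level0])
print("stacked accuracy:", level1.score(meta_test, y_test))
```
This keeps the level 1 model from ever seeing "training-set quality" level 0 predictions, which is exactly the leakage scenario described above.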
|
Building Stacking machine learning model using three base classifiers
|
No; generally speaking, even minor changes should affect your performance. Changing your meta-model should normally have a visible impact on your model's performance.
Two things you can try:
- Check for any problems in your code.
- Maybe your test set size is really small. For example if you have 5 test samples, it isn't difficult for all models to get 4/5 (i.e. 80% accuracy). As your test size increases so should the variance of your models' performance.
|
114704
|
1
|
114829
| null |
1
|
86
|
It is well known that Random Projection (RP) is tightly linked to Locality Sensitive Hashing (LSH). My goal is to cluster a large number of points lying in a d-dimensional Euclidean space, where $d$ is very large.
---
Questions: Does it make sense to cluster the points via LSH after having reduced the dimensionality of their input space by using first RP? Why yes/no? Is there any redundancy in the combined use of RP as dimensionality reduction method before LSH as clustering method?
|
Clustering by using Locality sensitive hashing *after* Random projection
|
CC BY-SA 4.0
| null |
2022-09-26T18:03:19.657
|
2022-09-30T23:18:02.030
| null | null |
140844
|
[
"machine-learning",
"clustering",
"dimensionality-reduction",
"search",
"randomized-algorithms"
] |
It makes sense to reduce the dimensionality with Random Projection (RP) and then cluster with Locality Sensitive Hashing (LSH). One of the primary ways of improving LSH is running it multiple times and taking the consensus clusters. That process would be much faster on fewer dimensions.
As far as redundancy - both methods rely on randomness. There is a small chance that the sequential randomness could yield non-robust results. If possible, run the process multiple times to find consistent results.
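A rough sketch of the two-step pipeline in Python (the dimensions, the number of hash bits and the random-hyperplane hashing scheme below are illustrative assumptions, not a prescription):
```
import numpy as np
from collections import defaultdict
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2000))            # points in a high-dimensional space

# Step 1: Random Projection to reduce the dimensionality
rp = GaussianRandomProjection(n_components=64, random_state=0)
X_low = rp.fit_transform(X)

# Step 2: random-hyperplane LSH on the reduced data; the sign pattern over k
# random directions gives a k-bit bucket key, and similar points tend to collide
k = 16
hyperplanes = rng.normal(size=(X_low.shape[1], k))
bits = (X_low @ hyperplanes) > 0
buckets = defaultdict(list)
for i, row in enumerate(bits):
    buckets[row.tobytes()].append(i)

# each bucket is a candidate cluster; repeating step 2 with fresh hyperplanes
# and keeping the consensus groups is the robustness trick mentioned above
print("number of buckets:", len(buckets))
```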
|
Document embedding vs locality sensitive hashing for document clustering
|
Locality sensitive hashing (LSH) is a search technique. With it, similar documents get the same hash with higher probability than dissimilar documents do. LSH is designed to allow you to build lookup tables to efficiently search large data sets for items similar to a given item. It is also a probabilistic method in that it can generate false positives and false negatives. While there are ways to train LSH, most LSH is untrained. That's because LSH has been studied more in the search setting than the machine learning setting.
Embeddings are a machine learning technique to capture semantic information for use in some downstream task, such as clustering or classification. Typically semantically similar items get similar (but not the same) embeddings. Embeddings are trained from data. There are many unsupervised algorithms (word2vec, glove) and there are supervised methods too (auto-encoders, hidden layer output from deep models).
Embeddings can be used to map items into a space where near neighbor search would find semantically similar items. However, on large data sets you would still need to index the data to search efficiently, which raises the possibility of doing LSH on embeddings. That way you get the benefit of a trained model that learns the distribution of your data set and the benefit of a fast lookup table.
|
114708
|
1
|
114711
| null |
1
|
484
|
I have read [this article](https://towardsdatascience.com/how-to-apply-k-means-clustering-to-time-series-data-28d04a8f7da3) on towardsdatascience and they teach how to cluster time series using the DTW distance and the TimeSeriesKMeans from the tslearn.clustering library.
I also read the [official documentation](https://tslearn.readthedocs.io/en/stable/gen_modules/clustering/tslearn.clustering.TimeSeriesKMeans.html) and I found a note.
>
Notes
If metric is set to “euclidean”, the algorithm expects a dataset of
equal-sized time series.
This suggests to me that for other metrics (like DTW, for example) the method works with time series of different sizes.
I'm currently working on time-series data and I want to check if I can get some interesting information about my data using this method.
This is how I constructed my curves. I have a dataframe called "relevant_figures" that contains the relevant information in order to construct the curves. Then I proceed as follows:
```
X = []
for _,row in relevant_figures.iterrows():
input_time = row['InputTime']
output_time = row['OutputTime']
ts = weights_df.loc[input_time : output_time]['weight'].copy()
X.append(ts)
```
When I try the method
```
TimeSeriesKMeans(n_clusters=3, metric="dtw").fit(X)
```
It throws a ValueError
>
Name: peso, Length: 120, dtype: float64]. Reshape your data either
using array.reshape(-1, 1) if your data has a single feature or
array.reshape(1, -1) if it contains a single sample.
However, I can't reshape in order to construct an array because every time series has a different length, so reshaping does not work. What should I do? Thanks in advance.
|
clustering time series with different sized time series
|
CC BY-SA 4.0
| null |
2022-09-27T01:18:50.747
|
2022-09-27T04:33:31.017
|
2022-09-27T01:21:06.577
|
131147
|
131147
|
[
"python",
"time-series",
"clustering",
"dynamic-time-warping"
] |
Try using the [to_time_series_dataset](https://tslearn.readthedocs.io/en/stable/gen_modules/utils/tslearn.utils.to_time_series_dataset.html) function in the tslearn.utils module. This takes a list of lists as input and returns the data formatted as a numpy array, e.g.:
```
from tslearn.utils import to_time_series_dataset
X = to_time_series_dataset([[1, 2, 3, 4], [1, 2, 3], [2, 5, 6, 7, 8, 9]])
```
It looks like it pads the shorter time series with `nan`'s to fit them into the array.
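A possible continuation, combining this with the clustering call from the question (untested on your data; the `dtw` metric is the one documented to cope with unequal lengths):
```
from tslearn.utils import to_time_series_dataset
from tslearn.clustering import TimeSeriesKMeans

# X is the list of variable-length series built in the question
X_formatted = to_time_series_dataset(X)

# "dtw" (unlike "euclidean") works with the nan-padded, unequal-sized rows
model = TimeSeriesKMeans(n_clusters=3, metric="dtw", random_state=1)
labels = model.fit_predict(X_formatted)
```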
|
clustering multivariate time-series datasets
|
For most clustering approaches, first you need to choose a similarity measure. Some common default ones for raw time series are Euclidean distance and [Dynamic Time Warping](https://en.wikipedia.org/wiki/Dynamic_time_warping) (DTW).
When you have computed the similarity measure for every pair of time series, then you can apply [hierarchical clustering](https://en.wikipedia.org/wiki/Hierarchical_clustering), [k-medoids](https://en.wikipedia.org/wiki/K-medoids) or any other clustering algorithm that is appropriate for time series (not [k-means](https://en.wikipedia.org/wiki/K-means_clustering)!, see [this](https://stats.stackexchange.com/a/131337/40048)).
Update: if the number of time series (along with their size) makes it computationally not acceptable to compute pairwise distances, then one option can be to extract features from each time series, and then use such features as proxies for the time series in the clustering process. Some examples of such features are maximum value, number of peaks, mean value. There are libraries like [tsfresh](https://github.com/blue-yonder/tsfresh) in Python that are meant to easily extract such kind of features from time series. With these features, then any clustering approach like [k-means](https://en.wikipedia.org/wiki/K-means_clustering) can be applied.
|
114764
|
1
|
114821
| null |
2
|
139
|
[EDIT] The question has now been solved; I updated the calculations below.
I've been trying to understand the math behind shap values. So far I understand all the concepts in SHAP but could not get to the shap values that are in this example (coming from the last example of [https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/tree_based_models/Understanding%20Tree%20SHAP%20for%20Simple%20Models.html](https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/tree_based_models/Understanding%20Tree%20SHAP%20for%20Simple%20Models.html) ).
What I've got so far is:
Assume you have to explain an observation where x0=0, x1=0, x2=0, x3=0, so x is a vector of zeros for this case. Since we know x2 and x3 are NOT in the tree, we won't do any calculation with them (if it is a GBM tree-based model, simply check whether the variable appears in any of the trees of the ensemble). To prove this is correct, just calculate shap without these values or with a vector of 100 zeros: the shap importances greater than 0 won't change (x0 and x1), and the local accuracy property will still hold (local accuracy means "sum of contributions + expected value = predicted output").
## Calculate all possible expected values in the tree (remember that the predicted values with a set of features for the tree is the expected value conditioned on the features).
## STEP 1 EXPECTED VALUES: Expected values (not conditioned and conditioned)
E(y) = 0.75
E(y|x0=0) = 0
E(y|x0=0,x1=0) = 0
E(y|x1=0)=0 * 0.5+ 0.5 * 1.0=0.5
## STEP 2: Now calculate contributions
contribution_adding_x0_to_null_model= E(y|x0=0) - E(y) = 0 - 0.75 = -0.75
contribution_having_x1_to_null_model = E(y|x1=0) - E(y) =0.5 - 0.75 = -0.25
contribution_adding_x0_to_x1= E(y|x0=0,x1=0) - E(y|x1=0) = 0 - 0.5 = -0.5
contribution_adding_x1_to_x0 = E(y|x0=0,x1=0) - E(y|x0=0) = 0 - 0 = 0
### This would be the average over all possible combinations (supersets) of having this feature and not having it?
**UPDATE 2**: correct calculation
shap_x0 = mean(contribution_adding_x0_to_null_model, contribution_adding_x0_to_x1) = mean(-0.75, -0.5) = (-0.75-0.5)/2 = -0.625
shap_x1 = mean( contribution_adding_x1_to_null_model, contribution_adding_x1_to_x0 ) = mean(-0.25,0)=-0.125
Proof this is correct: see the shap package output below and also (local accuracy):
Prediction when all x are zeros (see the tree leaf in the left)
prediction = E(y|x0=0,x1=0,x2=0,x3=0) = 0
shap_x1 + shap_x0 + expected_value = -0.125 - 0.625 + 0.75 = 0 = prediction
What am I forgetting about? [DONE]
Code for generating the tree:
```
import numpy as np
import sklearn.tree
import graphviz

# build data
N = 100
M = 4
X = np.zeros((N,M))
X.shape
y = np.zeros(N)
X[:N//2, 0] = 1
X[:1 * N//4, 1] = 1
X[N//2:3 * N//4, 1] = 1
y[:1 * N//4] = 1
y[:N//2] += 1
# fit model
and_fb_model = sklearn.tree.DecisionTreeRegressor(max_depth=2)
and_fb_model.fit(X, y)
# draw model
dot_data = sklearn.tree.export_graphviz(and_fb_model, out_file=None, filled=True, rounded=True, special_characters=True)
graph = graphviz.Source(dot_data)
graph
```
[](https://i.stack.imgur.com/PKZ0E.png)
Explain the model for all zeros or all ones:
```
xs = [np.ones(M), np.zeros(M)]
for x in xs:
print()
print(" x =", x)
print("shap_values =", shap.TreeExplainer(and_fb_model).shap_values(x))
```
```
>>> Output of this code is:
>>> x = [1. 1. 1. 1.]
>>> shap_values = [0.875 0.375 0. 0. ]
--- THIS ONE IS THE CASE OF THE EXAMPLE I GAVE, x is a vector of zeros: x=(x0,x1,x2,x3) =(0,0,0,0)
>>> x = [0. 0. 0. 0.]
>>> shap_values = [-0.625 -0.125 0. 0. ]
```
Thanks in advance!
# UPDATE 3: Additional if anyone is interested: SOLUTION WHEN vector x = (1,1,1,1)
## STEP1 EXPECTATIONS
E(y) = 0.75
E(y|x0=1) = 1.5
E(y|x0=1,x1=1) = 2
E(y|x1=1) = 0.5 * 0 + 0.5 * 2 = 1.0
## STEP2 Contributions
contribution_adding_x0_to_null_model = E(y|x0=1) - E(y) = 1.5 - 0.75 = 0.75
contribution_having_x1_to_null_model = E(y|x1=1) - E(y) = 1.0 - 0.75 = 0.25
contribution_adding_x0_to_x1 = E(y|x0=1,x1=1) - E(y|x1=1) = 2 - 1.0 = 1.0
contribution_adding_x1_to_x0 = E(y|x0=1,x1=1) - E(y|x0=1) = 2-1.5=0.5
shap_x0 = mean(contribution_adding_x0_to_null_model,contribution_adding_x0_to_x1) = mean(0.75,1.0)=0.875
shap_x1 = mean(contribution_having_x1_to_null_model,contribution_adding_x1_to_x0) = mean(0.25,0.5)=0.375
## STEP 3: Check the local accuracy property:
When x=1 , the prediction is:
prediction = 2.0
The sum of shap values is:
shap_x0 +shap_x1 + expected_value = 0.875+0.375+0.75=2.0
It definitely holds.
|
Can you do the math for this simple treeSHAP example (decisionTree)?
|
CC BY-SA 4.0
| null |
2022-09-28T17:25:41.847
|
2022-10-03T18:35:04.743
|
2022-10-03T18:35:04.743
|
110114
|
110114
|
[
"machine-learning",
"python",
"shap"
] |
You seem to have entirely the right idea; you just miscalculated the second and fourth contributions you listed. Below are the corrected calculations (the second and fourth lines are the ones that changed):
contribution_having_x0 = E(y|x0=0) - E(y) = 0 - 0.75 = -0.75
contribution_having_x1 = E(y|x1=0) - E(y) = 0.5 - 0.75 = -0.25
contribution_adding_x0_to_x1 = E(y|x0=0,x1=0) - E(y|x1=0) = 0 - 0.5 = -0.5
contribution_adding_x1_to_x0 = E(y|x0=0,x1=0) - E(y|x0=0) = 0 - 0 = 0
After this, averaging the two contributions when adding a feature gives the same shap values as reported by the package. (The mistake in the formula for the second equation gives rise to your mistaken simplification of it.)
|
How to interpret a decision tree correctly?
|
Let me evaluate each of your observations one by one, so that it would be more clear:
>
The dependent variable of this decision tree is Credit Rating which
has two classes, Bad or Good. The root of this tree contains all 2464
observations in this dataset.
If `Good, Bad` is what you mean by credit rating, then Yes. And you are right with the conclusion that all the 2464 observations are contained in the root of the tree.
>
The most influential attribute to determine how to classify a good or
bad credit rating is the Income Level attribute.
Debatable. It depends on how you consider something to be influential. Some might argue that the number of cards is the most influential, and some might agree with your point. So, you are both right and wrong here.
>
The majority of the people (454 out of 553) in our sample that had a
less than low income also had a bad credit rating. If I was to launch
a premium credit card without a limit I should ignore these people.
Yes, but it would also be better if you consider the probability of getting a bad credit from these people. But, even that would turn out to be NO for this class, which makes your observation correct again.
>
If I were to use this decision tree for predictions to classify new
observations, are the largest number of class in a leaf used as the
prediction? E.g. Observation x has medium income, 7 credit cards and
34 years old. Would the predicted classification for credit rating =
"Good"
Depends on the probability. So, [calculate the probability](https://social.msdn.microsoft.com/Forums/sqlserver/en-US/97c9ce39-024f-450f-8b21-a2d2961d8be7/decision-trees-how-is-prediction-probability-calculated?forum=sqldatamining) from the leaves and then make a decision depending on that. Or much simpler, use a library like the Sklearn's decision tree classifier to do that for you.
>
Another new observation could be Observation Y, which has less than
low income so their credit rating = "Bad"
Again, same as the explanation above.
>
Is this the correct way to interpret a decision tree or have I got
this completely wrong?
Yes, this is a correct way of interpreting decision trees. You might be tempted to sway when it comes to selection of influential variables, but that is dependant on a lot of factors, including the problem statement, construction of the tree, analyst's judgement, etc.
|
114769
|
1
|
114784
| null |
1
|
62
|
In the docs: [https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)
it is explained that `max_features` is ordered by term frequency across the corpus. Why not use the idf?
|
Why is max_features ordered by term frequency instead of inverse document frequency
|
CC BY-SA 4.0
| null |
2022-09-28T18:06:25.887
|
2022-09-29T10:34:05.757
| null | null |
139935
|
[
"nlp",
"scikit-learn",
"tfidf"
] |
The reason is probably that using the top IDF features would mean selecting the least frequent words, in particular the ones which appear only once (and there are typically very many of these). These rare words should usually be removed because they often occur by chance and are unlikely to appear again, so they are bad features likely to cause overfitting.
In other words, it's always better for the features to be frequent so that their statistical relations with other variables (especially the target) can be estimated reliably by the algorithm. Picking the top IDF features would do the opposite: take the least reliable statistical information into account.
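A small illustration of both behaviours with scikit-learn (the toy sentences are invented for this example):
```
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat",
        "the dog sat on the log",
        "a rare zygomorphic word appears once"]

# max_features keeps the terms with the highest corpus frequency ...
vec = TfidfVectorizer(max_features=5)
vec.fit(docs)
print(sorted(vec.vocabulary_))       # frequent terms such as 'the', 'sat', 'on'

# ... while rare (high-IDF) terms can be dropped explicitly with min_df
vec_min = TfidfVectorizer(min_df=2)
vec_min.fit(docs)
print(sorted(vec_min.vocabulary_))   # only terms appearing in at least 2 documents
```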
|
Weights for keywords in a set of documents using Term Frequency and Inverse Document Frequency
|
TF - IDF stands for `term frequency–inverse document frequency`
TF counts the `frequency of a term / total #terms` in a given document. For each term in a document, this value changes.
IDF is the log of the ratio total #documents / #documents in which the term appears. This value is constant for a given unique term. The greater the IDF value for a term, the higher its significance.
Example:
>
Document 1: This is a sample example.
Document 2: This is another example.
Lets calculate for term = "is":
TF(is, Document 1) = 1/5
TF(is, Document 2) = 1/4
IDF(is) = log(2/2) = 0
TFIDF = TF*IDF
TFIDF(is, document 1) = (1/5)*0 = 0
TFIDF(is, document 2) = (1/4)*0 = 0
It means that the term "is" is not a significant term in the list of documents(corpus).
Lets consider term = "another"
TF(another, document 1) = 0/5 = 0
TF(another, document 2) = 1/4
IDF(another) = log(2/1) = 0.301
TFIDF(another, document 1) = 0*0.301 = 0
TFIDF(another, document 2) = (1/4)*0.301
You can observe from both examples that TF varies per document while IDF is constant.
You can convert your entire 270 documents into term document matrix.
Demo in python:
```
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
d = pd.Series(['this is a sample example','this is another example'])
df = pd.DataFrame(d)
tfidf_vectorizer = TfidfVectorizer(analyzer='word', min_df=0)
# if you want, say only top 2 features(terms)
# tfidf_vectorizer = TfidfVectorizer(analyzer='word', min_df=0, max_features=2, max_df = 3)
# Terms with given below:
# occurred in too many documents (max_df, tfidf score = 3)
# occurred in too few documents (min_df, tfidf score = 0)
# cut off by feature selection (max_features, tfidf score = 2).
tfidf = tfidf_vectorizer.fit_transform(df[0])
print(tfidf_vectorizer.vocabulary_)
# output: {u'this': 4, u'sample': 3, u'is': 2, u'example': 1, u'another': 0}
print(tfidf_vectorizer.idf_)
# output(constant): [ 1.40546511 1. 1. 1.40546511 1. ]
print(tfidf)
# output:
#(0, 1) 0.448320873199 Document 1, term = example
#(0, 3) 0.630099344518 Document 1, term = sample
#(0, 2) 0.448320873199 Document 1, term = is
#(0, 4) 0.448320873199 Document 1, term = this
#(1, 0) 0.630099344518 Document 2, term = another
#(1, 1) 0.448320873199 Document 2, term = example
#(1, 2) 0.448320873199 Document 2, term = is
#(1, 4) 0.448320873199 Document 2, term = this
```
Source:
[https://en.wikipedia.org/wiki/Tf%E2%80%93idf](https://en.wikipedia.org/wiki/Tf%E2%80%93idf)
[http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html](http://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html)
|
114770
|
1
|
114773
| null |
3
|
282
|
I have the following code:
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
import pandas as pd
sentences = ["I have the ability", "I have the weakness", "I have the capability", "I have the power"]
tfidf = TfidfVectorizer(max_features=300)
tfidf.fit(sentences)
X = tfidf.transform(sentences)
k = 2
model = KMeans(n_clusters=k, random_state=1)
model.fit(X)
print(pd.DataFrame(columns=["sentence"], data=sentences).join(pd.DataFrame(columns=["cluster"], data=model.labels_)))
```
The output looks like this:
|index |sentence |cluster |
|-----|--------|-------|
|0 |I have the ability |0 |
|1 |I have the weakness |0 |
|2 |I have the capability |0 |
|3 |I have the power |1 |
As you can see "I have the ability", "I have the weakness", "I have the capability" were grouped in the same cluster (cluster 0) and "I have the power" was grouped into a separate cluster. I think they were grouped randomly and it can't tell which sentences actually mean the same thing. I want a way to be able to group "I have the ability", "I have the capability", and "I have the power" together by specifying that ability, capability and power are synonyms. So basically mapping all words to their synonyms. Is there an existing package for this?
|
Is there a way to map words to their synonyms in tfidf?
|
CC BY-SA 4.0
| null |
2022-09-28T18:54:30.300
|
2022-09-29T20:28:15.773
|
2022-09-28T18:57:51.567
|
139935
|
139935
|
[
"nlp",
"scikit-learn",
"nltk",
"tfidf",
"spacy"
] |
TfIdf vectors require much more data than that to be useful, but also don't give you the ability to identify synonyms. To do that with vectors and the amount of data you're working with, you'll need a pre-trained vector vocabulary. GloVe vectors are a popular choice to start with, but there will be others you can find and play with that may work better for your explicit purpose.
Note that if you don't limit yourself to vector-based approaches, there are many classical approaches to this problem. [WordNet](https://wordnet.princeton.edu/) would probably be the first thing I reach for here.
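As a rough sketch of the WordNet route (a crude heuristic, not a production-ready synonym resolver; it requires nltk with the 'wordnet' corpus downloaded):
```
from nltk.corpus import wordnet as wn   # run nltk.download('wordnet') beforehand

def share_synset(a, b):
    """Crude synonym test: do the two words appear together in any WordNet synset?"""
    synsets_a = set(wn.synsets(a))
    return any(s in synsets_a for s in wn.synsets(b))

print(share_synset("ability", "power"))      # True with the standard WordNet data
print(share_synset("ability", "weakness"))   # False
```
Words that pass such a test could be mapped to a single representative token before running TfidfVectorizer, so that "ability" and "power" end up in the same column.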
|
Word2Vec and Tf-idf how to combine them
|
- Word2Vec algorithms (Skip Gram and CBOW) treat each word equally,
because their goal to compute word embeddings. The distinction
becomes important when one needs to work with sentences or document
embeddings; not all words equally represent the meaning of a
particular sentence. And here different weighting strategies are
applied, TF-IDF is one of those successful strategies.
- At times, it does improve quality of inference, so combination is
worth a shot.
- Glove is a Stanford baby, which has often proved to perform better. Can
read more about Glove against Word2Vec here, among many other
resources available online.
|
114774
|
1
|
114776
| null |
0
|
531
|
[](https://i.stack.imgur.com/fAdWC.png)
```
from sklearn.ensemble import RandomForestClassifier
forest = RandomForestClassifier(n_estimators = 500, max_depth = None, min_samples_split=2, min_samples_leaf =1,
                                bootstrap = True, random_state=0)
forest = forest.fit(X_train, y_train)
print(forest.score(X_test, y_test))
```
|
When I run a Random Forest classification model, every row of my training data set shows this error (ValueError: could not convert string to float)
|
CC BY-SA 4.0
| null |
2022-09-28T23:45:07.597
|
2022-09-29T00:27:31.763
|
2022-09-28T23:49:31.527
|
140857
|
140857
|
[
"python",
"regression",
"pandas",
"predictive-modeling",
"random-forest"
] |
The error message is not lying to you :) It cannot convert the string "one favourite christmas gifts year love" to a float.
`RandomForestClassifier` (as most scikit-learn models) requires its inputs to be numeric. It does not know how to handle strings of text. Your training data has at least one column that contains string values. When the model tries to convert the training data to numeric values, the error is thrown when it encounters a string.
You need to either encode the string values as numbers (e.g. with a text embedding model like Word2Vec) or drop the columns containing strings prior to training.
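A short sketch of both options (the example frame and column names are hypothetical):
```
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import CountVectorizer

# hypothetical frame: 'review_text' is the string column that triggers the error
df = pd.DataFrame({
    "review_text": ["one favourite christmas gifts year love", "arrived broken sadly"],
    "price": [19.99, 5.50],
    "label": [1, 0],
})

# Option 1: keep only the numeric feature columns
X_numeric = df.drop(columns=["label"]).select_dtypes(include="number")

# Option 2: additionally turn the text into numeric features (bag-of-words here)
text_features = CountVectorizer().fit_transform(df["review_text"]).toarray()
X_combined = np.hstack([X_numeric.to_numpy(), text_features])

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_combined, df["label"])
```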
|
Random Forest - ValueError: Input contains NaN, infinity or a value too large for dtype('float32')
|
You are using np.nan_to_num(x_train), which would convert the null values to zeroes and also take care of infinities, but you are not assigning it back.
Can you try x_train = np.nan_to_num(x_train), and similarly for y_train as well?
I just test this with one example:
```
a = np.array([[1,np.nan,3],[np.nan, 0, np.nan]])
a=np.insert(a, a.shape[0],[[1, np.nan, 1]], axis=0)
```
when I print a, what I see is:
```
array([[ 1., nan, 3.],
[nan, 0., nan],
[ 1., nan, 1.]])
```
when I do this->
```
np.nan_to_num(a)
```
I get
```
array([[1., 0., 3.],
[0., 0., 0.],
[1., 0., 1.]])
```
But when I print a again, the nulls are still there. Hence, do the assignment; hope that solves your problem.
|
114805
|
1
|
114809
| null |
1
|
52
|
I have a dataset consisting of numerical features and categorical features. I want to train a model on the training set using SVM. SVM is a quadratic optimization algorithm. I would like to know how SVM works on categorical data. Can anyone share any references, links to research papers, or web links describing the process?
I would also like to know the theory behind handling categorical data with SVM.
|
working principle of Support Vector Machine
|
CC BY-SA 4.0
| null |
2022-09-30T05:19:51.230
|
2022-09-30T10:56:30.837
|
2022-09-30T10:56:30.837
|
43000
|
63745
|
[
"classification",
"svm",
"categorical-data"
] |
To understand an algorithm very well, I like to study the original paper thoroughly, to understand the original mindset in creating it and the mathematical logic.
[http://image.diku.dk/imagecanon/material/cortes_vapnik95.pdf](http://image.diku.dk/imagecanon/material/cortes_vapnik95.pdf)
In parallel, I play with interactive demonstrators to check different use cases and test the limits.
For instance:
[https://jgreitemann.github.io/svm-demo](https://jgreitemann.github.io/svm-demo)
[https://cs.stanford.edu/~karpathy/svmjs/demo/](https://cs.stanford.edu/%7Ekarpathy/svmjs/demo/)
[https://dash.gallery/dash-svm/](https://dash.gallery/dash-svm/)
|
Mathematical formulation of Support Vector Machines?
|
Your understandings are right.
>
deriving the margin to be $\frac{2}{|w|}$
we know that $w \cdot x +b = 1$
If we move from a point z on $w \cdot x + b = 1$ to the line $w \cdot x + b = 0$, we land at a point $\lambda$. The distance we have passed, i.e. the gap between the two lines $w \cdot x + b = 1$ and $w \cdot x + b = 0$, is the margin, which we call $\gamma$.
For calculating the margin, we know that we have moved from z, in the opposite direction of w, to the point $\lambda$. Hence this point would be $\lambda = z - \gamma \cdot \frac{w}{|w|}$ (we have moved in the opposite direction of w; we only need the direction, so we normalize w to the unit vector $\frac{w}{|w|}$).
Since this point $\lambda$ lies on the decision boundary, we know that it should satisfy the line $w \cdot x + b = 0$.
Hence we substitute it in place of x in this line:
$$w \cdot x + b = 0$$
$$w \cdot (z - \gamma \cdot \frac{w}{|w|}) + b = 0$$
$$w \cdot z + b - w \cdot \gamma \cdot \frac{w}{|w|}) = 0$$
$$w \cdot z + b = w \cdot \gamma \cdot \frac{w}{|w|}$$
we know that $w \cdot z +b = 1$ (z is the point on $w \cdot x +b = 1)$
$$1 = w \cdot \gamma \cdot \frac{w}{|w|}$$
$$\gamma = \frac{|w|}{w \cdot w}$$
we also know that $w \cdot w = |w|^2$, hence:
$$\gamma= \frac{1}{|w|}$$
Why is there a 2 in your formula instead of 1? Because I have calculated the margin between the middle line and the upper one, not the whole margin.
>
How can $y_i(w^Tx+b)\ge1\;\;\forall\;x_i$?
We want to classify the points in the +1 part as +1 and the points in the -1 part as -1. Since $(w^Tx_i+b)$ is the predicted value and $y_i$ is the actual value for each point, if a point is classified correctly then the predicted and actual values have the same sign, so their product $y_i(w^Tx_i+b)$ should be positive (the condition >= 0 is substituted by >= 1 because it is a stronger condition).
The transpose is in order to be able to calculate the dot product. I just wanted to show the logic of dot product hence, didn't write transpose
---
For calculating the total distance between lines $w \cdot x + b = -1$ and $w \cdot x + b = 1$:
Either you can multiply the calculated margin by 2, or, if you want to find it directly, you can consider a point $\alpha$ on the line $w \cdot x + b = -1$. Then we know that the distance between these two lines is twice the value of $\gamma$; hence, if we want to move from the point z to $\alpha$, the point reached (after passing the total margin) would be:
$$z - 2 \cdot \gamma \cdot \frac{w}{|w|}$$ then we can calculate the margin from here.
derived from ML course of UCSD by Prof. Sanjoy Dasgupta
|
114807
|
1
|
114866
| null |
0
|
53
|
I have a dataset with 120 features and 5000 instances. The dataset is a combination of categorical and numerical values. It is a tabular dataset. My problem is a binary classification problem. I trained my dataset with all the classic classification algorithms like Naive Bayes, Bayesian networks, SVM, MLP, Random Forest, Logistic Regression, etc. I would like to know whether there are any algorithms available in the machine learning field which are newer than the classic ones and can be applied to a tabular dataset.
I have heard about convolutional neural networks, deep neural networks, etc., but I believe they are used on image data, not tabular data.
|
Machine learning algorithms for tabular dataset
|
CC BY-SA 4.0
| null |
2022-09-30T05:27:53.703
|
2022-10-03T09:30:52.053
| null | null |
63745
|
[
"neural-network",
"classification",
"convolutional-neural-network",
"binary-classification"
] |
You can try Generalized Additive Models (GAM). It models the response variable, $y$, as a sum of functions of individual features $f_i(x_i)$: $y = \sum\limits_if_i(x_i)$. You don't need to provide the $f_i$; the algorithm learns them from the data. By analyzing the $f_i$, you can see which features contribute significantly, and in which way. You can even fit the $f_i$ to simple analytic functions and get an analytic dependence of $y$ on $x_i$.
Links for GAM: [GAM in wikipedia](https://en.wikipedia.org/wiki/Generalized_additive_model), [GAM in R](https://noamross.github.io/gams-in-r-course/), [GAM in Python](https://codeburst.io/pygam-getting-started-with-generalized-additive-models-in-python-457df5b4705f),
[ClassificationGAM in Matlab](https://www.mathworks.com/help/stats/classificationgam.html), as well as [here](https://fromthebottomoftheheap.net/tag/gam/), [here](https://environmentalcomputing.net/statistics/gams/), [here](https://multithreaded.stitchfix.com/blog/2015/07/30/gam/), and [here](https://datascienceplus.com/generalized-additive-models/).
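A minimal sketch with the pyGAM package linked above (the synthetic data is a placeholder for your 120-feature table; LogisticGAM is the classification counterpart of LinearGAM):
```
import numpy as np
from pygam import LogisticGAM, s

# placeholder data: replace with your (5000, 120) feature matrix and binary target
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + np.sin(X[:, 1]) + rng.normal(scale=0.3, size=500) > 0).astype(int)

# one smooth term per feature; the shape functions f_i are learned from the data
gam = LogisticGAM(s(0) + s(1) + s(2)).fit(X, y)
print(gam.accuracy(X, y))
gam.summary()   # per-term significance, useful for seeing which features matter
```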
A related method is Alternating Conditional Expectations (ACE) from this paper: [link](https://www.tandfonline.com/doi/abs/10.1080/01621459.1985.10478157). I wrote a blog post about it [here](https://vladgladkikh.wordpress.com/2019/01/21/learning-parametric-relationships-from-data-using-ace/).
These methods are not new, but I have a feeling that you didn't try them.
I also recommend browsing through methods in weka. It has some interesting, non-mainstream algorithms, such as [classifier for learning functional trees](https://weka.sourceforge.io/packageMetaData/functionalTrees/index.html), [HotSpot](https://weka.sourceforge.io/packageMetaData/hotSpot/index.html), [alternating model trees](https://weka.sourceforge.io/packageMetaData/alternatingModelTrees/index.html), [alternating decision trees](https://weka.sourceforge.io/packageMetaData/alternatingDecisionTrees/index.html), and many other.
|
Machine Learning applied to database design
|
This is such an interesting question. I suppose that it is possible but you would have to answer some more questions before you can actually get help with modeling something.
- Are you looking for it to learn SQL or NoSQL?
- You'd have to make a distinction between something that can learn relational database design versus something that learns how to be a DBA and work in a particular language. For example, relational databases are based on theory (and relatively straightforward) but how you implement certain things in Oracle or SQL Server (as examples) will vary greatly. Or maybe you're looking for a particular type of design like data warehousing (star patterns, etc). Whichever approach you choose would have a profound effect on the type of model you are going to build.
- There are some pitfalls that you would have to account for. A relationship based on text columns is acceptable design, but a relationship based on integer hashes of those same text columns is much better. How a model would account for something like this is unknown to me.
- Relating to the item above, you would have to come up with some metric for the success of your model. Is it the levels of relational design that it can reach? Is it some hardware performance benchmark? Is it some level of cognition that your model can reach for extremely complex designs?
I think that once you answer these types of questions you will be in a much better position to start model development.
|
114819
|
1
|
114824
| null |
0
|
34
|
I have 3 cases:
- I have a classification model that will be used to classify cats and dogs. On my train data dog pictures has a watermark on them, but cat pictures don't. The problem is: Whenever I have a watermark on a cat picture, the model will predict the cat picture as a dog picture
- I have another classification model that classifies questions and normal sentences. But whenever I have the "how" word in my normal sentence, the model will classify it as "question"
- I have a prediction model. I have 5 columns but column number 3 is very important. I mean the importance of that column is very high. But my model cannot understand it.
All of those cases share one common problem: the importance of some "thing" or feature is being misunderstood by the models. How can these kinds of problems be solved?
|
Increasing/Decreasing importance of feature/thing in ML/DL
|
CC BY-SA 4.0
| null |
2022-09-30T16:59:05.827
|
2022-10-03T16:23:30.507
|
2022-09-30T20:51:11.080
|
133184
|
133184
|
[
"machine-learning",
"deep-learning",
"nlp",
"feature-selection",
"image-classification"
] |
I would not say these models "misunderstand" anything. They simply learn from the data provided based on their inductive biases. I hypothesize that all three cases might be caused by the chosen (train) datasets:
- In case 1 the train data is not representative of the test or deployment data since only the train data has watermarks on an image if and only if it shows a dog. If that is not the case for your test or deployment data then you need to adjust your train data accordingly to remove this artifact.
- In case 2 I suggest checking the distribution of the word "how" in questions and non-questions in your train data (a quick way to run this check is sketched below). It might be that "how" almost exclusively occurs in questions, which would, again, be a problem with the dataset not stemming from the same distribution as the data you run inference on. If that is not the case, I'd check whether your model can differentiate between the word "how" appearing in a question vs. a non-question. If it cannot, a different model type might be more suitable.
- In case 3 it is unclear to me how you derive the importance of "column 3". It might, again, be a problem with the train dataset which, in this case, might not present that feature as very important. Alternatively, it could be that the chosen model is not able to learn the association between that feature and the target (simple example: there is a non-linear association but the model is linear).
In summary, case 1 can be handled by feeding a different train dataset. Case 2 and 3 might require different train datasets but might alternatively require different models if the problem is not with the datasets.
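For case 2, a quick way to run the suggested check with pandas (the frame and labels here are hypothetical stand-ins for your training data):
```
import pandas as pd

df = pd.DataFrame({
    "text": ["How are you?", "Tell me how it works.", "Is it raining?", "It is raining."],
    "label": ["question", "statement", "question", "statement"],
})

contains_how = df["text"].str.contains(r"\bhow\b", case=False)
# a strong skew of 'how' towards the question class would explain why the model
# treats the word as a question marker
print(df[contains_how]["label"].value_counts(normalize=True))
```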
|
Feature Importance
|
Lets first look at how the algorithm for permutation importance works. As per the documentation:
>
To avoid re-training the estimator we can remove a feature only from
the test part of the dataset, and compute score without using this
feature. It doesn’t work as-is, because estimators expect feature to
be present. So instead of removing a feature we can replace it with
random noise - feature column is still there, but it no longer
contains useful information. This method works if noise is drawn from
the same distribution as original feature values (as otherwise
estimator may fail). The simplest way to get such noise is to shuffle
values for a feature, i.e. use other examples’ feature values - this
is how permutation importance is computed.
Now, the answer to your question is that although feature 3 provides important information (the second-best CV score in scenario 2), that information can also be captured by the remaining 9 features combined (so it is useless in scenario 1).
While using multiple features, a feature is important if the model gains any new insights from it which the rest of the features cannot provide.
You can interpret this from the "Algorithm" part of [ELI5 Permutation Importance](https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html).
|
114825
|
1
|
114903
| null |
0
|
64
|
I need to apply some filtering on a data frame using pandas.
Basically my data frame has the following columns:
- ID - The row id of the transaction
- Timestamp - object was transformed to datetime format
- Amount - float
The data frame consists of more than 100k transactions. I want to filter out all of the transactions that have the same amount and occur within a minute of each other. In a single minute I can have 10 transactions, and all 10 will be filtered out (these transactions will be moved to a new data frame called Duplicates, for example).
```
ID TMSP amount
0 2019-01-01 00:01:11 89
1 2019-01-01 00:01:17 89
2 2019-01-01 00:02:49 238
3 2019-01-01 00:03:13 238
7 2019-01-01 00:08:46 117
```
As an example in the above records, we will be filtering out the first four records.
Logically we would have to create a loop, go through the records (i+1 and i), compare the amount and time difference, and if they match the conditions, move the i+1 row to the new data frame. Are there any other methods in pandas that could do some sort of grouping based on several conditions?
|
Filter out transactions occurring within a timeframe with the same amount
|
CC BY-SA 4.0
| null |
2022-09-30T20:57:15.903
|
2022-10-04T14:54:41.570
|
2022-09-30T22:07:10.303
|
75157
|
141028
|
[
"python",
"pandas"
] |
Welcome to Data Science Stack Exchange. It would be good to have more data to explore the solution, but I've come up with one that might work as intended.
```
import pandas as pd
data = {
"id": [0, 1, 2, 3, 7],
"time": [pd.to_datetime("2019-01-01 00:01:11"),
pd.to_datetime("2019-01-01 00:01:17"),
pd.to_datetime("2019-01-01 00:02:49"),
pd.to_datetime("2019-01-01 00:03:13"),
pd.to_datetime("2019-01-01 00:08:46")
],
"amount": [89, 89, 238, 238, 117]
}
# build the DataFrame from the example data (this step was missing)
df = pd.DataFrame(data)
df_dup = (
df
.groupby("amount")
.agg({"time": ["max", "min"], "id": list})
.assign(time_diff=lambda df: df["time"]["max"] - df["time"]["min"],
time_diff_in_seconds=lambda df: df["time_diff"].apply(lambda x: x.seconds),
n_ids=lambda df: df["id"]["list"].apply(lambda x: len(x))
)
.loc[lambda df: (df["time_diff_in_seconds"] <= 60) & (df["n_ids"] > 1)]
)
ids_to_filter_out = df_dup["id"]["list"].explode().values.tolist()
```
Basically, I grouped data by amount, calculated min and max times and created a column called `time_diff_in_seconds` to see if these duplicates are in a range of less than one minute. I also calculated the number of ids to get only duplicated values.
The code could be improved for readability, however I would like to check with you if there are duplicated samples with more than one minute of difference between min and max and if they should be excluded as well, so a case such as:
```
ID TMSP amount
0 2019-01-01 00:01:11 89
1 2019-01-01 00:01:17 89
2 2019-01-01 00:03:17 89
3 2019-01-01 00:02:49 238
4 2019-01-01 00:03:13 238
7 2019-01-01 00:08:46 117
```
Are there cases such as that? In this scenario, should we remove only ids 1 and 2?
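In any case, once `ids_to_filter_out` is computed, moving the matched rows to the separate "Duplicates" frame asked about in the question could look like this (assuming `df` is the frame built from `data` above):
```
duplicates = df[df["id"].isin(ids_to_filter_out)].copy()   # rows moved out
remaining = df[~df["id"].isin(ids_to_filter_out)].copy()   # rows kept
```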
|
looking for approaches to detecting outliers in individuals unequal sequential time series
|
Outlier detection doesn't sound like the most promising approach to me, as you have a model for the data. Some ideas you could try: use a hypothesis test to check the hypothesis that the stress values fit iid Gaussian with pre-defined standard deviation and unknown mean; use linear regression to fit a line that predicts stress as a function of time, and see if the slope is greater than zero by a statistically significant amount; etc.
|
114833
|
1
|
114839
| null |
2
|
771
|
I was doing some modeling on the House Pricing dataset. My goal is to get the MSE result and to make a prediction from input variables.
I have done the modeling: I scale the data using MinMaxScaler(), and the model is trained with LinearRegression(). After this I got the score, MSE, MAE, and RMSE results.
But when I want to compare a prediction with the actual result, the prediction comes out scaled. How do I get the prediction back as an actual price?
Dataset:
[https://www.kaggle.com/code/bsivavenu/house-price-calculation-methods-for-beginners/data](https://www.kaggle.com/code/bsivavenu/house-price-calculation-methods-for-beginners/data)
This is my script:
```
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, mean_absolute_error
from sklearn.model_selection import train_test_split
train = pd.read_csv('train.csv')
column = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
train = train[column]
# Convert Feature/Column with Scaler
scaler = MinMaxScaler()
train[column] = scaler.fit_transform(train[column])
X = train.drop('SalePrice', axis=1)
y = train['SalePrice']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15)
# Calling LinearRegression
model = LinearRegression()
# Fit linearregression into training data
model = model.fit(X_train, y_train)
y_pred = model.predict(X_test)
# Calculate MSE (Lower better)
mse = mean_squared_error(y_test, y_pred)
print("MSE of testing set:", mse)
# Calculate MAE
mae = mean_absolute_error(y_test, y_pred)
print("MAE of testing set:", mae)
# Calculate RMSE (Lower better)
rmse = np.sqrt(mse)
print("RMSE of testing set:", rmse)
# Predict the Price House by input:
overal_qual = 6
grlivarea = 1217
garage_cars = 1
totalbsmtsf = 626
fullbath = 1
year_built = 1980
predicted_price = model.predict([[overal_qual, grlivarea, garage_cars, totalbsmtsf, fullbath, year_built]])
print("Predicted price:", predicted_price)
```
The result:
```
MSE of testing set: 0.0022340806066149734
MAE of testing set: 0.0334447655149599
RMSE of testing set: 0.04726606189027147
Predicted price: [811.51843959]
```
Whereas the price should be, for example, 208500, 181500, or 121600, i.e. values in the thousands of dollars.
What step did I miss here?
|
Predict actual result after model trained with MinMaxScaler LinearRegression
|
CC BY-SA 4.0
| null |
2022-10-01T08:09:19.013
|
2022-10-03T02:51:36.183
|
2022-10-03T02:51:36.183
|
43000
|
141041
|
[
"machine-learning",
"python",
"linear-regression",
"feature-scaling",
"mse"
] |
- First, you can't use anything from the test set before training. This means that the scaling should be fit using only the training set, otherwise there's a risk of data leakage.
- Then remember that scaling your features means that the model learns to predict with scaled features, therefore the test set should be passed after it has been scaled as well (using the same scaling as the training set, of course).
- Finally you could obtain the real price value by "unscaling" with inverse_transform. But instead I decided not to scale the target variable in the code below because it's not needed (except if you really want to obtain evaluation scores scaled). It's also simpler ;)
```
full = pd.read_csv('train.csv')
column = ['SalePrice', 'OverallQual', 'GrLivArea', 'GarageCars', 'TotalBsmtSF', 'FullBath', 'YearBuilt']
full = full[column]
X = full.drop('SalePrice', axis=1)
y = full['SalePrice']
# always split between training and test set first
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=15)
# Then fit the scaling on the training set
# Convert Feature/Column with Scaler
scaler = MinMaxScaler()
# Note: the columns have already been selected
X_train_scaled = scaler.fit_transform(X_train)
# Calling LinearRegression
model = LinearRegression()
# Fit linearregression into training data
model = model.fit(X_train_scaled, y_train)
# Now we need to scale the test set features
X_test_scaled = scaler.transform(X_test)
y_pred = model.predict(X_test_scaled)
# y has not been scaled so nothing else to do
# Calculate MSE (Lower better)
mse = mean_squared_error(y_test, y_pred)
print("MSE of testing set:", mse)
# Calculate MAE
mae = mean_absolute_error(y_test, y_pred)
print("MAE of testing set:", mae)
# Calculate RMSE (Lower better)
rmse = np.sqrt(mse)
print("RMSE of testing set:", rmse)
# ... evaluation etc.
```
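To answer the original question of getting a prediction in actual dollars for one new input, the new sample only needs to go through the same fitted scaler; this sketch reuses the `scaler` and `model` objects from the code above and the feature values from the question:
```
import numpy as np

# new input in original (unscaled) units, same column order as X_train
new_sample = np.array([[6, 1217, 1, 626, 1, 1980]])
new_sample_scaled = scaler.transform(new_sample)

# y was never scaled, so the prediction is already an actual price
print("Predicted price:", model.predict(new_sample_scaled))
```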
|
MinMaxScaler when LSTM predictions fall outside of training range?
|
Inverse-transforming with MinMaxScaler should be capable of producing something outside of the training data's range. It seems that, in your use case, using a final activation that lands in $[0,1]$ might not be appropriate. Even if you transform the training data to land in, say, $[0,0.7]$, applying a sigmoid or some-such on the final layer seems to lack motivation.
As to the question of whether it can be done: yes, just with a little roundabout thinking. You can't specify what you want the scaler to think your data's max and min are, but you can specify the output range you want (parameter `feature_range`), which amounts to the same thing.
|
114858
|
1
|
114870
| null |
1
|
26
|
I have a dataset with bank transfer reasons. They vary a lot because humans wrote them.
From the reasons that are linked to invoice payments I need to extract several things:
- invoice number(s)
- IBAN
- counterparty
Before I use any NN algorithm I need to annotate the data.
So, for example, I have these rows:
- "Bank transfer for INV. 00234, 00435/2022.01.13 [BIC] [IBAN] Company Ltd"
- "Payment of invoice 00034-1120,34 on 02.17 [BIC] [IBAN] Company 2 inc."
In case 1, I have:
- invoice numbers: 00234, 00435
- IBAN - [IBAN]
- counterparty - Company Ltd
In case 2, I have:
- invoice number: 00034
- IBAN - [IBAN]
- counterparty - Company 2 inc
I have also annotated invoice prefixes such as inv, INV, invoice, etc.
My question is, should I add additional annotations such as "date" (2022.01.13, 02.17) or "sum paid" (1120,34)? Could they be helpful for a transformer, for example, to find out what an invoice is?
|
Should I annotate additional information besides the categories I already need in a text?
|
CC BY-SA 4.0
| null |
2022-10-03T05:54:27.240
|
2022-10-03T12:54:44.557
| null | null |
85604
|
[
"neural-network",
"lstm",
"rnn",
"transformer",
"annotation"
] |
It might depend on the algorithm you choose and on how various your data is.
Solution 1:
If every potential case is precisely identified, it could be better to classify every field precisely.
Solution 2:
However, if there are a lot of potential cases, including unexpected ones (ex: notes or chaotic order), a good solution could be to define an annotation "other" to group any other field that doesn't match the others.
Solution 3:
A mix of solutions to reduce errors as much as possible in every field:
- Use filters (e.g. regular expressions, sketched below) to recognize fields like IBAN or invoice numbers
- Use NN trained on company names to recognize companies
I'm afraid that 100% NN would lose efficiency in recognizing very different fields that include numeric and text data.
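As a sketch of the "filters" idea in Solution 3, simple regular expressions can already catch IBAN-like and invoice-number-like tokens (the patterns and the example string below are rough assumptions and would need tuning to your data):
```
import re

# hypothetical reason string with a made-up (format-valid) example IBAN
reason = "Bank transfer for INV. 00234, 00435/2022.01.13 DE89370400440532013000 Company Ltd"

iban_pattern = re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b")
invoice_pattern = re.compile(r"\b\d{4,6}\b")

print(iban_pattern.findall(reason))     # the IBAN-like token
print(invoice_pattern.findall(reason))  # candidate invoice numbers (dates slip in, so filter further)
```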
|
How to include categorical fields to enhance a text classification
|
Scikit-learn has [compose.ColumnTransformer](https://scikit-learn.org/stable/modules/generated/sklearn.compose.ColumnTransformer.html) which
>
allows different columns or column subsets of the input to be
transformed separately and the features generated by each transformer
will be concatenated to form a single feature space. This is useful
for heterogeneous or columnar data, to combine several feature
extraction mechanisms or transformations into a single transformer.
A demo of mixing numeric and categorical types is [here](https://scikit-learn.org/stable/auto_examples/compose/plot_column_transformer_mixed_types.html). In your example, `CountVectorizer` is numeric and `Label` is categorical.
|
114873
|
1
|
114878
| null |
0
|
33
|
I am new to deep learning and my understanding of how optimizers work might be slightly off. Also, sorry for the third-grader quality of the images.
For example, if we have a simple task, our loss-versus-weight function might look like this:
[](https://i.stack.imgur.com/dr5Dy.jpg)
As far as I understand, optimizers look for improvements and try to fall into the hole they find.
But what if we have lots of local minima? How do I know if, for example, the Adam optimizer has found the global minimum of the loss, not just some local minimum?
[](https://i.stack.imgur.com/fMgYV.jpg)
And the third case I can think of: what if we have a flat plateau of the loss function, except for a tiny range of weights? Would it be found using Adam? How do I know if it even exists?
[](https://i.stack.imgur.com/8FjVS.jpg)
Are there any tools or methods that I can use to analyse this function?
|
How do I know that my weight optimizer has found the best weights?
|
CC BY-SA 4.0
| null |
2022-10-03T13:34:56.967
|
2022-10-03T15:56:23.977
|
2022-10-03T13:46:41.373
|
141104
|
141104
|
[
"neural-network",
"loss-function",
"optimization"
] |
No optimizer can guarantee that it has found the global minimum. That's why we randomly initialize the weights to start at different arbitrary points and then descend towards a minimum, hoping we might reach the global one. Sometimes our step size is large enough to overshoot a valley containing a local minimum and jump across it; it depends on the optimizer and the step size. But in most practical applications, the minima found by these optimizers (most likely local minima) work well, even though they are not necessarily the global minimum.
|
Optimizers, loss functions and weights: when do they matter?
|
Yes. The optimizer and the loss are not part of serving inference.
Once you finish training, tensorflow will save the entire graph (i.e. architecture + weights).
Then, you just need to load the graph (with its weights) and provide the input for serving i.e. the feature vectors.
Once the graph is loaded it is just a function f(x) where x is the feature vector and f is the function of the graph. There is no use for the loss at this stage as the optimization process is over.
There are several ways you can provide tensorflow graph with features. One common option is with GRPC, where you feed the model with features that are organized as google protobuf structure.
You can't change loss or optimizer for the inference part, as they are only relevant to the training and optimization.
|
114889
|
1
|
114890
| null |
0
|
227
|
I always use the `Linearregression()` class in the sklearn library for creating a linear regression model. According to my understanding, we need feature scaling in linear regression when we use stochastic gradient descent as the solver algorithm, as feature scaling will help in finding the solution in fewer iterations; so with `sklearn.linear_model.SGDRegressor()` we need to scale the input. However, we don't need to scale the input with `Linearregression()` as it uses the closed-form solution (based on minimizing the sum of squared residuals). So my first question is: is my understanding correct? My second question is: I need to understand in detail why exactly feature scaling will not help if we use `Linearregression()`.
|
Feature scaling in Linear Regression
|
CC BY-SA 4.0
| null |
2022-10-04T07:54:05.590
|
2022-10-06T09:06:26.720
| null | null |
141085
|
[
"scikit-learn",
"linear-regression",
"feature-scaling"
] |
@AAA,
Yes, your understanding is correct.
Answer to your second question:
- LinearRegression() uses the Normal Equation, i.e. a closed-form solution, to get the best parameters. Hence, there are no iterative loops to find the best solution, and feature scaling is not needed. Whereas for algorithms that use gradient descent, scaling is recommended.
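A quick sanity check of this point on synthetic data (the claim being illustrated is that the closed-form fit gives the same predictions with or without scaling):
```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) * [1, 100, 10000]   # features on wildly different scales
y = X @ np.array([3.0, 0.05, 0.0002]) + rng.normal(size=200)

pred_raw = LinearRegression().fit(X, y).predict(X)
X_scaled = StandardScaler().fit_transform(X)
pred_scaled = LinearRegression().fit(X_scaled, y).predict(X_scaled)

# the closed-form solution is scale-equivariant: predictions agree up to float error
print(np.allclose(pred_raw, pred_scaled))   # True
```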
|
Consequence of Feature Scaling
|
Within each class, you'll have distributions of values for the features. That in itself is not a reason for concern.
From a slightly theoretical point of view, you can ask yourself why you should scale your features and why you should scale them in exactly the chosen way.
One reason may be that your particular training algorithm is known to converge faster (better) with values around 0 - 1 than with features which cover other orders of magnitude. In that case, you're probably fine. My guess is that your SVM is fine: you want to avoid too large numbers because of the inner product, but a max of 1.2 vs. a max of 1.0 won't make much of a difference.
(OTOH, if you e.g. knew your algorithm to not accept negative values you'd obviously be in trouble. )
The practical question is whether your model performs well for cases that are slightly out of the range covered by training. This I believe can best and possibly only be answered by testing with such cases / inspecting test results for performance drop for cases outside the training domain. It is a valid concern and looking into this would be part of the validation of your model.
Observing differences of the size you describe is IMHO a reason to have a pretty close look at model stability.
|
114897
|
1
|
114914
| null |
1
|
30
|
So I am trying to implement an Emotion Classifier which should detect several emotions from a text. There are several datasets for this (ISEAR, GoEmotions, etc.). However, a lot of them come from different domains, e.g. from chats, blogs, news articles, etc.
My Emotion Classifier should not be limited to a domain, so I basically combined the datasets (where I only considered the emotions: anger, disgust, neutral, happy, fear) and trained my model with the result. My goal is to get an Emotion Classifier which generalizes well, maybe also on unknown use cases, so everyone can use it. It is worth highlighting that I got an accuracy of 63-67% for each dataset I used here.
Now I want to know: is this a reasonable approach? Which challenges and disadvantages are possible? Is there a paper that specifically discusses this kind of topic? Or do you have an idea of how I could solve this differently?
|
Combine datasets of different domains to ehance generalizibility
|
CC BY-SA 4.0
| null |
2022-10-04T12:59:50.257
|
2022-10-04T18:43:03.440
| null | null |
141145
|
[
"machine-learning",
"deep-learning",
"nlp",
"dataset"
] |
This sounds reasonable indeed, but I would suggest verifying it experimentally:
Since you have access to multiple heterogeneous datasets, I think a good way to evaluate the ability of the model to generalize would be to train on all the datasets but one, and then evaluate on the remaining dataset. Then preferably repeat with every dataset as test set in order to account for chance (similarly to cross-validation). To test whether the hypothesis works, you should also train baseline models using only one dataset and compare their performance on the same test sets.
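A rough sketch of that leave-one-dataset-out loop (the toy datasets and the simple TF-IDF + logistic regression stand-in below are placeholders for your real corpora and your fine-tuned model):
```
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# toy stand-ins for the emotion datasets: name -> (texts, labels)
datasets = {
    "isear":      (["i am so angry", "what a lovely day"], ["anger", "happy"]),
    "goemotions": (["this is disgusting", "i feel fine"],  ["disgust", "neutral"]),
    "chat":       (["i am scared", "great news"],          ["fear", "happy"]),
}

results = {}
for held_out in datasets:
    train_texts, train_labels = [], []
    for name, (texts, labels) in datasets.items():
        if name != held_out:
            train_texts += texts
            train_labels += labels
    # stand-in model; in practice this would be your transformer-based classifier
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_labels)
    test_texts, test_labels = datasets[held_out]
    results[held_out] = accuracy_score(test_labels, model.predict(test_texts))

print(results)   # generalization score per held-out domain
```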
|
Combining Datasets with Different Features
|
You can use [R](http://www.r-project.org/) to do that.
[The smartbind function](http://www.inside-r.org/packages/cran/gtools/docs/smartbind) is the perfect way to combine datasets in the way you are asking for:
```
library(gtools)
d1<-as.data.frame(rbind(c(1,7,3),c(4,8,4)))
names(d1)<-c("featureA","featureB","featureC")
d2<-as.data.frame(rbind(c(3,4,5,6),c(9,8,4,6)))
names(d2)<-c("featureA","featureC","featureD","featureE")
d3<-smartbind(d1,d2)
```
|
114899
|
1
|
114906
| null |
0
|
43
|
I have a raw unlabeled dataset, and I want to design a model to perform a regression. In my dataset, it does not make sense to give each observation a value, but it does make sense to sort them. Can I implement an algorithm to create values for each observation by sorting them?
I thought about this:
- Select N random observations and sort them
- Give each observation a new score, equal to its position
- Calculate the score of an observation as the average position across all times the observation was picked
- return to step 1
Does it make sense? Is there any machine learning branch that studies this kind of scenario?
|
Transform dataset to regression problem by sorting?
|
CC BY-SA 4.0
| null |
2022-10-04T13:47:18.993
|
2022-10-04T15:57:04.623
|
2022-10-04T14:02:54.410
|
141138
|
141138
|
[
"machine-learning",
"regression",
"dataset",
"active-learning"
] |
There is a field for this called ordinal regression. Does each unique observation have its own rank, or can observations share a rank? I.e., if you have 10 elements, are they labeled 1, 2, 3, ..., 10, or could it be 1,1,1,2,2,2,3,3,3,4?
What are these values supposed to represent? Why are you doing this analysis?
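If it helps, here is a minimal numpy sketch of the procedure proposed in the question (sample N observations, sort them, average the positions). The hidden `quality` array stands in for whatever criterion you sort by and is purely illustrative:
```
import numpy as np

rng = np.random.default_rng(0)
quality = rng.normal(size=100)           # hidden property you can only sort by
positions = [[] for _ in range(100)]

for _ in range(2000):
    idx = rng.choice(100, size=10, replace=False)   # step 1: pick N observations
    ranked = idx[np.argsort(quality[idx])]          # sort the sampled observations
    for pos, i in enumerate(ranked):                # step 2: position = new score
        positions[i].append(pos)

# step 3: the average position across all draws becomes the regression target
scores = np.array([np.mean(p) if p else np.nan for p in positions])
print(np.corrcoef(scores, quality)[0, 1])           # should be close to 1
```
The resulting scores are ordinal rather than truly continuous, which is why the ordinal regression literature mentioned above is the natural place to look.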
|
How to order the data with respect to data type
|
Let your data frame be `df`. First get the numeric columns:
```
num_col = df.select_dtypes('number').columns
```
Then get the remaining columns.
```
non_num_col = set(df.columns) - set(df.select_dtypes('number').columns)
```
Merge as required.
```
df = pd.concat([df[num_col], df[list(non_num_col)]], axis=1)
```
The columns are now in the desired sequence.
|
114911
|
1
|
114919
| null |
3
|
61
|
In the documentation of `Logisticregression()` offered by sklearn library, it states the following note:
>
The underlying C implementation uses a random number generator to
select features when fitting the model. It is thus not uncommon, to
have slightly different results for the same input data. If that
happens, try with a smaller tol parameter.
I have two questions regarding this note :
- What is the meaning of
>
The underlying C implementation uses a random number generator to
select features when fitting the model
- What is tol parameter?
|
Logistic Regression using Logisticregression() class
|
CC BY-SA 4.0
| null |
2022-10-04T17:09:11.517
|
2022-10-05T11:27:28.517
| null | null |
87037
|
[
"scikit-learn",
"logistic-regression"
] |
The Note you reference was added back when the only solver available in `LogisticRegression` was `LIBLINEAR`, and that solver uses coordinate descent: coordinates are examined and adjusted individually, iteratively. The order in which that happens apparently is based on a random number generator.
See also
[https://stats.stackexchange.com/q/327225/232706](https://stats.stackexchange.com/q/327225/232706)
[https://stackoverflow.com/q/38640109/10495893](https://stackoverflow.com/q/38640109/10495893)
Probably that note doesn't apply to all the newer solvers, and ought to be clarified.
As for `tol`, it is the tolerance criterion for convergence: when the updates to be made are smaller than `tol`, we say that's good enough and stop iterating. Exactly which quantity is measured as "the updates" may also depend on the solver. See e.g.
[https://stats.stackexchange.com/a/255380/232706](https://stats.stackexchange.com/a/255380/232706)
[https://github.com/scikit-learn/scikit-learn/issues/22243](https://github.com/scikit-learn/scikit-learn/issues/22243)
[https://github.com/scikit-learn/scikit-learn/issues/11536](https://github.com/scikit-learn/scikit-learn/issues/11536)
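As an illustrative sketch (not from the original answer), this is how you could pin down the randomness and tighten `tol` in scikit-learn; `solver`, `tol` and `random_state` are standard `LogisticRegression` arguments:
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

clf = LogisticRegression(
    solver="liblinear",  # the solver the note was originally written about
    tol=1e-6,            # tighter convergence tolerance than the default 1e-4
    random_state=0,      # fixes the random number generator
)
clf.fit(X, y)
print(clf.coef_[0][:3])
```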
|
Re: Logistic Regression
|
This kind of problem is called a data (class) imbalance issue. It is very common in the financial industry, e.g. banks and insurance companies (for fraud detection), and in health care (e.g. cancer cell detection).
To overcome such issues, we use techniques like over-sampling or under-sampling.
Over-sampling increases the number of minority-class records, for example by duplicating them, in order to balance the data.
Under-sampling decreases the number of majority-class records by removing records that are not significant, in order to balance the data.
There are different algorithms for implementing both approaches.
You can go through [Link-1](https://datascience.stackexchange.com/questions/24610/smote-and-multi-class-oversampling/24664#24664) and [Link-2](https://datascience.stackexchange.com/questions/24905/best-methods-to-solve-class-imbalance-problem-and-why/24912#24912) for an explanation and implementation of the same.
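As a hedged illustration (not part of the links above), the imbalanced-learn package implements both ideas; a minimal sketch:
```
from collections import Counter

from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, weights=[0.95, 0.05], random_state=0)
print("original:", Counter(y))

X_over, y_over = SMOTE(random_state=0).fit_resample(X, y)                 # over-sampling
print("after SMOTE:", Counter(y_over))

X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)  # under-sampling
print("after under-sampling:", Counter(y_under))
```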
Let me know if you need anything else.
|
114930
|
1
|
115001
| null |
1
|
297
|
I am trying to come up with a solution to this as exam preparation, but I can't come up with anything and don't know how to tackle it. If I use information gain, the depth increases beyond 2.
What would be a preferred strategy for tackling this?
[](https://i.stack.imgur.com/rBgcB.png)
|
Draw a decision tree with depth 2 that is consistent with the data
|
CC BY-SA 4.0
| null |
2022-10-05T11:43:51.770
|
2022-10-10T14:45:10.853
|
2022-10-09T00:53:58.680
|
71297
|
141193
|
[
"machine-learning",
"decision-trees"
] |
I won't give the whole answer, but I think it might help to start out thinking about it like this.
Let's call the levels L0, L1, L2. When we split, let's call the 0 split left and the 1 split right. Let's number the rows [1,...,8].
Consider:
L0
All samples [1,...,8]
Split by a_1 -->
L1
Left (a_1=0): [1,2,3,4]
Right (a_1=1): [5,6,7,8]
Split L1 Left by a_x (where a_x is a variable that is not a_1) -->
Split L1 Right by a_y (where a_y is a variable that is not a_1 or a_x) -->
L2
(L1 Left) Left (a_x=0): [1,4] --> [-,-]
(L1 Left) Right (a_x=1): [,] --> [,]
(L1 Right) Left (a_y=0): [,] --> [,]
(L1 Right) Right (a_y=1): [,] --> [,]
With the correct a_y and a_x there will be a perfect split.
|
What are the factors to consider when setting the depth of a decision tree?
|
Yes, but it also means you're likely to overfit to the training data, so you need to find the value that strikes a balance between accuracy and properly fitting the data. Deciding on the proper setting of the `max_depth` parameter is the task of the tuning process, via either Grid Search or Randomised Search with cross-validation.
This page from the scikit-learn documentation explains the process well: [https://scikit-learn.org/stable/modules/grid_search.html](https://scikit-learn.org/stable/modules/grid_search.html)
|
114932
|
1
|
114941
| null |
0
|
86
|
What are the advantages of using different tokenizers? For example, let's take the sentence:
"In Düsseldorf I took my hat off. But I can't put it back on."
The treebank tokenizer yields: "In Düsseldorf I took my hat off . But I ca n't put it back on . "
However, the whitespace tokenizer would yield:
"In Düsseldorf I took my hat off . But I can't put it back on . "
NLTK has four tokenizers:
- TreebankWordTokenizer
- WordPunctTokenizer
- PunctWordTokenizer
- WhitespaceTokenizer
When should you use which one? For my project I am interested in text generation, so I am leaning toward the whitespace tokenizer. Is this a good choice? Won't my model generate nonsense tokens like "n't" when I use e.g. the treebank tokenizer?
|
Advantages of different tokenizers for NLP (specifically text generation)
|
CC BY-SA 4.0
| null |
2022-10-05T12:43:02.250
|
2022-10-05T16:58:46.107
| null | null |
141192
|
[
"nlp",
"text-generation",
"tokenization"
] |
The problem is text generation. I am assuming you are aiming for something like a chatbot, where both the input and the output are natural language.
Since the input is natural language, all punctuation and special characters are important. For example, an ellipsis ("...") can also mean "to follow up" or "waiting"; a tokenizer that strips "." will remove this information.
The next step is to choose a tokenizer that preserves punctuation. A whitespace-based tokenizer will do.
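For a quick sanity check, here is a small comparison of two of the NLTK tokenizers on the example sentence from the question (an illustrative sketch; it assumes nltk is installed):
```
from nltk.tokenize import TreebankWordTokenizer, WhitespaceTokenizer

sentence = "In Düsseldorf I took my hat off. But I can't put it back on."

# Treebank splits punctuation and contractions, e.g. "can't" -> "ca", "n't"
print(TreebankWordTokenizer().tokenize(sentence))

# Whitespace splits on spaces only, so "can't" and "off." stay as single tokens
print(WhitespaceTokenizer().tokenize(sentence))
```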
|
NLP: what are the advantages of using a subword tokenizer as opposed to the standard word tokenizer?
|
Subword tokenization is the norm nowadays in NLP models because:
- It mostly avoids the out-of-vocabulary (OOV) word problem. Word vocabularies cannot handle words that are not in the training data. This is a problem for morphologically-rich languages, proper nouns, etc. Subword vocabularies allow representing these words. By having subword tokens (and ensuring the individual characters are part of the subword vocabulary), makes it possible to encode words that were not even in the training data. There's still the problem with characters not present in the training data, but that's tolerable in most of the cases.
- It gives manageable vocabulary sizes. Current neural networks need a pre-defined closed discrete token vocabulary. The vocabulary size that a neural network can handle is far smaller than the number of different words (surface forms) in most normal languages, especially morphologically-rich ones (and especially agglutinative ones).
- Mitigates data sparsity. In a word-based vocabulary, low-frequency words may appear very few times in the training data. This is especially troublesome for agglutinative languages, where a surface form may be the result of concatenating multiple affixes. Using subword tokenization allows token reusing, and increases the frequency of their appearance.
- Neural networks perform very well with them. In all sorts of tasks, they excel: neural machine translation, NER, etc, you name it, the state of the art models are subword-based: BERT, GPT-3, Electra,...
|
114946
|
1
|
114954
| null |
1
|
49
|
I am working on building a time series model, but the dataset I have only has date features for the year; the month and date are not available. What would be a suitable model to use and is it even possible? The dataset is an Excel file from different years I have merged into a single sheet with records for each year arranged in alphabetical order.
[](https://i.stack.imgur.com/k3M7Q.png)
|
Building a Time Series Model
|
CC BY-SA 4.0
| null |
2022-10-05T20:56:30.943
|
2022-10-06T07:38:19.340
| null | null |
141219
|
[
"python",
"time-series",
"predictive-modeling",
"feature-engineering"
] |
The first question about applying time-series models is whether you can detect some patterns by yourself or not.
For instance, if you find that the values rise every last week of the month, then you might expect time-series models to be useful.
But if your data is too coarse to extract anything interesting because of external and unpredictable impacts (ex: economic crisis, accident in several stations, etc.) the added value would be too limited.
In addition, you should gather enough data to teach the model patterns. Consequently, if you only have weekly data, you should have at least 3 years (~150 records) to teach the model some patterns due to seasonality and the impact of special events (ex: stock market crash).
You could add some external data like weather or other products associated with your values. This would improve the model prediction quality.
|
Predicting time series data
|
Best approach would be to perform data preparation first:
- Remove features (columns) with no variance in it (you could use: sklearn feature_selection)
- one-hot-encoding of categorical features
- insert a lag column of -t steps
If you have more than one explanatory variable, the process is called multiple linear regression. Instead of using a regression model you could also use other learners like XGBoost or LSTMs
|
114958
|
1
|
114969
| null |
1
|
122
|
For a research project, I'm planning to use an LSTM to learn from sequences of KG entities. However, I have little experience using LSTMs or RNNs in general. During planning, a few questions concerning feature engineering have come up.
Let me give you some context:
My initial data will be a collection of $n$ texts.
From these texts, I will extract $n$ sequences of entities of variable length using a DBPedia or Wikidata tagger. Consequently, I'll have $n$ sequences of KG entities that somehow correspond to their textual counterparts.
Most LSTM implementations I've seen take only one type of feature as input. However, as we're dealing with knowledge graphs, we have access to more types of information. I'm wondering what would be a good strategy to use more than just one type of feature.
## Objective
Given a sequence of seen entities, I want the model to predict the continuation of that sequence. A set of truncated sequences from the corpus will be kept apart. The beginnings will serve as prompts and the endings will be truth values for evaluation.
I'm also interested in the model's prediction probabilities when predicting following entities for one single entity given as a prompt.
## Assumptions
I assume that diverse types of features will help the model make good predictions. Specifically, I want the model to learn not only from entity sequences but also from KG 'metadata' like associated RDF classes or pre-computed embedding vectors.
## Features
### Feature 1: Numerical vocabulary features
The simplest case I can think of is to create an ordered set from all extracted entities.
For example, if the extracted entities from all my documents were `[U2, rock, post-punk, yen, Bono, revolutionary, guitar]` (in reality that'll probably be a few thousands more), I'd create this ordered set representing my vocabulary:
```
{1: http://dbpedia.org/resource/U2, 2: http://dbpedia.org/resource/Rock_music, 3: http://dbpedia.org/resource/Post-punk, 4: http://dbpedia.org/resource/Japanese_yen, 5: http://dbpedia.org/resource/Bono, 6: http://dbpedia.org/resource/Revolutionary, 7: http://dbpedia.org/resource/Acoustic_guitar}
```
The training data for the LSTM would then be sequences of integers such as
```
training_data = [
# Datapoint 1
[[1, 2, 3, 4, 5, 6, 7]], #document 1
# Datapoint 2
[[5, 3, 3, 1, 6]], #document 2
# Datapoint 3
[[2, 4, 5, 7, 1, 6, 2, 1, 7]], #document 3
...]
```
### Feature 2: Numerical class features
I want to include additional information about RDF classes. Similar to the approach in Feature 1, I could create an ordered set containing all possible classes. However, the difference is that each entity belongs to one or more classes
If all classes extracted were
```
{1: owl:Thing, 2: dbo:MusicGenre, 3: dbo:Agent, 4: dbo:Person, 5: dbo:PersonFunction}
```
I would create a new data structure for each data point, this time containing class information. The notation represents `{entity: [classes]}`. My training data could then look something like this:
```
training_data = [
# Datapoint 1
[
[1, 2, 3, 4, 5, 6, 7], # feature 1
{1: [1,2,4], 2: [2,3,4,5], ..., 7: [3,5]} # feature 2
],
# Datapoint 2
[
[5, 3, 3, 1, 6], # feature 1
{1: [2,3,4], 2: [1,2,4,5], ..., 5: [3,5]} # feature 2
],
# Datapoint 3
[
[2, 4, 5, 7, 1, 6, 2, 1, 7], # feature 1
{1: [1,2,4], 2: [1,2,3,5], ..., 9: [2,3]} # feature 2
],
...]
```
### Feature 3: RDF2Vec embeddings
Each KG entity from a collection of entities can be mapped into a low-dimensional space using tools like RDF2Vec. I'm not sure whether to use this feature or not as its latent semantic content might interfere with my research question, but it is an option.
Embedding features, in this case, are vectors of length 200:
```
embedding_vector = tensor([5.9035e-01, 2.6974e-01, 8.6569e-01, 8.9759e-01, 9.3032e-01, 5.2442e-01, 9.6031e-01, 1.8393e-01, 6.3000e-01, 9.5930e-01, 2.5407e-01, 5.6510e-01, 8.1476e-01, 2.0864e-01, 2.7643e-01, 4.8667e-02, 9.3791e-01, 8.0929e-02, 5.0237e-01, 1.4946e-01, 5.9263e-01, 4.7912e-01, 6.8907e-01, 4.8248e-03, 4.9926e-01, 1.5715e-01, 7.0777e-01, 6.0065e-01, 2.6858e-01, 7.2022e-01, 4.4128e-01, 4.5026e-01, 1.9987e-01, 2.8191e-01, 1.2493e-01, 6.0253e-01, 6.9298e-01, 2.5828e-01, 2.8332e-01, 9.6898e-01, 4.5132e-01, 4.6473e-01, 8.0197e-01, 8.4105e-01, 8.8928e-01, 5.5742e-01, 9.5781e-01, 3.8824e-01, 4.6749e-01, 4.3156e-01, 2.8375e-03, 1.5275e-01, 6.7080e-01, 9.9894e-01, 7.2093e-01, 2.7220e-01, 8.5404e-01, 6.9299e-01, 3.9316e-01, 8.9538e-01, 8.1654e-01, 4.1633e-01, 9.6143e-01, 7.1853e-01, 9.5498e-01, 4.5507e-01, 3.6488e-01, 6.3075e-01, 8.0778e-01, 6.3019e-01, 4.4128e-01, 7.6502e-01, 3.2592e-01, 9.5351e-01, 1.1195e-02, 5.6960e-01, 9.2122e-01, 3.3145e-01, 4.7351e-01, 4.5432e-01, 3.7222e-01, 4.3379e-01, 8.1074e-01, 7.6855e-01, 4.0966e-01, 2.6685e-01, 2.4074e-01, 4.1252e-01, 1.9881e-01, 2.2821e-01, 5.9354e-01, 9.8252e-01, 2.7417e-01, 4.2776e-01, 5.3463e-01, 2.9148e-01, 5.8007e-01, 8.2275e-01, 4.8227e-01, 8.5314e-01, 3.6518e-01, 7.8376e-02, 3.6919e-01, 3.4867e-01, 8.9571e-01, 2.0085e-02, 7.9924e-01, 3.5849e-01, 8.7784e-01, 4.6861e-01, 6.2004e-01, 6.8465e-01, 4.1273e-01, 4.2819e-01, 9.4532e-01, 2.2362e-01, 8.3943e-01, 1.1692e-01, 6.9463e-01, 7.6764e-01, 2.8046e-02, 6.9382e-01, 9.2750e-01, 3.6031e-01, 6.8065e-01, 1.6976e-01, 8.2079e-01, 6.4580e-01, 8.3944e-01, 3.9363e-01, 4.4026e-01, 4.4569e-01, 8.2344e-01, 5.4172e-01, 1.6886e-04, 3.8689e-01, 5.8966e-01, 1.9510e-02, 2.5976e-01, 4.0868e-01, 3.1406e-01, 3.6334e-01, 6.1768e-01, 5.4854e-01, 4.1273e-01, 7.2670e-04, 2.4486e-01, 4.1042e-01, 9.0760e-01, 1.6224e-01, 7.4019e-02, 8.1329e-01, 7.2573e-01, 8.2816e-01, 7.3032e-01, 6.6017e-01, 6.4281e-01, 4.1839e-01, 9.2251e-01, 1.5183e-02, 4.4538e-01, 9.7205e-01, 9.5677e-01, 9.5649e-01, 1.2610e-01, 9.2521e-01, 3.2649e-01, 2.1019e-02, 2.5695e-01, 4.2663e-01, 9.2064e-01, 4.5242e-01, 7.0447e-01, 8.1233e-01, 2.7507e-01, 2.4744e-01, 1.3670e-01, 6.4032e-01, 5.8332e-01, 5.5130e-01, 2.4997e-02, 7.7206e-01, 1.5085e-01, 2.8028e-01, 8.2839e-01, 5.8292e-01, 9.9087e-01, 6.0233e-01, 4.1489e-01, 6.4902e-01, 7.5428e-01, 8.0953e-01, 3.7530e-01, 4.8196e-01, 1.8786e-01, 9.8463e-01, 6.3303e-01, 4.8519e-01, 7.6163e-01, 3.3821e-01]
```
If I included this in my training data, it would look something like this:
```
training_data = [
# Datapoint 1
[
[1, 2, 3, 4, 5, 6, 7], # feature 1
{1: [1,2,4], 2: [2,3,4,5], ..., 7: [3,5]}, # feature 2
[7 embedding vectors], # feature 3
],
# Datapoint 2
[
[5, 3, 3, 1, 6], # feature 1
{1: [2,3,4], 2: [1,2,4,5], ..., 5: [3,5]}, # feature 2
[5 embedding vectors], # feature 3
],
# Datapoint 3
[
[2, 4, 5, 7, 1, 6, 2, 1, 7], # feature 1
{1: [1,2,4], 2: [1,2,3,5], ..., 9: [2,3]}, # feature 2
[9 embedding vectors], # feature 3
],
...]
```
## Questions
My training data will consist of lists of variable length and matrices/tensors. How do I best feed this data to the model? In any case, I'm interested in predicting only entities. Training only on feature 1 could be a baseline that I compare to combinations of features, e.g. Features 1+2 or 1+3 or 1+2+3
Based on what I've read until now, I think I'm going to use padding and masking. However, I'm not sure what my features should finally look like.
I appreciate any kind of feedback.
Thanks for sharing your thoughts!
|
LSTM Feature engineering: using different Knowledge Graph data types
|
CC BY-SA 4.0
| null |
2022-10-06T09:54:10.930
|
2022-10-07T12:16:03.913
|
2022-10-07T12:16:03.913
|
141191
|
141191
|
[
"deep-learning",
"nlp",
"lstm",
"feature-engineering",
"knowledge-graph"
] |
As general points:
- Multivariate RNN: You can use multiple sequential features as input to your recurrent layers. Taking PyTorch as a reference, the input of the LSTM object is a tensor of shape $(L, H_{in})$, or $(L, N, H_{in})$ for batched input, where $L$ is the length of your sequences and $H_{in}$ is the number of input features. In this approach, mapping tokens to a vocabulary and learning a standard embedding remain part of the usual procedure (a minimal sketch follows after this list).
- You may be able to use a multi-label approach (as opposed to multi-class), if I understand your question correctly.
- Multimodal learning: If features related to embeddings can be considered static/not evolving over time, you may want to add a second auxiliary port to your network, to specifically model this data type. This second part would consist of a feed-forward network with fully connected layers. The fixed-length vector representations / embeddings at the outputs of your RNN and FFN modules could get concatenated before passed to your classification layer. In this way you allow the model to reason from a joint representation of both data modalities.
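A minimal PyTorch sketch of points 1 and 3 above: an LSTM over padded entity-id sequences plus a feed-forward branch for static features such as an RDF2Vec vector, concatenated before the prediction layer. All sizes and names are illustrative assumptions:
```
import torch
import torch.nn as nn

class EntityPredictor(nn.Module):
    def __init__(self, vocab_size=1000, emb_dim=64, hidden=128,
                 static_dim=200, static_hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(static_dim, static_hidden), nn.ReLU())
        self.out = nn.Linear(hidden + static_hidden, vocab_size)

    def forward(self, seq_ids, static_feats):
        # seq_ids: (batch, seq_len) padded entity ids
        # static_feats: (batch, static_dim), e.g. an RDF2Vec document vector
        emb = self.emb(seq_ids)                # (batch, seq_len, emb_dim)
        _, (h_n, _) = self.lstm(emb)           # h_n: (1, batch, hidden)
        joint = torch.cat([h_n[-1], self.ffn(static_feats)], dim=1)
        return self.out(joint)                 # logits over the next entity

model = EntityPredictor()
logits = model(torch.randint(1, 1000, (4, 7)), torch.rand(4, 200))
print(logits.shape)  # torch.Size([4, 1000])
```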
Hope it helps.
|
Multiple features in LSTM
|
When there are multiple features, the LSTM does not treat them in isolation: at each timestep it takes the full feature vector as input and combines all the features through its weight matrices and biases (which are shared across timesteps) to update the gates and the hidden state.
In other words, at each timestep the LSTM takes in the current values of all features, processes them jointly, and produces a single hidden state for that timestep, which is then used together with the next timestep's feature vector.
In this way, the LSTM is able to capture the relationships between different features across time. The final output of the LSTM has the dimensionality of the hidden state (i.e. the number of units), but it incorporates information from all the input features.
|
114959
|
1
|
114989
| null |
3
|
1476
|
I'm working a multi-class text classification project.
After splitting the dataset into train and test datasets, I've applied the below function on the train dataset (AKA pre processing):
```
import re

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer  # assumption: the post does not show which stemmer was used

stemmer = PorterStemmer()
STOPWORDS = set(stopwords.words('english'))
def clean_text(text):
# lowercase text
text = text.lower()
# delete bad symbols
text = re.sub(r"(@\[A-Za-z0-9]+)|([^0-9A-Za-z \t])|(\w+:\/\/\S+)|^rt|http.+?", "", text)
# delete stopwords from text
text = ' '.join(word for word in text.split() if word not in STOPWORDS)
# Stemming the words
text = ' '.join([stemmer.stem(word) for word in text.split()])
return text
```
To my surprise, I got much worse results (i.e. validation accuracy) when applying this to the train dataset rather than just doing nothing (59% vs 69%).
I've literally commented out the apply line in the below section:
```
all_data = dataset.sample(frac=1).reset_index(drop=True)
train_df, valid = train_test_split(all_data, test_size=0.2)
train_df['text'] = train_df['text'].apply(clean_text)
```
What am I missing?
How can it be that pre processing steps decreased accuracy?
A bit more info: I forgot to mention that I'm using the code below to tokenize the text:
```
X_train = train.iloc[:, :-1]
y_train = train.iloc[:, -1:]
X_test = valid.iloc[:, :-1]
y_test = valid.iloc[:, -1:]
weights = class_weight.compute_class_weight(class_weight='balanced', classes=np.unique(y_train),
y=y_train.values.reshape(-1))
le = LabelEncoder()
le.fit(weights)
class_weights_dict = dict(zip(le.transform(list(le.classes_)), weights))
tokenizer = Tokenizer(num_words=vocab_size, oov_token='<OOV>')
tokenizer.fit_on_texts(X_train['text'])
train_seq = tokenizer.texts_to_sequences(X_train['text'])
train_padded = pad_sequences(train_seq, maxlen=max_length, padding=padding_type, truncating=trunc_type)
validation_seq = tokenizer.texts_to_sequences(X_test['text'])
validation_padded = pad_sequences(validation_seq, maxlen=max_length, padding=padding_type, truncating=trunc_type)
```
Later on I'm fitting all into the model as follows:
```
model = Sequential()
model.add(Embedding(vocab_size, embedding_dim, input_length=train_padded.shape[1]))
model.add(Conv1D(48, len(GROUPS), activation='relu', padding='valid'))
model.add(GlobalMaxPooling1D())
model.add(Dropout(0.5))
model.add(Flatten())
model.add(Dropout(0.5))
model.add(Dense(len(GROUPS), activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
epochs = 100
batch_size = 32
history = model.fit(train_padded, training_labels, shuffle=True ,
epochs=epochs, batch_size=batch_size,
class_weight=class_weights_dict,
validation_data=(validation_padded, validation_labels),
callbacks=[ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.0001),
EarlyStopping(monitor='val_loss', mode='min', patience=2, verbose=1),
EarlyStopping(monitor='val_accuracy', mode='max', patience=5, verbose=1)])
```
|
Accuracy is getting worse after text pre processing
|
CC BY-SA 4.0
| null |
2022-10-06T11:50:40.343
|
2022-10-11T14:43:23.383
|
2022-10-06T19:55:59.093
|
109113
|
109113
|
[
"machine-learning",
"python",
"nlp",
"scikit-learn",
"text-classification"
] |
You have to apply the same preprocessing to the test data.
Based on your code, you apply the clean_text function only to the train data but then predict on test/validation data that was not cleaned. That means your model learns on clean data, but you ask it to predict on raw data which contains things the model has never seen (because they were removed from the train dataset), and this results in worse performance.
Edit after discussion in comments:
You can either preprocess all data at the same time before splitting and have
```
all_data = dataset.sample(frac=1).reset_index(drop=True)
all_data['text'] = all_data['text'].apply(clean_text)
train_df, valid = train_test_split(all_data, test_size=0.2)
```
or just apply the sample preprocessing to the valid dataset after you split the data
```
all_data = dataset.sample(frac=1).reset_index(drop=True)
train_df, valid = train_test_split(all_data, test_size=0.2)
train_df['text'] = train_df['text'].apply(clean_text)
valid['text'] = valid['text'].apply(clean_text)
```
It is important to apply the same preprocessing to all data that goes into the model so that all input is in the same format. This means that when you deploy your model into an application, you will also have to apply the same preprocessing to any text input coming into the model before making predictions.
|
Accuracy over 100%
|
I solved it in this way:
```
#original
#pred = output.argmax(dim=1,keepdim=True)
#my solution
_, pred = torch.max(output, dim=1)
```
I do not know why, but my solution works. If someone has an intuition, could they explain why this works? Thanks.
|
114977
|
1
|
115109
| null |
0
|
57
|
I have data of the format `[dataset_N, dataset_N+1, similarity]` where two different datasets have been compared, resulting in a similarity score. A score is generated for each possible (unique) pair of datasets (~7e6 pairs).
I am trying to determine how to cluster these datasets using just the similarity scores.
Something like K-means can't be used directly because the available information (similarity scores and which pair was compared) doesn't match its inputs.
Is there an existing method that can be used for this?
Edit: reworded for clarity
Edit 2: removed references to labels, since that was causing confusion
|
Given similarity scores of datasets, find dataset clusters?
|
CC BY-SA 4.0
| null |
2022-10-07T00:32:36.240
|
2022-10-11T07:38:16.057
|
2022-10-10T18:44:03.207
|
141263
|
141263
|
[
"clustering"
] |
Thanks to the clues from the comments and this [stack exchange post](https://datascience.stackexchange.com/questions/4974/partitioning-weighted-undirected-graph) outlining graph clustering and modularity scores I was able to find a suitable method.
I went with the [NetworkX implementation](https://networkx.org/documentation/stable/reference/algorithms/generated/networkx.algorithms.community.louvain.louvain_communities.html#networkx.algorithms.community.louvain.louvain_communities) of the Louvain Communities algorithm.
|
identify similarities in a dataset
|
In general identifying similarities is done with clustering, but in this case what you're looking for is a potential pattern in the data leading to a missing value in a specific column, right? So I would try to train a decision tree using all the columns except the target one as features, and using a binary class indicating whether the target column has a missing value or not. Visualizing the decision tree obtained after training should show the pattern if there is one.
|
114990
|
1
|
115081
| null |
0
|
97
|
I am dealing with a real-data multiclass classification problem. The task is to classify the kind of fault for some equipment.
The input features are the equipment's alarms (X); the target is the fault class (y). For each sample, in addition to the alarms, the equipment's ID as well as the timestamp at which the sample was collected are reported. Note that these two fields will not be used for prediction. The original data consists of 1000 samples.
My idea was to remove duplicates before training a predictive model. Hence, I dropped 50 samples for which the values of X, ID, Timestamp and y were identical. But then I realized that, considering only X and y, the number of duplicates to drop increases to 450. So I am pretty sure that the "first drop" (removing identical rows) is correct, but I am not sure whether or not to also apply the "second drop".
As I told you, the data are from the real world and (at least for now) ID and Timestamp are not part of the features used for prediction.
Here is a sample of my data (ID and Timestamp are not displayed):
```
X1 X2 X3 ... X100 Y
1 0 1 ... 1 2
1 1 0 ... 0 3
1 0 1 ... 1 2
... ... ... ... ... ...
0         1         1    ...   1        1
```
Should I drop the duplicates or not, in this specific case?
|
Should I drop duplicates or not?
|
CC BY-SA 4.0
| null |
2022-10-07T12:20:31.380
|
2022-10-10T16:52:12.153
|
2022-10-10T08:44:24.987
|
141247
|
141247
|
[
"machine-learning",
"data-cleaning",
"multiclass-classification"
] |
>
Should I drop duplicates or not, in this specific case?
Yes: keeping those duplicates is like oversampling your data.
The model would be biased towards the duplicated class even though the duplicates carry no new information.
In the best case it will not harm, but it will never help.
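A small pandas sketch of the two kinds of drop discussed in the question (column names are illustrative):
```
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 1], "Timestamp": ["t1", "t2", "t3"],
    "X1": [1, 1, 1], "X2": [0, 1, 0], "Y": [2, 3, 2],
})

first_drop = df.drop_duplicates()                           # identical full rows only
second_drop = df.drop_duplicates(subset=["X1", "X2", "Y"])  # identical (X, y) pairs
print(len(df), len(first_drop), len(second_drop))           # 3 3 2
```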
|
Effect of removing duplicates on Random Forest Regression
|
Your model will become less accurate.
For example, let's say you have features A and B, and you have 51 observations. For 50 of those A=10 and B=20 correspond to dependent value of 5, and you have 1 observation for which A=10 and B=20 correspond to dependent value of 100.
Without removing duplicates when making a prediction for a new observation with A=10 and B=20, Random Forest will give roughly the average of 51 values mentioned above, which is close to 6.86. If you remove duplicates you will get an average of 5 and 100 or 52.5.
And assuming your test data has the same distribution as the original data, the model will be far off on many observations. Therefore, unless you have a good reason to believe that the test data will have a different distribution, don't remove the duplicate values.
|
114994
|
1
|
115007
| null |
0
|
79
|
For a predictive model with binary target, we can assess the predictive power of each predictor by calculating their information value. What is the equivalent of IV when the target variable is continuous? Should I look at $R^2$ value of individual predictor when used as a sole predictor in a regression model?
Edited:
I have 4300 attributes to choose from. So adding all attributes would be impossible. I want to sort the attributes based on their individual predictive power and add a short list of 50 attributes to my model.
|
How to assess predictive power of each predictor in case of linear regression
|
CC BY-SA 4.0
| null |
2022-10-07T14:53:09.823
|
2022-10-08T02:05:34.200
|
2022-10-07T16:54:31.330
|
118390
|
118390
|
[
"regression"
] |
TL;DR: If you do need to do feature selection, then yes, you would use R^2. Look up forward and backward feature selection. Another good option is L1 regularization; I would probably try L1 first if you are able to run the model with all the features. I am guessing you won't want to do L1 regularization or backward selection, because you are trying to avoid running the regression with all variables? If that is the case, I would recommend using forward selection and simply stopping at however many variables you want your model to end up with. Forward selection is better than simply computing R^2 for every X against the outcome.
---
It sounds like your question is about how to interpret your model. However when I read the comments, it seems like you are asking about how to reduce the number of variables for the model.
Assuming you are asking about using fewer variables, the next question I would ask is: what is the goal of reducing the number of attributes? If it is regularization (to prevent overfitting), then the person who commented about variable selection being unstable is correct. You can do cross-validation with L1 or L2 regularization, or both (elastic net). Furthermore, if you do L1 regularization, you can also use it as another form of feature selection, because it drives the coefficients of uninformative variables to exactly zero.
If you are trying to reduce the number of variables because you want to reduce computational complexity to reduce training time or use less memory then you can do feature selection or dimensionality reduction. If you do feature selection you are picking the features that have the best predictive power. There are different ways to do feature selection (ex: forward selection, backward selection). You can also do dimensionality reduction (ex: PCA) which will create "new features" AKA principal components out of the original features which are linear combinations of the original features. These principal components do not have anything to do with the outcome variable (unlike feature selection and regularization). The top principal components are the combos of the original features that provide the most information.
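As an illustrative sketch of both routes in scikit-learn (the sizes are shrunk here for speed; with 4300 attributes, forward selection down to 50 will be slow but feasible):
```
from sklearn.datasets import make_regression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LassoCV, LinearRegression

X, y = make_regression(n_samples=500, n_features=100, n_informative=20, random_state=0)

# Forward selection: greedily add the feature that most improves CV R^2, stop at 10
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=10,
                                direction="forward", cv=3)
sfs.fit(X, y)
print("selected features:", sfs.get_support().sum())

# L1 regularization: coefficients of unhelpful features shrink to exactly zero
lasso = LassoCV(cv=3).fit(X, y)
print("non-zero coefficients:", (lasso.coef_ != 0).sum())
```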
|
Predictive power of a dataset
|
Usually predictive power refers to the model, rather than the data. I've occasionally seen some people use it in the way that the author of your book uses it (see [this for example](https://ieeexplore.ieee.org/document/5616710)).
In the context of your book, yes, predictive power refers to whether input can be mapped to target output $X\rightarrow Y$. We can infer a dataset's "predictive power" by trying to model it (e.g. linear regression). If the model performs poorly, then there are two possibilities as the book says: either the dataset is not predictive (i.e. it does not offer a clean mapping from input to target output) or the methods we are using are unsuitable to model the mapping.
---
Some examples of both situations:
- If you generated random data for $X$ and $Y$, the resulting dataset would (probably) have no predictive power as no model could reasonably generalize the mapping $X\rightarrow Y$.
- If you have a nonlinear mapping, then linear regression would not fit it well. For example, if our dataset was such that $y_1$ is mapped to by $||\vec{x}||<\alpha $ and all other inputs map to $y_2$, then our dataset is extremely predictive, but our linear regression model cannot fit it (since the mapping is nonlinear). In this toy example, it's easy to see the predictive power of the dataset, particularly if the input is in 2D/3D since we could just plot it. However, manually observing such trends in highly dimensional space using actual data can be very difficult, hence we use the tools that you are learning to help interpret the data. Also, when there's nonlinearity, it's difficult to statistically evaluate the dataset itself. Variables with linear relationships are simple to correlate (e.g. Pearson's correlation coefficient) but nonlinearities can make correlation difficult. I assume that this is why your book defers to vague terminology as it's probably for pedagogical, rather than pedantic, purposes. After all, it gets the point across without needing to discuss the ongoing research into quantifying nonlinear correlations.
|
115010
|
1
|
115025
| null |
0
|
31
|
In SVM classification, we use planes to separate the labeled points when the dataset has 3 input features. I am describing a toy dataset with 3 input features as follows:
```
Study_time rest_time pass_time label
40 10 5 Good
38 12 3 Good
20 8 10 bad
15 12 2 bad
```
In this dataset the three input features are study_time, rest_time and pass_time. We need to define a plane to separate the labels. I went through various course materials on support vector machines, and every one of them said that we need to define points in 3D space if the number of features is 3.
I know a point means a dot which has an x-coordinate and a y-coordinate.
In the toy dataset, every instance has 3 values: one for study_time, one for rest_time and one for pass_time. The 1st instance can be written as (40, 10, 5).
If we consider the points with respect to the toy dataset, which are those points? What are their coordinates?
Thank you.
|
Finding points in 3D space - SVM
|
CC BY-SA 4.0
| null |
2022-10-08T05:56:27.847
|
2022-10-08T18:44:46.430
| null | null |
63745
|
[
"classification",
"data",
"svm"
] |
By points they mean x,y and z components. Not just an x and a y.
In the same way that you need two points to uniquely define a line, you need 3 "points" to define a plane.
The points in the dataset are simply the numerical values for each row: (40,10,5),..., (15,12,2).
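If it helps to see it in code, here is a small sketch that feeds the four toy rows as 3D points to a linear SVM; the use of scikit-learn is just an illustration, the question itself is library-agnostic:
```
import numpy as np
from sklearn.svm import SVC

X = np.array([[40, 10, 5],   # (study_time, rest_time, pass_time)
              [38, 12, 3],
              [20,  8, 10],
              [15, 12, 2]])
y = np.array(["Good", "Good", "bad", "bad"])

clf = SVC(kernel="linear").fit(X, y)
print(clf.coef_, clf.intercept_)   # normal vector and intercept of the separating plane
print(clf.predict([[35, 11, 4]]))  # classify a new 3D point
```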
|
Support Vectors of SVM
|
Yes, they are support vectors. This is because they contribute (or serve as support) in the computation of the hyperplane that separates the positive and negative regions. This hyperplane is given by:
$$\mathbf{w}^T x + b = 0 $$
Where $x$ is our $\text{n-dimensional}$ input vector that contains the $n$ features of our problem (the input samples during the learning process), and $b$ and $\mathbf{w} = (w_1,w_2,...,w_n)^T$ are the parameters that are optimized. More concretely, $b$ is the intercept of the hyperplane and $(w_1,w_2,...,w_n)^T$ is its normal vector (justification in [Mathworld](https://mathworld.wolfram.com/Plane.html)).
But why the misclassified points contribute?
More detailed justifications can be found in the [Andrew Ng notes about SVM](http://cs229.stanford.edu/notes/cs229-notes3.pdf) which I recommend.
To understand why they contribute, we need to have a look at the cost function, $J$, that is used when the data is not linearly separable, like the case of the question (this cost function is also used to prevent the influence of outliers):
$$J = \frac{1}{2}\Vert \mathbf{w}\Vert^2 + C \sum_{i=1}^m \xi_i$$
$$\text{subject to}\begin{cases}
y^{(i)}(\mathbf{w}^Tx^{(i)}+b)\geq 1-\xi_i, \,\,\,\,\,\,\,\,\,\, i = 1,...,m\\
\xi_i \geq 0
\end{cases}$$
Where $\xi_i$ is the slack of the input sample $x_i$, being $\xi_i = 0$ only when the input sample $x_i$ is correctly classified and presents a functional margin $\geq1$.
In order to solve this in a efficient way, it can be proved that by applying the Lagrange duality to the problem presented before (minimizing $J$ w.r.t. $\mathbf{w}$ and $b$), we end up with a equivalent problem of maximizing the next function w.r.t. $\alpha$:
$$ \max_{\alpha}\,\,\,\, \sum_{i=1}^m\alpha_i - \frac{1}{2}\sum_{j=1}^m\sum_{i=1}^my^{(i)}y^{(j)}\alpha_i\alpha_jx_ix_j$$
$$\text{subject to}\begin{cases}
0\leq \alpha_i\leq C, \,\,\,\,\,\,\,\,\,\, i = 1,...,m\\
\sum_{i=1}^m\alpha_iy^{(i)}=0
\end{cases}$$
Where each $\alpha_i$ is the Lagrange multiplier associated with the input sample $x_i$.
Furthermore, it can be proved that, once we have determined the optimal values of the Lagrange multipliers, the normal vector of the hyperplane can be computed by:
$$ \mathbf{w}=\sum_i^m \alpha_i y^{(i)}x^{(i)}$$
Now we can see that only the vectors with an associated value of $\alpha_i\neq0$ will contribute to the computation of the hyperplane. This vectors are support vectors.
Now the question is: When $\alpha_i \neq 0$?
As explained in the notes linked above, the values of $\alpha_i \neq 0$ are derived from the KKT conditions which are needed to be satisfied in order to find the values of $\alpha_i$ that minimize our cost function. These are:
- $\alpha_i = 0 \implies y^{(i)}(\mathbf{w}^Tx^{(i)}+b)\geq 1$
- $\alpha_i = C \implies y^{(i)}(\mathbf{w}^Tx^{(i)}+b)\leq 1$
- $0<\alpha_i < C \implies y^{(i)}(\mathbf{w}^Tx^{(i)}+b)= 1$
So, in conclusion the vectors (samples) that lie on the margins (condition number 3 and condition number 2 when $=1$), the vectors that are correctly classified but lie between the margins and the hyperplane (condition number 2 when $<1$) and the vectors that are misclassified (also condition number 2 when $<1$) are the ones with $\alpha_i \neq0$ and therefore contribute to the computation of the hyperplane $\rightarrow$ they are support vectors.
|
115013
|
1
|
115016
| null |
1
|
44
|
I am trying to build N gram models to predict the missing prepositions of a text corpus.
I would want to have some guidance on if I'm understanding and doing things correctly.
So the N gram model is basically just a collection of posterior probabilities? Pr(this word | previous words)?
Then how is this machine learning, I wonder? Since we would get a deterministic set of probabilities based on the frequencies of the word combinations from the training set, there don't seem to be any parameters to learn, except in interpolation (like the weights of each gram in their weighted sum).
As for the actual prediction of preposition, after getting a set of the posterior probabilities of all the words in the vocabulary, do I simply only compare the posterior probabilities of the few known prepositions and find the argmax as the prediction?
Appreciate any help, thanks!
|
N-gram language model for preposition predition
|
CC BY-SA 4.0
| null |
2022-10-08T08:31:41.723
|
2022-10-08T11:22:03.040
| null | null |
141317
|
[
"machine-learning",
"nlp",
"ngrams"
] |
Your understanding is mostly correct, but don't forget that you can not only take into account the previous tokens, you can also consider:
- p(this word | next words)
- p(this word | N previous word,N next words)
- etc.
Often a combination of these probabilities offers optimal results.
>
how is this machine learning I wonder?
Well, ML is very often a deterministic calculation, it doesn't have to be a complex approximation problem: decision trees, Naive Bayes, linear regression...
>
do I simply only compare the posterior probabilities of the few known prepositions and find the argmax as the prediction?
Yes, that would be the idea. Of course if you use multiple models as suggested above there must be some kind of combination additionally.
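A minimal sketch of the counting-and-argmax idea with a toy corpus (the corpus, the preposition list and the back-off rule are illustrative placeholders):
```
from collections import Counter, defaultdict

PREPOSITIONS = {"in", "on", "at", "of", "with"}
corpus = [
    "she sat on the chair", "he lives in the city", "we met at the station",
    "a cup of tea", "he plays with the dog", "she waited at the station",
]

# count "previous word -> preposition" bigrams
bigram_counts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, cur in zip(words, words[1:]):
        if cur in PREPOSITIONS:
            bigram_counts[prev][cur] += 1

def predict_preposition(prev_word):
    candidates = bigram_counts.get(prev_word)
    if not candidates:  # unseen context: back off to the overall preposition counts
        candidates = sum(bigram_counts.values(), Counter())
    return candidates.most_common(1)[0][0]  # argmax over the known prepositions

print(predict_preposition("met"))    # 'at'
print(predict_preposition("lives"))  # 'in'
```
The same structure extends to using the next words, or both sides, as conditioning context.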
|
spaCy - Text Preprocessing - Keeping "Pronouns" in text
|
What you are doing seems fine in terms of preprocessing. Removing less informative words like stopwords, punctuation etc. is a very common technique. Here are some of my notes:
- probably best for speed to load your "nlp" object outside of the function call
- "-PRON-" must be the lemma for "you're" in this case. So you shouldn't remove it as it is following the correct logic. Though, you should investigate whether your model improves if you change your preprocessing logic
- If you use SpaCy to create your classification model, I'd recommend experimenting with training a model without any preprocessing. As far as I know, SpaCy handles some of this implicitly through feature engineering.
|
115020
|
1
|
115023
| null |
2
|
83
|
I built a predictive model using an elastic net regression model with sklearn. The model R2 = 0.015. I know the SHAP method can provide the importance of the features. However, how do I calculate the significance of each feature? (I want to know which features are significant, or which features successfully predict the response, so that I can tell my story in the paper and discuss these features in detail.)
As far as I know, the R package "eNetXplorer" can do this via a permutation test, but I have built my elastic net model via scikit-learn. Is there a similar package in the Python environment?
Any help is greatly appreciated!
|
How to calculate the significance of each feature?
|
CC BY-SA 4.0
| null |
2022-10-08T14:57:52.110
|
2022-10-09T03:36:44.207
|
2022-10-09T03:36:44.207
|
141014
|
141014
|
[
"machine-learning",
"scikit-learn",
"elastic-net"
] |
Assuming you are talking about getting a feature impact ranking (i.e. sorting features by their relevance to the model's predictions), I would go for a permutation importance methodology. It is a model-agnostic approach which you can use given an already fitted model and an evaluation dataset.
The idea is to attribute the largest drops in model performance, observed when a feature's values are shuffled, to the most important features. The process could be defined by the following steps:
- Make model predictions on a sample of records
- Select a column, shuffle its values and predict again with the model
Get the drop (if any) in model performance on this new shuffled dataset vs. the initial one
- Average this difference across all predictions
- Repeat the steps above for all features
Scikit-learn provides something like this with its [permutation importance functionality](https://scikit-learn.org/stable/modules/generated/sklearn.inspection.permutation_importance.html#sklearn.inspection.permutation_importance), I hope it helps. Other source of info [here](https://app.eu.datarobot.com/docs/modeling/analyze-models/understand/feature-impact.html)
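A minimal sketch of that scikit-learn functionality applied to an elastic net model (the synthetic data and the alpha value are illustrative):
```
from sklearn.datasets import make_regression
from sklearn.inspection import permutation_importance
from sklearn.linear_model import ElasticNet
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=8, n_informative=3, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

model = ElasticNet(alpha=0.1).fit(X_train, y_train)

result = permutation_importance(model, X_val, y_val, n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} +/- {result.importances_std[i]:.3f}")
```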
|
How to get significance level for ranked features?
|
1. univariate feature importance (that is c) in your list)
You are correct. Univariate statistics to estimate feature importance does not capture feature interactions. But they are fast and simple.
2. model-based feature importance (that are a) and b) in your list)
On the other hand model-based feature importance estimates can capture interactions as long as the model is capable of doing so (see "Introduction to Machine Learning with Python"; Mueller, Guido; 2017; p. 238/239). Which is not the case for linear regression.
For model-based feature importance estimates using trees there are ways to derive p-values. And at least R does have some implementations for that. Have a look at section "2.5 Importance testing procedures" in this paper [The revival of the Gini importance?](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6198850/pdf/bty373.pdf).
|
115029
|
1
|
115030
| null |
3
|
114
|
I have been using Weka to train a model for a few days. I know Weka uses Java code to implement its classifiers. I also heard that Weka has some GitHub pages describing the Java code for the classifiers. I would like to know which Java code implements SVM in Weka. I found a few webpages describing Java code for SVM classifiers for Weka, but I cannot work out which one is the official page.
Providing a link to the official SVM source page would be very helpful.
Thank you.
|
Official page of Weka for SVM java code
|
CC BY-SA 4.0
| null |
2022-10-08T21:18:32.727
|
2022-10-08T21:53:29.923
| null | null |
63745
|
[
"classification",
"svm",
"weka"
] |
The official Weka source code is [stored in a local Gitlab git repository](https://waikato.github.io/weka-wiki/git/) (not on Github).
Note that there are two versions of SVM commonly used with Weka:
- The SMO classifier, source code here.
- The LibSVM wrapper: an external library that can be used in Weka (i.e. the source code is not part of Weka).
|
What does the “numDecimalPlaces” in J48 classifier do in WEKA?
|
The short answer is: nothing. The `numDecimalPlaces` option, like `debug` and `doNotCheckCapabilities`, is part of the base class that all classifiers in WEKA inherit from. However, it is up to the implementation of the actual classifier to use this value to change the way the model is printed. There are no mentions of `numDecimalPlaces` anywhere in the source code for J48, which suggests to me that it does nothing in this case.
It looks like the only classifier in WEKA that does anything with the value entered into `numDecimalPlaces` is `LinearRegression`.
|
115033
|
1
|
115038
| null |
1
|
94
|
I'm looking for resources (books/articles/whatever) that provide a mathematical formalization of NLP and statistical language theory. By that I mean a clear exposition of the subject in terms of probability spaces (measure spaces) and so on. For example, many NLP books (like Manning's) use n-gram models which, as I see it, may be modelled as Markov processes with word states, but these books do not state explicitly how the probability space for the process is constructed (I guess there is something related to probabilities on formal languages?). I need such clear expositions. Thanks in advance.
|
Mathematically rigorous NLP
|
CC BY-SA 4.0
| null |
2022-10-09T09:28:57.473
|
2022-10-09T15:53:37.540
| null | null |
140787
|
[
"nlp",
"probability"
] |
My personal recommendation would be [Introduction to Natural Language Processing](https://mitpress.mit.edu/9780262042840/introduction-to-natural-language-processing/) by Jacob Eisenstein.
In this book you should find sufficient mathematical formalization/rigor. This book is also, in my opinion, a touchstone among introductory NLP books.
|
Good NLP model for computationally cheap predictions that can reasonably approximate language model given large training data set
|
There are several smaller BERT models, including [bert-tiny](https://github.com/google-research/bert). Bert-tiny is a distillation of the full BERT model.
|
115077
|
1
|
115088
| null |
0
|
120
|
I have already reduced my data with PCA from 9 features to 3. If I get a new row of real data that I want to use with a pre-trained model (.h5), can I transform that single row from the original features to the 3 PCA features before testing it with the model?
```
import numpy
from pandas import read_csv
from sklearn.decomposition import PCA
# load data
url = "https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.csv"
names = ['preg', 'plas', 'pres', 'skin', 'test', 'mass', 'pedi', 'age', 'class']
dataframe = read_csv(url, names=names)
array = dataframe.values
X = array[:,0:8]
Y = array[:,8]
# feature extraction
pca = PCA(n_components=3)
fit = pca.fit(X)
# summarize components
print("Explained Variance: %s" % fit.explained_variance_ratio_)
print(fit.components_)
```
|
How to use new data with Principal Component Analysis (PCA)
|
CC BY-SA 4.0
| null |
2022-10-10T15:46:49.430
|
2022-10-10T20:47:44.093
| null | null |
43613
|
[
"machine-learning",
"python",
"pca"
] |
Simply use `pca.transform(test_data)`. That is:
```
test = X[22].reshape(1, -1)
pca.transform(test)
array([[-72.73967494, -86.24860793, -5.94958303]])
```
I used a random index of your own train set to illustrate the use case but you can input any array of size 8 here.
>
Can I transform a single row from the original features to the 3 PCA features for testing with the model?
You have to input 8 features (9 features except your target value).
|
Data analysis PCA
|
PCA is [not recommended](https://www.researchgate.net/post/Should_I_use_PCA_with_categorical_data) for categorical features. There are equivalent algorithms for categorical features like [CATPCA](http://amse-conference.eu/history/amse2015/doc/Sulc_Rezankova.pdf) and MCA.
[](https://i.stack.imgur.com/XEaVY.png)
|
115089
|
1
|
115096
| null |
0
|
83
|
I am trying to train a machine learning model to help me classify some real data. Since the acquisition and labeling of real data can be very expensive, the training data is generated by simulation. However, the trained model doesn't perform very well on real data; my suspicion is that the simulation is not a 100% accurate representation of the real data. Therefore, I am wondering whether the performance would improve if I trained the model on a mixture of simulated and real data (say 20% real data). I would greatly appreciate it if you could either answer the question or point me to the right reference!
|
Does combining real data with simulation data improve the performance of machine learning?
|
CC BY-SA 4.0
| null |
2022-10-10T21:16:40.733
|
2022-10-11T06:37:38.403
| null | null |
141397
|
[
"machine-learning",
"machine-learning-model"
] |
I suggest that you add an extra input binary variable indicating whether the data is simulated or not. For the simulated data, you would set it to 1, while for real data you would set it to 0. This may help the model profit from simulated data while still being able to do well in real data.
This advice is inspired by something we use in machine translation called "[tagged back-translation](https://aclanthology.org/W19-5206/)". When we are training a translation system from language A to language B, if we have a small training dataset A → B but we have a lot of monolingual data in language B, we first train a translation system from B to A and then use it with the monolingual B data to obtain a synthetic dataset used to train our final A → B system. The final system, however, performs better if we indicate as part of the input if the data is real or synthetic.
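A small pandas sketch of the suggested indicator variable (column and frame names are illustrative):
```
import pandas as pd

sim_df = pd.DataFrame({"x1": [0.1, 0.2], "x2": [1.0, 0.9], "label": [0, 1]})
real_df = pd.DataFrame({"x1": [0.15], "x2": [0.95], "label": [1]})

sim_df["is_simulated"] = 1   # simulated rows
real_df["is_simulated"] = 0  # real rows

train_df = pd.concat([sim_df, real_df], ignore_index=True)
print(train_df)
# At inference time on real data, set is_simulated = 0.
```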
|
Using simulations to train ML algorithms
|
>
Is it possible to use data generated by a huge number of simulations to train a classification algorithm to perform this detection online?
Yes, it is always possible to train a classification algorithm when you have labeled [i.i.d.](https://en.wikipedia.org/wiki/Independent_and_identically_distributed_random_variables) training data, and there is no hard reason why you cannot use a simulator to generate that.
Whether or not such a trained model is fit for purpose is hard to say in advance of trying it.
Using a simulation as your data source has some benefits:
- Generating more training and test data is straightforward.
- You will automatically have high quality ground truth labels (assuming your goal is to match the simulation).
- If you find a problem with certain parameter values, you can target them when collecting more training data.
Just as with data taken from real world measurements, you will need to test your results to get a sense of how accurate your model is.
>
What are the considerations when using simulated data to train an algorithm that will then be used online with real data (expect from the obvious that the simulation needs to be very very accurate)?
- Your model is a function approximator. At best it will match the output of the simulator. In practice it will usually fall short of it by some amount. You will have to measure this difference by testing the model, and decide whether the cost of occasional false negative or false positive is outweighed by the performance improvement.
- Statistical machine learning models perform best when interpolating between data points, and often perform badly at extrapolating. So when you say that inputs can vary infinitely, hopefully that is within some constrained parameter space of real values, as opposed to getting inputs that are completely different from anything you have considered before - the simulation would cope with such inputs, but a statistics-based function approximator most likely would not.
- If your simulation has areas where the class switches rapidly with small changes in parameter values, then you will need to sample densely in those areas.
- If your simulation produces near chaotic behaviour in any region (class value varies a lot and is highly sensitive to small changes in value of one or more parameters), then this is something that is very hard to approximate.
- If you have some natural scale factor, dimensionless number or other easy to compute summary of behaviour in your physical system, it may be worth using it as an engineered feature instead of getting the machine learning code to figure that out statistically. For instance, in fluid dynamics, the Reynolds number can characterise flow, and could be useful feature for neural network predicting vortex shedding.
>
Any references to such examples?
The examples I have found here are about are in renderings of fluid simulations and other complex physical systems where a full simulation can be approximated and they all use neural networks to achieve a speed improvement over full simulation.
- Accelerating Eulerian Fluid Simulation With Convolutional Networks - I saw a video about this on YouTube's Two Minute Papers channel.
- Using neural networks in weather prediction ensembles to improve performance
- Fast Neural Network Emulation of Dynamical Systems for Computer Animation
However, I don't think any of these are classifiers.
|
115092
|
1
|
115129
| null |
1
|
32
|
Suppose I have a list of weighted keywords/phrases, such as "solar panel", "rooftop", etc. The weights are in [0,1] with higher weights indicating a stronger preference for specific keywords, so "solar panel" may have a weighting of 0.3 and "rooftop" may have a weighting of 0.2, for example. The sum of keyword weights is 1.
For each keyword/phrase, I additionally have a number of contextual sentences which are also weighted and carry a positive, negative, or neutral sentiment/connotation. For example, one contextual sentence related to the "solar panel" phrase might be "good for the environment" which is labelled with a positive sentiment and carries a weight of 0.2. The sum of weights for each keyword's contextual sentences is 1, so the sum of weights for all contextual sentences across all keywords is N, where N is the number of individual keywords.
Finally, I also have weighted linkages in [0,1] between keywords/phrases which, again, sum to 1. For example, the directed linkage from "solar panel" to "rooftop" may have a weight of 0.2 while the directed linkage from "rooftop" to "solar panel" may have a weight of 0.4.
I would like to use these weighted keywords, phrases, contextual sentiment-labelled sentences and linkages to create a summary in natural language. I realise that I'm working in reverse from the typical text summarisation objective, but I believe that the richness of my data should make the task a little easier.
How should I approach it? Should I first use a model to summarise the text contained within each of the contextual sentences before attempting to extract more basic keywords that can be used to generate summary text? How should I process the data? Is it worth pursuing a two-step approach, where a basic model summarises the keywords and contextual sentences in basic language before a secondary model transforms it to richer, more natural language?
I would be very grateful for any guidance or recommendations.
Edit: I'm very new to NLP, so I apologise for my lack of terminology and mathematical formalism.
|
Abstracted text summarisation and generation from weighted keywords
|
CC BY-SA 4.0
| null |
2022-10-11T00:57:03.837
|
2022-10-11T14:02:19.193
| null | null |
141409
|
[
"nlp",
"text-mining",
"sentiment-analysis",
"text-generation"
] |
If you have data with a good score system, I would start with something simple, because using a neural network like Bert might be complex to set up.
Something simple is to take the scores and build a phrase with meaning, for instance: "solar panel" + "rooftop" + "environment-friendly" = "Rooftop solar panel, with a low environmental impact (less than 8g of carbon/year)".
You can achieve this using if/then rules and some basic equations if there are numerical values. For example, a weight of 0.2 for the environmental impact could map to something like (1 - 0.2) * 10 = 8g.
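A minimal sketch of such a rule-based step (every keyword, weight, and template below is made up for illustration):
```
# Toy rule-based summariser: weighted keywords plus sentiment-labelled
# contexts are mapped to template phrases. All values are illustrative.
keywords = {"solar panel": 0.3, "rooftop": 0.2}
contexts = {"solar panel": [("good for the environment", "positive", 0.2)]}

def impact_grams(weight):
    # toy equation from the text: lower weight -> lower impact
    return (1 - weight) * 10

def summarise(keywords, contexts):
    parts = []
    for kw, w in sorted(keywords.items(), key=lambda kv: -kv[1]):
        positives = [cw for _, sent, cw in contexts.get(kw, []) if sent == "positive"]
        if positives:
            parts.append(f"{kw} (environmentally friendly, roughly {impact_grams(positives[0]):.0f}g of carbon/year)")
        else:
            parts.append(kw)
    return "Summary: " + ", ".join(parts) + "."

print(summarise(keywords, contexts))
```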
Then you can improve results with a neural network like BERT, but you would need enough data to train it, using different inputs
("0.2,0.6,0.1") and their associated outputs (-> "Rooftop solar panel, with a low environmental impact (less than 8g of carbon/year)"), and this training data should be representative enough of the most common use cases.
See:
[https://chriskhanhtran.github.io/posts/extractive-summarization-with-bert/](https://chriskhanhtran.github.io/posts/extractive-summarization-with-bert/)
|
Text summarization with limited number of words
|
You sure can,
for example, in [latent semantic analysis](https://en.wikipedia.org/wiki/Latent_semantic_analysis) you can fix the number of topics (which is actually the size of the decomposition matrix) beforehand.
|
115094
|
1
|
115099
| null |
-1
|
163
|
Is there any machine learning algorithm developed to convert an English active voice sentence into a passive voice sentence? What datasets are available for that purpose? And if there is any available source code related to this research idea, please mention it too.
|
Convert English active voice sentences into passive voice sentences using Machine learning
|
CC BY-SA 4.0
| null |
2022-10-11T06:22:39.070
|
2022-10-12T13:38:52.990
|
2022-10-12T13:38:52.990
|
141415
|
141415
|
[
"machine-learning",
"python",
"nlp"
] |
There are various paraphrasing tools available for this conversion. For datasets, you can search on Kaggle and IEEE DataPort.
I hope you find some of these useful.
|
NLP techniques for converting from a direct speech to a reported speech
|
What do you mean by reported speech? It might be easier to help out if you could elaborate on the end goal. What are you trying to do?
EDIT
I see so what you are looking to do is to translate between active and passive voice. When it comes to techniques to do this I found several options:
- You can train a long short-term memory (LSTM) recurrent neural network (RNN) model to detect whether a sentence is in the active or passive voice. From there you can then work on the translation part. Example for the model here and translation here.
- It looks like you will need to familiarize yourself with the Spacy python library. An example of how this can be used in your case can be found here.
- Another, though more unorthodox approach is through the use of the program language Prolog. A paper on this subject can be found here with the accompanying code here.
- Here is a short primer on using Prolog for your specific task.
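As a small illustration of the spaCy route mentioned above (not taken from the linked examples; it assumes the `en_core_web_sm` model is installed), detecting whether a sentence is passive can be done from the dependency parse:
```
# Detect passive voice with spaCy's dependency labels.
# Assumes `python -m spacy download en_core_web_sm` has been run.
import spacy

nlp = spacy.load("en_core_web_sm")

def is_passive(sentence: str) -> bool:
    doc = nlp(sentence)
    # passive subjects / auxiliaries are tagged explicitly by the parser
    return any(tok.dep_ in ("nsubjpass", "auxpass") for tok in doc)

print(is_passive("The cake was eaten by the dog."))  # expected: True
print(is_passive("The dog ate the cake."))           # expected: False
```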
|
115172
|
1
|
115180
| null |
1
|
127
|
I have a `CSV` file with salary information and other columns.
[](https://i.stack.imgur.com/TTzcv.png)
I am trying to transform some of these columns into proper values for a `LinearRegression` and an `SGDRegressor` (or some other model), because I don't think that the `LinearRegression` in `sklearn` can handle the data as is.
Data:
- 607 records
- Numerical columns: year, salary, salary in USD
- Categorical columns: experience, type, residence, currency, remote work, company location, and company size.
- Target: salary in USD
Encoding:
```
# Import necessary encoder
from sklearn.preprocessing import OneHotEncoder
# Encoding of categorical data
encoder = OneHotEncoder(sparse=False)
# Extract categorical columns
columns = data[['Experience', 'Type', 'Residence', 'Remote work', 'Company location', 'Company size']]
# Fit and apply the encoder to obtain the one-hot matrix
encoded = encoder.fit_transform(columns)
```
Questions:
- How to group any data within the categories (to avoid duplicates)?
- Is OneHotEncoder the recommended way of doing this?
|
Encoding for Linear Regression
|
CC BY-SA 4.0
| null |
2022-10-12T18:20:20.070
|
2022-10-12T22:44:24.637
|
2022-10-12T19:50:04.283
|
141487
|
141487
|
[
"machine-learning",
"regression",
"categorical-data",
"encoding",
"categorical-encoding"
] |
Comments:
- You should not group any data even if there are duplicates, because this would distort the distribution of the values (features and target).
- OneHotEncoder should be used on the categorical features only. Even with those, mind that values which are too rare should usually be removed or replaced in order to avoid overfitting.
- Some algorithms work better with numerical features scaled.
- Linear regression is unlikely to work well with complex data like this, in my opinion. Personally I like to try decision tree regression for this kind of mixed dataset (a minimal sketch follows below).
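A minimal sketch of that last suggestion, assuming the column names from the question (the hyperparameters are arbitrary):
```
# Decision tree regression on mixed numerical/categorical data (sketch).
from sklearn.compose import ColumnTransformer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.tree import DecisionTreeRegressor

num_cols = ['Year']
cat_cols = ['Experience', 'Type', 'Remote work', 'Residence', 'Company size']

pre = ColumnTransformer([
    ('num', StandardScaler(), num_cols),
    ('cat', OneHotEncoder(handle_unknown='ignore'), cat_cols),
])

tree = Pipeline([('pre', pre),
                 ('model', DecisionTreeRegressor(max_depth=6, random_state=0))])

# X = data[num_cols + cat_cols]; y = data['Salary in USD']
# print(cross_val_score(tree, X, y, cv=5, scoring='neg_mean_absolute_error').mean())
```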
|
Encoding for classifiers
|
You will have to use a mix of text processing and one hot encoding. Text column should not be treated as one-hot encoded since it will try to create one new variable for every unique sentence in the dataset, which will be a lot (and not very helpful from learning). Text vectorizer will summarize text column based on type of words/tokens that appear in it.
So you should use a [text vectorizer](https://scikit-learn.org/stable/modules/generated/sklearn.feature_extraction.text.TfidfVectorizer.html) for processing only the text column first. This will give you one data-frame (say A). This data-frame will have columns corresponding to tokens/words in the dataset. So if text vectorizer picks up 100 unique words then you will have a 1000x100 size data-frame. Note that these 100 columns have been generated only by text vectorizer.
For symbols and notes, you can use one-hot encoding, which will get you another data-frame (say B). Then you should join A & B on common key to get your final data-frame if input. The common key here will be row ID (though read the following comment on aggregating data at user level).
It is not clear if the user name (Account) column unique in the data? If there are 1000 rows but only 400 users, that means there can be more than 1 row per user. In that case, you can consider aggregating the data at the user level (for text column, you can simply concat all strings for same user).
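A rough sketch of the A + B join described above (the column names and the tiny example frame are assumptions, and `get_feature_names_out` requires a reasonably recent scikit-learn):
```
# Combine a text vectorizer (A) with one-hot encoded columns (B) by row index.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.preprocessing import OneHotEncoder

df = pd.DataFrame({
    'Account': ['u1', 'u2'],
    'Text': ['invoice for solar panel install', 'rooftop repair note'],
    'Symbol': ['A', 'B'],
})

tfidf = TfidfVectorizer()
A = pd.DataFrame(tfidf.fit_transform(df['Text']).toarray(),
                 columns=tfidf.get_feature_names_out(), index=df.index)

ohe = OneHotEncoder()
B = pd.DataFrame(ohe.fit_transform(df[['Symbol']]).toarray(),
                 columns=ohe.get_feature_names_out(['Symbol']), index=df.index)

features = A.join(B)   # the row index acts as the common key
print(features.shape)
```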
|
115188
|
1
|
115189
| null |
0
|
31
|
I'm a newbie to tensorflow / keras and I am currently working my way through Deep Learning with Python (2nd edition) by Francois Chollet.
I understand the basics of Computer vision and the MNIST examples but I'm not really interested in computer vision.
My question is do you think it would be safe to skip the computer vision chapters and just focus on my interest(s) in regression machine learning + time series forecasting?
Thank you
|
Tensorflow - do I need to learn computer vision before linear (timeseries) regression?
|
CC BY-SA 4.0
| null |
2022-10-13T06:29:36.733
|
2022-10-13T12:33:22.957
| null | null |
141499
|
[
"deep-learning",
"tensorflow",
"time-series",
"linear-regression"
] |
Computer vision is not needed to learn time series forecasting.
Also, since you mentioned machine learning and regression, deep learning itself is optional: you can build such models without it.
If you want to use an LSTM for time series, then you do need deep learning.
|
Microsoft custom vision vs Tensorflow model?
|
There are many differences as these are inherently complete different products with different goals.
- customvision
[+] cloud deployment comes out of the box (including a rest API)
[+] labelling tool to add data and label them
[-] you have no control over the learning algorithm
[-] difficult to run your model locally/completely for free
- tensorflow (or any framework really)
[-] you need to deploy your model yourself
[-] you need to manage your data yourself
[+] you have full control over your network and how you train it
[+] you can embed your model into your code, run it locally, whatever you feel like
|
115231
|
1
|
115249
| null |
1
|
33
|
I have a corpus of about one billion sentences, in which I am attempting to resolve NER conflicts (when two terms overlap in a sentence). My initial plan is to have an SME label the correct tag in each of a large number of conflicts, then use those labels to train either an NER model or a binary classification model (like GAN-ALBERT), to identify the correct choice when two NER tags conflict.
The problem is, about 5% of these sentences contain conflicts, and I don't think that I have the computational resources to run BERT or ALBERT prediction on 50 million sentences in a reasonable amount of time. So, my hope is to use the ALBERT model to generate a large number of labels (perhaps one million) for a computationally cheaper model.
So, I'm wondering if there is a model, 10 to 100 times cheaper at prediction than BERT, that could be trained to do a reasonable job of replicating the ALBERT model's performance, given a large amount of training data generated by said model.
|
Good NLP model for computationally cheap predictions that can reasonably approximate language model given large training data set
|
CC BY-SA 4.0
| null |
2022-10-14T15:01:01.200
|
2022-10-15T13:04:34.963
| null | null |
74497
|
[
"machine-learning",
"nlp",
"bert"
] |
There are several smaller BERT models, including [bert-tiny](https://github.com/google-research/bert). Bert-tiny is a distillation of the full BERT model.
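As a sketch of how one might load such a checkpoint with the Hugging Face transformers library (the `prajjwal1/bert-tiny` model name is a community-hosted port and should be treated as an assumption):
```
# Load a very small BERT for token classification (NER-style) - sketch only.
from transformers import AutoTokenizer, AutoModelForTokenClassification

model_name = "prajjwal1/bert-tiny"   # assumed checkpoint name, a few million parameters
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("ALBERT said the plant is in Springfield.", return_tensors="pt")
logits = model(**inputs).logits      # shape: (1, sequence_length, num_labels)
print(logits.shape)
```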
|
Mathematically rigorous NLP
|
My personal recommendation would be [Introduction to Natural Language Processing](https://mitpress.mit.edu/9780262042840/introduction-to-natural-language-processing/) by Jacob Eisenstein.
In this book you should find sufficient mathematical formalization/rigor. This books is also, in my opinion, a touchstone of many introductory NLP books.
|
115242
|
1
|
115245
| null |
0
|
66
|
I am trying to make predictions (using Weka) on a tabular dataset. It is a `categorical dataset` which is encoded by `label encoder`.
I got a good result for SVM and Logistic Regression, namely the accuracy is around 85%.
The dataset is high-dimensional and I would like to improve my accuracy.
So, I am thinking about the feature selection method. I found different feature selection techniques, such as `CfsSubsetEval`, `Classifier Attribute eval`, `classifier subset eval`, `Cv attribute eval`, `Gain ratio attribute eval`, `Info gain attribute eval`, `OneRattribute eval`, `principal component`, `relief f attribute eval`, `Symmetric uncertainty`, `Wrapper subset eval`.
I would like to know which one would be the best for the dataset that shows good accuracy with Logistic Regression or SVM?
|
Select the best feature selection method for classification
|
CC BY-SA 4.0
| null |
2022-10-15T04:54:10.303
|
2022-10-15T16:06:01.337
|
2022-10-15T16:06:01.337
|
79520
|
63745
|
[
"classification",
"feature-selection",
"weka"
] |
I don't think that there is a single feature selection method that works best with a specific algorithm; what these methods do is select the best features based on various criteria. These features can be useful or not to the algorithm that does the classification, regardless of what this algorithm is.
Without knowing anything about your data or their distribution, you can simply try a lot of those methods to see which produces the best results, and see if these generalize with the test set.
Also, SVM itself can be used for feature selection, since it finds an optimal coefficient for each feature. I don't know if you can access those coefficients through Weka (sorry, not familiar with the software), but if you can, they can be an indicator of how important each feature is.
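Weka aside, here is a rough scikit-learn illustration of the "try several selection methods and compare cross-validated results" idea (the choice of selectors and of `k` is arbitrary):
```
# Compare filter-based feature selection methods by cross-validating the
# downstream classifier on the selected features (sketch).
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

selectors = {
    'chi2': SelectKBest(chi2, k=20),
    'mutual_info': SelectKBest(mutual_info_classif, k=20),
}

# X, y: the label-encoded features and target from the question (not shown here)
for name, selector in selectors.items():
    pipe = Pipeline([('select', selector), ('clf', LogisticRegression(max_iter=1000))])
    # scores = cross_val_score(pipe, X, y, cv=5)
    # print(name, scores.mean())
```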
|
How to do feature selection for classification problem? Which technique will work?
|
In model building there is a sort of iterative workflow that you can use:
- Select an appropriate model you want to build e.g. for classification maybe a XGB classifier or a logistic regression, etc.
This is important because the model by itself will determine a lot about how to wrangle your data. XGB only works with numerical features so you will have to convert factors/strings to a numerical encoding e.g. via One-Hot-Encoding.
- Build a full model using all features you can!
Some features will naturally fall out in the first step because the amount of feature extraction you have to do, to use them is too much to start. All other features, simply throw them into your model!
- Validate your model using classical validation methods (e.g. cross-validation, split-sample, etc.) and see how it performs!
If the performance is already great, perfect you are done! Otherwise you have a baseline against which to optimize the next steps.
- Play around with feature importance and removing features
Extract the feature importance from the full model and see if removing features with low importance improves your performance.
- Add features
At some point you will hit a wall in improving the model by simply removing irrelevant features (this might even be after removing 0 or 1 features). Now it is time to add by engineering additional features. Maybe now it is time to brush up your NLP skills and get some features out of the free-text variable you removed before.
- Rinse-and-repeat for other models to crown a winner in a model beauty contest
|
115251
|
1
|
118494
| null |
1
|
147
|
Is there utility in using different tokens for end-of-sentence, start-of-sentence, and padding for autoregressive sequence modeling (i.e. text generation)?
Or can I use the same token for all of them?
|
Using different tokens for padding, end-of-sentence, and start-of-sentence in autoregressive sequence modeling?
|
CC BY-SA 4.0
| null |
2022-10-15T15:34:03.813
|
2023-02-24T18:48:04.917
| null | null |
141192
|
[
"nlp",
"text-generation"
] |
Normally start-of-sequence is the same as end-of-sequence, that is, usually you use the end-of-sequence token to mark the start of the sequence.
The padding token is usually different, because that way you can easily compute the masks that mark which tokens should be ignored. End-of-sequence and start-of-sequence positions should not be ignored, so it's useful to have a padding token that is distinct from the start/end-of-sequence token.
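A tiny sketch of why a dedicated padding id makes the mask trivial to compute (the ids below are arbitrary):
```
# With a separate padding id, the attention mask is a simple comparison,
# while start/end-of-sequence positions remain visible to the model.
import torch

pad_id, eos_id = 0, 2
batch = torch.tensor([[5, 6, 7, eos_id, pad_id, pad_id],
                      [5, 9, eos_id, pad_id, pad_id, pad_id]])

attention_mask = (batch != pad_id).long()   # 1 = attend, 0 = ignore
print(attention_mask)
```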
|
Padding sequences for neural sequence models (RNNs)
|
Padded values are noise when they are regarded as actual values. For example, a padded temperature sequence `[20, 21, 23, 0, 0]` is the same as a noisy sequence where sensor has failed to report the correct temperature for the last two readings. Therefore, padded values better be cleaned (ignored) if possible.
Best practice is to use a `Mask` layer before other layers such as LSTM, RNN,.. to mask (ignore) the padded values. This way, it does not matter if we place them first or last.
Check out [this post](https://datascience.stackexchange.com/a/48814/67328) (my answer) that shows how to pad and mask the sequences with different length (with a sample code). You can experiment with the code to see the effect of removing the mask (treating padded values as actual values) on the deterioration of model performance.
This is the python snippet for quick reference:
```
from tensorflow.keras.layers import LSTM, Masking

model.add(Masking(mask_value=special_value, input_shape=(max_seq_len, dimension)))
model.add(LSTM(lstm_units))
```
where `special_value` is the padded value that should not have overlap with actual values.
|
115267
|
1
|
115269
| null |
1
|
518
|
I'm building an internal semantic search engine using BERT/SBERT + ElasticSearch 8 where answers are retrieved based on their cosine similarity with a query.
The documents to be searched are somewhat domain-specific; my rough estimate is that about 10% of the vocabulary is not present in the Wiki or Common Crawl datasets on which BERT models were trained. These are basically "made-up" words - niche product and brand names.
So my question is:
- Should I pre-train a BERT/SBERT model first on my specific corpus to learn the embeddings for these words using MLM?
or
- Can I skip pre-training and start fine-tuning a selected model for Q/A using SQUAD, synthetic Q/A based on my corpus and actual logged user queries?
My concern is that if I skip #1 then a model would not know the embeddings for some of the "made up" words, replace them with "unknown" token and this might lead to worse search performance.
|
Fine tuning BERT without pre-training it on domain specific corpus
|
CC BY-SA 4.0
| null |
2022-10-16T05:55:45.497
|
2022-10-16T08:02:19.770
| null | null |
98201
|
[
"nlp",
"bert",
"search"
] |
Is your corpus big enough? (= several GBs)
If yes, you could train a model from scratch and have good results.
[https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6](https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6)
If not, fine-tuning should be better. You can still try to train from scratch, but you might sometimes get poor results. Perhaps you can add some training data from similar sources to reach an optimal result.
[https://www.tensorflow.org/tfmodels/nlp/fine_tune_bert](https://www.tensorflow.org/tfmodels/nlp/fine_tune_bert)
|
Bert Fine Tuning with additional features
|
To add additional features using BERT, one way is to use the existing WordPiece vocab and run pre-training for more steps on the additional data, and it should learn the compositionality. The WordPiece vocabulary can be basically used to create additional features that didn't already exist before.
Another approach to include additional features would be to add more vocab while training. Following approaches are possible:
- Just replace the "[unusedX]" tokens with your vocabulary. Since these were not used they are effectively randomly initialized.
- Append new vocabulary words to the end of the vocab file, and update the vocab_size parameter in bert_config.json. Later, write a script which generates a new checkpoint that is identical to the pre-trained checkpoint, but with a bigger vocab where the new embeddings are randomly initialized (for initialized we used tf.truncated_normal_initializer(stddev=0.02)). This will likely require mucking around with some tf.concat() and tf.assign() calls.
Please note that I haven't tried any of these approaches myself.
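For what it's worth, in the Hugging Face transformers ecosystem the second approach roughly corresponds to the following (the added tokens are made-up examples; the new embeddings are randomly initialised, as described above):
```
# Extend BERT's vocabulary with new tokens and resize the embedding matrix.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

num_added = tokenizer.add_tokens(["solarpanelx", "brandnamey"])  # hypothetical new words
model.resize_token_embeddings(len(tokenizer))
print(f"added {num_added} tokens, new vocab size: {len(tokenizer)}")
```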
|
115276
|
1
|
115277
| null |
0
|
268
|
I have a problem. I have a CNN model which is used for an NLP problem. This is written in Python. I have questions about this, which I can't find an answer to.
- Why is ReLU used inside the Conv1D layers and not softmax?
- Why is ReLU used again as the activation function in the first Dense layer, and why is softmax used afterwards?
```
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dense, Embedding, GlobalMaxPooling1D, MaxPooling1D

model1 = Sequential()
model1.add(
Embedding(vocab_size
,embed_size
,weights = [embedding_matrix] #Supplied embedding matrix created from glove
,input_length = maxlen
,trainable=False)
)
model1.add(Conv1D(256, 7, activation="relu"))
model1.add(MaxPooling1D())
model1.add(Conv1D(128, 5, activation="relu"))
model1.add(MaxPooling1D())
model1.add(GlobalMaxPooling1D())
model1.add(Dense(128, activation="relu"))
model1.add(Dense(number, activation='softmax'))
print(model1.summary())
```
|
CNN model why is ReLu used in Conv1D layer and in the first Dense Layer?
|
CC BY-SA 4.0
| null |
2022-10-16T13:31:12.100
|
2022-10-16T14:09:45.343
| null | null |
130860
|
[
"nlp",
"convolutional-neural-network",
"convolution",
"activation-function"
] |
The softmax activation is used as the activation function of the last layer in multiclass classification problems because it gives a categorical probability distribution over N discrete options.
ReLU is used as a middle-layer (either convolution or dense) activation function because it is a non-linearity that works well and is robust to the vanishing gradient problem (as opposed to tanh or sigmoid).
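A quick numerical illustration of the two activations (plain NumPy, just to show the behaviour):
```
import numpy as np

def relu(x):
    return np.maximum(0, x)

def softmax(x):
    e = np.exp(x - x.max())            # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, -1.0, 0.5])
print(relu(logits))                    # middle layers: negatives clipped to 0
print(softmax(logits), softmax(logits).sum())  # output layer: probabilities summing to 1
```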
|
How are 1x1 convolutions the same as a fully connected layer?
|
## Your Example
In your example we have 3 input and 2 output units. To apply convolutions, think of those units having shape: `[1,1,3]` and `[1,1,2]`, respectively. In CNN terms, we have `3` input and `2` output feature maps, each having spatial dimensions `1 x 1`.
Applying an `n x n` convolution to a layer with `k` feature maps requires you to have a kernel of shape `[n,n,k]`. Hence the kernel of your `1x1` convolutions has shape `[1, 1, 3]`. You need `2` of those kernels (or filters) to produce the `2` output feature maps. Please note: $1 \times 1$ convolutions really are $1 \times 1 \times \text{number of channels of the input}$ convolutions. The last dimension is only rarely mentioned.
Indeed if you choose as kernels and bias:
$$
\begin{align}
w_1 &=
\begin{pmatrix}
0 & 1 & 1\\
\end{pmatrix} \in \mathbb{R}^{3}\\
w_2 &=
\begin{pmatrix}
2 & 3 & 5\\
\end{pmatrix} \in \mathbb{R}^{3}\\
b &= \begin{pmatrix}8\\ 13\end{pmatrix} \in \mathbb{R}^2
\end{align}
$$
The conv-layer will then compute $f(x) = ReLU\left(\begin{pmatrix}w_1 \cdot x\\ w_2 \cdot x\end{pmatrix} + \begin{pmatrix}b_1\\ b_2\end{pmatrix}\right)$ with $x \in \mathbb{R}^3$.
## Transformation in real Code
For a real-life example, also have a look at my [vgg-fcn](https://github.com/MarvinTeichmann/tensorflow-fcn/blob/d04bc268ac6e84f03afc4332d7f54ecff22d1732/fcn32_vgg.py) implementation. The code provided in this file takes the VGG weights, but transforms every fully-connected layer into a convolutional layer. The resulting network yields the same output as `vgg` when applied to an input image of shape `[224,224,3]` (when applying both networks without padding).
The transformed convolutional layers are introduced in the function `_fc_layer` (line 145). They have kernel size `7x7` for FC6 (which is maximal, as `pool5` of VGG outputs a feature map of shape `[7,7,512]`). Layers `FC7` and `FC8` are implemented as `1x1` convolutions.
## "Full Connection Table"
He might refer to a filter/kernel which has the same dimension as the input feature map. In both cases (Code and your Example) the spatial dimensions are maximal in the sense, that the spatial dimension of the filter is the same as the spatial dimension as the input.
|
115280
|
1
|
115281
| null |
0
|
888
|
I am trying to use Linear Regression, to predict salary in USD. I have the following data:
[](https://i.stack.imgur.com/TTzcv.png)
Data:
- 607 records
- Numerical columns: year, salary, salary in USD
- Categorical columns: experience, type, residence, currency, remote work, company location, and company size.
- Target: salary in USD
Preprocessing dataset:
```
from sklearn.preprocessing import OneHotEncoder, StandardScaler
from sklearn.compose import ColumnTransformer
# Columns to drop:
drop_cols = ['Currency', 'Company location', 'Salary', 'Title']
# Attributes of interest
num_attributes = ['Year']
one_hot_attributes = ['Experience', 'Type', 'Remote work', 'Residence', 'Company size']
# Drop columns:
data.drop(drop_cols, axis=1, inplace=True)
# Setup transformer for column:
preprocessor = ColumnTransformer([
('nums', StandardScaler(), num_attributes),
('one_hot', OneHotEncoder(drop='first', sparse=False), one_hot_attributes)],
remainder='passthrough')
```
Pipe:
```
from sklearn.pipeline import Pipeline
pipe = Pipeline(steps =[
('preprocessor', preprocessor),
('model', LinearRegression()),
])
pipe.fit(X_train, y_train)
```
Perform prediction:
```
prediction = pipe.predict(X_test)
pd.DataFrame({'original test set':y_test, 'predictions': prediction})
```
Error:
```
ValueError: Found unknown categories ['IR', 'HN', 'MT', 'PH', 'NZ', 'CZ', 'MD'] in column 3 during transform
```
|
ValueError: Found unknown categories ['IR', 'HN', 'MT', 'PH', 'NZ', 'CZ', 'MD'] in column 3 during transform
|
CC BY-SA 4.0
| null |
2022-10-16T15:40:32.750
|
2022-10-16T16:26:59.620
| null | null |
141487
|
[
"machine-learning",
"scikit-learn",
"linear-regression"
] |
This error is thrown by the `OneHotEncoder` class because your test dataset contains values for a column (likely the Residence column) that were not present in your training dataset. As specified in [the documentation](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.OneHotEncoder.html), the default for the `handle_unknown` argument is to throw an error when new values are encountered when `transform` is called. Setting `handle_unknown='ignore'` should stop the error from being thrown.
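Applied to the preprocessor from the question, this would look roughly as follows (note that combining `handle_unknown='ignore'` with `drop='first'` requires a fairly recent scikit-learn version, so `drop` is omitted in this sketch):
```
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

num_attributes = ['Year']
one_hot_attributes = ['Experience', 'Type', 'Remote work', 'Residence', 'Company size']

# handle_unknown='ignore' encodes unseen categories as all zeros instead of raising.
preprocessor = ColumnTransformer([
    ('nums', StandardScaler(), num_attributes),
    ('one_hot', OneHotEncoder(handle_unknown='ignore', sparse=False), one_hot_attributes)],
    remainder='passthrough')
```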
|
Error encoding categorical features using sklearn pipelines
|
In this line:
```
("impute_stage", Imputer(missing_values=np.nan, strategy="median"))
```
Because your input type is string, you shouldn't fill the null value to `median` (we cannot average string value).
From the [document](https://scikit-learn.org/stable/modules/generated/sklearn.impute.SimpleImputer.html), you can fill null value with a string constant like:
```
Imputer(missing_values=None, strategy="constant", fill_value="NULL")
```
to represent null value in your string field.
|
115291
|
1
|
115292
| null |
0
|
290
|
I'm trying to make a model for a multi-output regression task where $y=(y_1, y_2,..., y_n)$ is a vector rather than a single scalar. I am using Scikit-learn's `MultiOutputRegressor` method to train and make a model for each $y_i \in y$ separately. My code looks like this:
```
import lightgbm
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor

base_learner = lightgbm.LGBMRegressor(random_state=seed)
estimator = MultiOutputRegressor(base_learner)
grid = {
# hyperpramters to check
# ...
# 'random_state': [500],
'n_estimators': [100, 500],
'num_leaves': [15, 31, 63],
'max_depth': [8, 10],
# 'min_data_in_leaf': [15, 25],
'feature_fraction': [0.3, 0.4],
'bagging_fraction': [0.4, 0.5],
# 'bagging_freq': [100, 200, 400],
"n_jobs": [-1],
"verbose": [-1]
}
gs = GridSearchCV(base_learner, param_grid=grid, scoring=my_custom_score, cv=10)
gs.fit(X_train, y_train)
```
As you can see, the base-learner for each $y_i$ is of type `lightgbm.LGBMRegressor`. (By base-learner, I mean each individual leaner used to learn and predict each $y_i$.) I want to do a grid search to pick the best hyperparameters for each base-learner. But I don't know how to pass the list of hyperparameters in the `grid` variable to the base learners that are wrapped in `MultiOutputRegressor`. When I run the shown code above, I get the following error:
[](https://i.stack.imgur.com/ERfMW.png)
Do you have any suggestions about how to pass hyperparameters to the individual base-learners when one uses the `MultiOutputRegressor` API? (Based on what I see in the error, `MultiOutputRegressor` itself only takes two parameters, which are mainly for specifying the learner, not for passing hyperparameters to the underlying learners.)
|
Grid-search for a multi-output regression task using Scikit-learn's API
|
CC BY-SA 4.0
| null |
2022-10-17T06:34:12.233
|
2022-10-17T07:13:56.647
| null | null |
6656
|
[
"scikit-learn",
"regression",
"grid-search",
"lightgbm",
"multi-output"
] |
I assume you meant `GridSearchCV(estimator ...`, otherwise there's no wrapping here.
You'll need to supply a prefix:
```
'estimator__n_estimators': [100, 500],
'estimator__bagging_fraction': [0.4, 0.5],
```
and so on.
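Putting it together, a sketch of the wrapped estimator with prefixed hyperparameters (only a subset of the original grid is shown):
```
from lightgbm import LGBMRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.multioutput import MultiOutputRegressor

estimator = MultiOutputRegressor(LGBMRegressor(random_state=42))
grid = {
    'estimator__n_estimators': [100, 500],
    'estimator__num_leaves': [15, 31, 63],
    'estimator__max_depth': [8, 10],
}
gs = GridSearchCV(estimator, param_grid=grid, cv=10)
# gs.fit(X_train, y_train)
```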
|
Scikit-learn pipeline with scaling, dimensionality reduction, average prediction of multiple regression models, and grid search cross validation
|
You might looking for [sklearn.ensemble.VotingRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingRegressor.html) which takes the mean of two regression models.
Here is an example to get you started:
```
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor, VotingRegressor
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
# Make fake data
X, y = make_regression(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y,random_state=42)
pipe = Pipeline([('scl', StandardScaler()),
('pca', PCA()),
('vr', VotingRegressor([('gbr', GradientBoostingRegressor()), ('rfr', RandomForestRegressor())]))
])
search_space = [{'vr__gbr__learning_rate': [.07, .1, .15]}]
gs_cv = GridSearchCV(estimator=pipe,
param_grid=search_space,
n_jobs=-1)
gs_cv.fit(X_train, y_train)
gs_cv.predict(X_test)
```
|
115351
|
1
|
115358
| null |
0
|
35
|
I have a problem.
There is a dataset A, which deals with a classification problem. And for this dataset, several different baseline algorithms have been defined and computed.
In addition, three models were used: Logistic Regression, XGBoost and RandomForest.
Now my question is this: why use different algorithms (Logistic Regression, XGBoost and RandomForest) and investigate which one is the best? Is it because the algorithms have different strengths and perform better or worse depending on the dataset?
```
Algorithm Accuracy Precision Recall F1-Score
Baseline 1 0,20 0,20 0,20 0,20
Baseline 2 0,20 0,20 0,20 0,20
Logistic Regression 0,53 0,52 0,28 0,36
RandomForest 0,65 0,64 0,63 0,63
XGBoost 0,50 0,61 0,55 0,58
```
For example, RandomForest gave the best result, and then its hyperparameters are adjusted.
|
Why compare multiple machine learning algorithms and then decide which algorithm to use for fine tuning?
|
CC BY-SA 4.0
| null |
2022-10-18T18:16:49.560
|
2022-10-18T21:20:45.270
| null | null |
130860
|
[
"machine-learning",
"random-forest",
"xgboost",
"algorithms",
"metric"
] |
In machine learning there is something called the [no free lunch Theorem](https://en.m.wikipedia.org/wiki/No_free_lunch_theorem#:%7E:text=Wolpert%20had%20previously%20derived%20no,no%20easy%20shortcuts%20to%20success.), which basically states that there isn't one solution/algorithm that will perform best on every problem. Different algorithms will perform differently on different data. Therefore, you try different algorithms to pick the best, although there are algorithms that are generally more powerful for certain types of problems (...).
Hint: You usually pick the best algorithms without hyperparameter tuning in the first round and then compare the best ones again after tuning their hyperparameters, since these can make a decisive performance difference - especially for more "complex" algorithms like XGBoost in contrast to Random Forest, although both algorithms are decision-tree based.
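A rough sketch of such a first-round comparison with default hyperparameters (a synthetic dataset stands in for dataset A here):
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    'LogisticRegression': LogisticRegression(max_iter=1000),
    'RandomForest': RandomForestClassifier(random_state=0),
    'XGBoost': XGBClassifier(random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring='f1_macro')
    print(name, round(scores.mean(), 3))
```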
|
How can I choose the best machine learning algorithms from all kinds of algorithms?
|
There is a theoretical result called the ["no free lunch theorem"](https://en.wikipedia.org/wiki/No_free_lunch_theorem) which proves that there is no "best ML algorithm" in general.
- It's important to understand how an algorithm works in order to have a good intuition about whether it's suitable for a case. Without this one can only attempt different methods randomly by trial and error, it takes more time and more effort.
- From what you describe it looks like your learning focused on how to use available tools (i.e. run the algorithms). If you want to become really good at data science you also need a good theoretical background.
- Data Science is very broad, nobody knows everything because it's impossible. My advice is to focus on understanding one thing very well before moving on to the next topic. It's usually better to be an expert in a specific area than to have a shallow knowledge of a bit of everything.
>
there are deep learning algorithms that are stronger than machine learning
Technically deep learning methods are also machine learning.
>
Or should I abandon learning machine learning algorithms and start learning deep learning?
In my opinion it's better to have a really good understanding of traditional ML methods before moving to DL.
|
115353
|
1
|
115356
| null |
0
|
434
|
I ran an experiment to compare max-pooled word tokens vs the CLS token for sentence classification, and CLS clearly wins. I am trying to understand how BERT generates the CLS token embedding, given that it is better than max or average pooling.
|
How does BERT produce CLS token? Internally does it do max-pooling or avarage pooling?
|
CC BY-SA 4.0
| null |
2022-10-18T19:00:55.903
|
2022-10-18T20:21:19.840
| null | null |
141710
|
[
"nlp",
"transformer",
"bert"
] |
The output at the first position (which is the position the special token `[CLS]` is at the input sequence and is what you call the "CLS token") is neither computed with max-pooling or average pooling, but it is computed with self-attention, like the other output positions. The difference with the other output positions is that the first position is trained with the next sentence prediction (NSP) task. This means that the representation learned there is meant to predict whether the second part of the input (the subsequence after the `[SEP]` special token) was following the first part of the input in the original document.
You can check the details at section 3.1 of the [original BERT paper](https://arxiv.org/abs/1810.04805), within the "Task #2: Next Sentence Prediction (NSP)" subtitle. The following figure from the paper illustrates how the output at the first position is used for the NSP task:
[](https://i.stack.imgur.com/finfM.png)
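For completeness, a small sketch showing that the "CLS embedding" is just the first position of the last hidden state (no pooling), optionally passed through BERT's tanh-activated pooler layer:
```
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("A sentence to classify.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0]   # (1, 768): output at the [CLS] position
pooled = outputs.pooler_output                    # (1, 768): dense + tanh applied to it
print(cls_embedding.shape, pooled.shape)
```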
|
From where does BERT get the tokens it predicts?
|
There is a token vocabulary, that is, the set of all possible tokens that can be handled by BERT. You can find the vocabulary used by one of the variants of BERT (BERT-base-uncased) [here](https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt).
You can see that it contains one token per line, with a total of 30522 tokens. The softmax is computed over them.
The token granularity in the BERT vocabulary is subwords. This means that each token does not represent a complete word, but just a piece of word. Before feeding text as input to BERT, it is needed to segment it into subwords according to the subword vocabulary mentioned before. Having a subword vocabulary instead of a word-level vocabulary is what makes it possible for BERT (and any other text generation subword model) to only need a "small" vocabulary to be able to represent any string (within the character set seen in the training data).
|
115359
|
1
|
115379
| null |
0
|
316
|
I am very confused about how decision trees select features and the threshold within each feature to do the split. I totally understand the different splitting metrics (Gini index and so on) and how they work. But my problem is how sklearn chooses the features and thresholds on which to evaluate these metrics.
The estimator `sklearn.tree.DecisionTreeClassifier` has a parameter `splitter`. Let me admit that the resources available online are not that good at explaining this parameter, and they conflict with each other. I still don't understand what will happen if I set `splitter="best"`: does this mean that the algorithm will consider all the features with all of their values to get the best threshold, and in that case the `max_features` attribute will have no effect? And if I set `splitter="random"`, will the algorithm randomly select a number of features equal to `max_features` and then, for each of them, search over certain random values to find the threshold for the split?
|
Splitter in decision trees in sklearn implementation
|
CC BY-SA 4.0
| null |
2022-10-18T21:48:16.957
|
2022-10-19T13:46:16.807
|
2022-10-19T13:46:16.807
|
55122
|
141085
|
[
"scikit-learn",
"decision-trees"
] |
>
if I set splitter="best", does this means that the algorithm will consider all the features with all of its values to get the best threshold value ?? and in this case max_features attribute will not have any effect ?
Not quite: `max_features` still has an effect here. `max_features` features are selected at random, but for each of those, the best among all possible thresholds is selected.
>
And if I set splitter="random" the algorithm will randomly select certain numbers of features = max_features form the whole features and search each for certain random values of each of these features to find the threshold to split ?
Right, `max_features` has the same effect regardless of the splitter, but when `splitter="random"`, instead of testing every possible threshold for the split on a feature, a single random threshold, drawn uniformly between the feature's minimum and maximum, is tested. [Source code](https://github.com/scikit-learn/scikit-learn/blob/3e6a39a73c2ca39e073e4b58117f59e92b3b2313/sklearn/tree/_splitter.pyx#L668-L671)
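A tiny illustration of the two splitters with the same `max_features` setting (the dataset and parameters are arbitrary):
```
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

best = DecisionTreeClassifier(splitter="best", max_features=2, random_state=0).fit(X, y)
rand = DecisionTreeClassifier(splitter="random", max_features=2, random_state=0).fit(X, y)
print(best.get_depth(), rand.get_depth())  # random thresholds typically yield deeper trees
```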
|
Decision Trees split in scikit
|
I got it thanks to the scikit team, I put the answer here for the people to come. The split used in scikit uses weights in calculating the Gini coefficient, just add the following lines before returning:
```
gk *= len(kids)/len(df)
ga *= len(adults)/len(df)
```
|
115368
|
1
|
115372
| null |
2
|
182
|
Background
I am teaching myself Pytorch, as a Mechanical engineering technology (MET) faculty. My end goal is to replace many data-driven heat transfer and Fluid dynamics models with Neural network approximations. This is a wholly academic exercise to expose my MET students to Neural networks via a familiar environment. I have some experience creating Neural Networks using the Wolfram Language.
The problem statement
>
Approximate $z_i = x_i^2 + y_i^2$ with a multi-layer feedforward percepteron where $x_i, y_i$ are randomly generated floats.
The issue I am faced with
The NN I created using Pytorch does not converge and I cannot tell if this is because of:
- Improper layer definitions. I have experimented with three layers (linear - ReLU or Tanh - linear) and the current one. I have experimented with different numbers of outputs from the first linear layer.
- Not sufficient epochs.
- Improper learning rate.
- The code itself is foul.
## I would greatly appreciate help or advice on this matter.
I have included my code. I would be happy to provide any other information.
Code
Setting up the data, NN layers, and the optimizer
```
import torch
import torch.nn as nn
import numpy as np
from sklearn.model_selection import train_test_split
x = np.random.random(1000);
y = np.random.random(1000);
z = x**2 + y**2
input_data = torch.Tensor(np.transpose([x ,y]))
output_data = torch.Tensor(z)
input_training, input_validation, output_training, output_validation = train_test_split(input_data, output_data, random_state=42, test_size=0.15, shuffle=True)
class NonLinearRegression(torch.nn.Module):
def __init__(self):
super(NonLinearRegression, self).__init__()
self.linear_1 = nn.Linear(in_features=2, out_features=10)
self.act_1 = nn.ReLU()
self.linear_2 = nn.Linear(in_features=10,out_features=5)
self.act_2 = nn.ReLU()
self.linear_3 = nn.Linear(in_features=5,out_features=1)
def forward(self, y):
y = self.linear_1(y)
y = self.act_1(y)
y = self.linear_2(y)
y = self.act_2(y)
y = self.linear_3(y)
y_pred = y
return y_pred
model_nonlinear = NonLinearRegression()
optimizer = torch.optim.SGD(model_nonlinear.parameters(), lr=1e-6)
criterion = nn.MSELoss(reduction='sum')
```
The NN training loop
```
epoch_max = 20000
for epoch in range(epoch_max):
total_loss = 0;
model_nonlinear.train()
y_pred = model_nonlinear(input_training)
loss = criterion(y_pred, output_training)
loss.backward()
total_loss += float(loss)
if (total_loss < 0.001):
print("Num steps: " + str(epoch))
break
optimizer.step()
```
Validation
```
input_validation, model_nonlinear(input_validation) #the math does not check out.
```
|
Pytorch Neural Network that tries to approximate $z_i = x_i^2 + y_i^2$ not converging to solution
|
CC BY-SA 4.0
| null |
2022-10-19T10:29:14.647
|
2022-10-19T12:15:03.163
|
2022-10-19T12:15:03.163
|
75157
|
31728
|
[
"python",
"neural-network",
"pytorch",
"convergence",
"universal-approximation-theorem"
] |
I think the main thing that is wrong is that your training loop is currently not resetting the gradients between epochs (using `optimizer.zero_grad()`). This causes gradients to accumulate, which stops your network from learning properly. Making this single change already massively improves the learning of your network, achieving a loss of around 6.5 after 20000 epochs. Some additional changes I've made that improve/speed up the learning even more are the following:
- Use the Adam optimizer instead of the SGD optimizer, the default learning rate of 0.001 seems to work fine.
- Use mini batches instead of the full training dataset.
- Increasing the number of parameters in your model.
These changes result in the following code:
```
# model definition
class NonLinearRegression(torch.nn.Module):
def __init__(self):
super(NonLinearRegression, self).__init__()
self.linear_1 = nn.Linear(in_features=2, out_features=25)
self.act_1 = nn.ReLU()
self.linear_2 = nn.Linear(in_features=25,out_features=10)
self.act_2 = nn.ReLU()
self.linear_3 = nn.Linear(in_features=10,out_features=1)
def forward(self, y):
y = self.linear_1(y)
y = self.act_1(y)
y = self.linear_2(y)
y = self.act_2(y)
y = self.linear_3(y)
y_pred = y
return y_pred
model_nonlinear = NonLinearRegression()
# changed optimizer
optimizer = torch.optim.Adam(model_nonlinear.parameters())
criterion = nn.MSELoss(reduction='sum')
# training loop
epoch_max = 2500
for epoch in range(epoch_max):
# mini-batch
ix = torch.randint(0, input_training.shape[0], size=(64,))
total_loss = 0
y_pred = model_nonlinear(input_training[ix])
loss = criterion(y_pred.squeeze(), output_training[ix])
# set gradients to zero
optimizer.zero_grad()
loss.backward()
total_loss += float(loss)
if (total_loss < 0.001):
print("Num steps: " + str(epoch))
break
optimizer.step()
```
With the following loss curve and predictions:
[](https://i.stack.imgur.com/tQ2G1.png)
[](https://i.stack.imgur.com/I5pxc.png)
|
Approximating multi-variable function with neural network in python
|
One problem that I see is that notice that sine is a function that takes value from $-1$ to $1$ but sigmoid function takes value from $0$ to $1$.
Hence you are being penalized whenever the sine value takes a negative value.
You might like to try to change your last layer to a tanh layer or alternatively, rather than predicting sine directly, predict $\frac{\sin(2x_1+x_2)+1}{2}$ first.
I managed to achieve an MSE of $0.228686$ using the tanh modification. Of course, you can still try to tune other parameters and try other stuff to improve the model.
```
import numpy as np
import matplotlib.pyplot as plt
import math
class Layer:
"""
Represents a layer (hidden or output) in our neural network.
"""
def __init__(self, n_input, n_neurons, activation=None, weights=None, bias=None):
"""
:param int n_input: The input size (coming from the input layer or a previous hidden layer)
:param int n_neurons: The number of neurons in this layer.
:param str activation: The activation function to use (if any).
:param weights: The layer's weights.
:param bias: The layer's bias.
"""
self.weights = weights if weights is not None else np.random.rand(n_input, n_neurons)
self.activation = activation
self.bias = bias if bias is not None else np.random.rand(n_neurons)
self.last_activation = None
self.error = None
self.delta = None
def activate(self, x):
"""
Calculates the dot product of this layer.
:param x: The input.
:return: The result.
"""
r = np.dot(x, self.weights) + self.bias
self.last_activation = self._apply_activation(r)
return self.last_activation
def _apply_activation(self, r):
"""
Applies the chosen activation function (if any).
:param r: The normal value.
:return: The "activated" value.
"""
# In case no activation function was chosen
if self.activation is None:
return r
# tanh
if self.activation == 'tanh':
return np.tanh(r)
# sigmoid
if self.activation == 'sigmoid':
return 1 / (1 + np.exp(-r))
return r
def apply_activation_derivative(self, r):
"""
Applies the derivative of the activation function (if any).
:param r: The normal value.
:return: The "derived" value.
"""
# We use 'r' directly here because its already activated, the only values that
# are used in this function are the last activations that were saved.
if self.activation is None:
return r
if self.activation == 'tanh':
return 1 - r ** 2
if self.activation == 'sigmoid':
return r * (1 - r)
return r
class NeuralNetwork:
"""
Represents a neural network.
"""
def __init__(self):
self._layers = []
def add_layer(self, layer):
"""
Adds a layer to the neural network.
:param Layer layer: The layer to add.
"""
self._layers.append(layer)
def feed_forward(self, X):
"""
Feed forward the input through the layers.
:param X: The input values.
:return: The result.
"""
for layer in self._layers:
X = layer.activate(X)
return X
def predict(self, X):
"""
Predicts a class (or classes).
:param X: The input values.
:return: The predictions.
"""
ff = self.feed_forward(X)
# One row
if ff.ndim == 1:
return np.argmax(ff)
# Multiple rows
return np.argmax(ff, axis=1)
def backpropagation(self, X, y, learning_rate):
"""
Performs the backward propagation algorithm and updates the layers weights.
:param X: The input values.
:param y: The target values.
:param float learning_rate: The learning rate (between 0 and 1).
"""
# Feed forward for the output
output = self.feed_forward(X)
# Loop over the layers backward
for i in reversed(range(len(self._layers))):
layer = self._layers[i]
# If this is the output layer
if layer == self._layers[-1]:
layer.error = y - output
# The output = layer.last_activation in this case
layer.delta = layer.error * layer.apply_activation_derivative(output)
else:
next_layer = self._layers[i + 1]
layer.error = np.dot(next_layer.weights, next_layer.delta)
layer.delta = layer.error * layer.apply_activation_derivative(layer.last_activation)
# Update the weights
for i in range(len(self._layers)):
layer = self._layers[i]
# The input is either the previous layers output or X itself (for the first hidden layer)
input_to_use = np.atleast_2d(X if i == 0 else self._layers[i - 1].last_activation)
layer.weights += layer.delta * input_to_use.T * learning_rate
def train(self, X, y, learning_rate, max_epochs):
"""
Trains the neural network using backpropagation.
:param X: The input values.
:param y: The target values.
:param float learning_rate: The learning rate (between 0 and 1).
:param int max_epochs: The maximum number of epochs (cycles).
:return: The list of calculated MSE errors.
"""
mses = []
for i in range(max_epochs):
for j in range(len(X)):
self.backpropagation(X[j], y[j], learning_rate)
if i % 10 == 0:
mse = np.mean(np.square(y - nn.feed_forward(X)))
mses.append(mse)
print('Epoch: #%s, MSE: %f' % (i, float(mse)))
return mses
@staticmethod
def accuracy(y_pred, y_true):
"""
Calculates the accuracy between the predicted labels and true labels.
:param y_pred: The predicted labels.
:param y_true: The true labels.
:return: The calculated accuracy.
"""
return (y_pred == y_true).mean()
def my_func(x1, x2):
return [math.sin(2*x1+x2)]
n = 40
np.random.seed(4)
x1_low, x1_up = -3, 3
x2_low, x2_up = -1, 3
x1s = np.random.uniform(x1_low, x1_up, size=n)
x2s = np.random.uniform(x2_low, x2_up, size=n)
Xs = []
for _x1 in x1s:
for _x2 in x2s:
Xs.append([_x1, _x2])
Zs = [my_func(_x1, _x2) for _x1, _x2 in Xs]
# Define test data
x1_pred = np.random.uniform(x1_low, x1_up, size=n)
x2_pred = np.random.uniform(x2_low, x2_up, size=n)
Xs_pred = []
for _x1, _x2 in zip(x1_pred, x2_pred):
Xs_pred.append([_x1, _x2])
actual_ys = [my_func(_x1, _x2) for _x1, _x2 in Xs_pred]
# Train and test neural network
alpha = 0.001
nn = NeuralNetwork()
nn.add_layer(Layer(2, 5, 'tanh'))
nn.add_layer(Layer(5, 1, 'tanh'))
errors = nn.train(Xs, Zs, alpha, 300)
print('Accuracy: %.2f%%' % (nn.accuracy(nn.predict(Xs_pred), actual_ys) * 100))
```
|
115374
|
1
|
115378
| null |
1
|
54
|
I am building a classification model based on some machine performance data. Unfortunately for me, it seems to over-fit no matter what I change. The dataset is quite large so I'll share the final feature importance & cross validation scores after feature selection.
```
#preparing the data
X = df.drop('target', axis='columns')
y = df['target']
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, random_state=10, stratify=y)
```
[](https://i.stack.imgur.com/JmxoA.png)
I then cross validate as follows;
```
logreg=LogisticRegression()
kf=KFold(n_splits=25)
score=cross_val_score(logreg,X,y,cv=kf)
print("Cross Validation Scores: {}".format(score))
print("Average Cross Validation score : {}".format(score.mean()))
```
Here are the results that I get:
```
> Cross Validation Scores [1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 1. 0.94814175 1. 1. 1. 1. 1. 1.]
> Average Cross Validation score : 0.9979256698357821
```
When I run RandomForests, the accuracy is 100%. What could be the problem?
PS. The classes were imbalanced so I "randomly" under-sampled the majority class.
UPDATE:
I overcame this challenge by eliminating some features from the final dataset. I retrained my models using a few features at a time and was able to find out the ones that caused the "over-fitting". In short, better feature selection did the trick.
|
Why is my model overfitting?
|
CC BY-SA 4.0
| null |
2022-10-19T12:22:36.447
|
2022-10-29T19:12:23.807
|
2022-10-29T19:12:23.807
|
141739
|
141739
|
[
"classification",
"class-imbalance",
"overfitting"
] |
This isn't overfitting. You're reporting cross-validation scores as very high (and are not reporting training set scores, which are presumably also very high); your model is just performing very well (on unseen data).
That said, you should be asking yourself if something is wrong. There are two common culprits that come to mind:
- One of your features is very informative, but wouldn't be available at prediction time ("future information", or in the extreme case, you accidentally left the target variable in the independent variable dataframe)
- Your train-test splits don't respect some grouping (in the extreme case, rows of the frame are repeated and show up in both training and test folds); see the sketch below for one way to guard against this.
Otherwise, it's entirely possible your problem is just easily solved by your model.
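As a sketch of the grouping point, here is how one might cross-validate while keeping all rows of the same machine in the same fold (the grouping column is an assumption about this dataset):
```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = rng.integers(0, 2, size=200)
groups = rng.integers(0, 20, size=200)        # e.g. 20 distinct machines

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         groups=groups, cv=GroupKFold(n_splits=5))
print(scores.mean())
```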
See also
[Why does my model produce too good to be true output?](https://datascience.stackexchange.com/q/84567/55122)
[Quote on too good to be true model performance?](https://stats.stackexchange.com/q/562808/232706)
|
Is my model underfitting?
|
I don't think you need to worry, instead I would ask myself if the accuracy I'm getting is good enough for the task that the NN is supposed to do.
Having higher training loss than validation loss can mean different things:
- Your validation data is easier to assess than training data. If the train/validation split is done randomly and there is enough data in both subsets, this shouldn't be the case.
- You're using dropout in training but not in validation. This is the default of some deep learning libraries, and it makes sense. If this is the case and you want to see less of a gap, try to reduce the amount of dropout rate and you'll see less of a gap.
To sum up, I don't think it's an issue but you might be able to improve your validation performance by reducing the amount of regularization or increasing the complexity of the NN. However, this is just a hypothesis and the only way to know is to re-train and check the new performance.
---
### Edit
By default, [keras doesn't do dropout in prediction](https://stackoverflow.com/questions/47787011/how-to-disable-dropout-while-prediction-in-keras) so this is likely your case since you have high dropout rates.
|
115395
|
1
|
115398
| null |
0
|
25
|
I am attempting to formulate my own activation function. However, I'm new to neural networks and not yet ready to test it, but I would like to know whether I have already landed on a better activation function than my benchmark before pushing through with a full study.
These are their graphs. Mine is in green and the benchmark is in purple.[](https://i.stack.imgur.com/12mCd.jpg)
Is it possible to tell which one is better based on these graphs?
Thanks!
|
Is it possible to tell if one activation function is better than the other one based on their graphs?
|
CC BY-SA 4.0
| null |
2022-10-19T20:25:11.100
|
2022-10-19T20:49:27.183
| null | null |
141753
|
[
"neural-network",
"activation-function"
] |
tl;dr No.
The choice of an activation function is highly dependent on the task at hand, so there isn't necessarily a "better" in the general sense, let alone a signal you could get from a chart.
The chart also doesn't tell you if the activation or its derivative is easy/inexpensive to compute relative to the other, which can be a consideration.
|
Difference of Activation Functions in Neural Networks in general
|
A similar question was asked on CV: [Comprehensive list of activation functions in neural networks with pros/cons](https://stats.stackexchange.com/q/115258/12359).
I copy below one of the answers:
>
One such a list, though not much exhaustive:
http://cs231n.github.io/neural-networks-1/
Commonly used activation functions
Every activation function (or non-linearity) takes a single number
and performs a certain fixed mathematical operation on it. There are
several activation functions you may encounter in practice:
[](https://i.stack.imgur.com/UOzlm.png)[](https://i.stack.imgur.com/Mg9s9.png)
>
Left: Sigmoid non-linearity
squashes real numbers to range between [0,1] Right: The tanh
non-linearity squashes real numbers to range between [-1,1].
Sigmoid. The sigmoid non-linearity has the mathematical form $\sigma(x) = 1 / (1 + e^{-x})$ and is shown in the image above on
the left. As alluded to in the previous section, it takes a
real-valued number and "squashes" it into range between 0 and 1. In
particular, large negative numbers become 0 and large positive numbers
become 1. The sigmoid function has seen frequent use historically
since it has a nice interpretation as the firing rate of a neuron:
from not firing at all (0) to fully-saturated firing at an assumed
maximum frequency (1). In practice, the sigmoid non-linearity has
recently fallen out of favor and it is rarely ever used. It has two
major drawbacks:
Sigmoids saturate and kill gradients. A very undesirable property of the sigmoid neuron is that when the neuron's activation
saturates at either tail of 0 or 1, the gradient at these regions is
almost zero. Recall that during backpropagation, this (local) gradient
will be multiplied to the gradient of this gate's output for the whole
objective. Therefore, if the local gradient is very small, it will
effectively "kill" the gradient and almost no signal will flow through
the neuron to its weights and recursively to its data. Additionally,
one must pay extra caution when initializing the weights of sigmoid
neurons to prevent saturation. For example, if the initial weights are
too large then most neurons would become saturated and the network
will barely learn.
Sigmoid outputs are not zero-centered. This is undesirable since neurons in later layers of processing in a Neural Network (more on
this soon) would be receiving data that is not zero-centered. This has
implications on the dynamics during gradient descent, because if the
data coming into a neuron is always positive (e.g. $x > 0$
elementwise in $f = w^Tx + b$)), then the gradient on the weights
$w$ will during backpropagation become either all be positive, or
all negative (depending on the gradient of the whole expression
$f$). This could introduce undesirable zig-zagging dynamics in the
gradient updates for the weights. However, notice that once these
gradients are added up across a batch of data the final update for the
weights can have variable signs, somewhat mitigating this issue.
Therefore, this is an inconvenience but it has less severe
consequences compared to the saturated activation problem above.
Tanh. The tanh non-linearity is shown on the image above on the right. It squashes a real-valued number to the range [-1, 1]. Like the
sigmoid neuron, its activations saturate, but unlike the sigmoid
neuron its output is zero-centered. Therefore, in practice the tanh
non-linearity is always preferred to the sigmoid nonlinearity. Also
note that the tanh neuron is simply a scaled sigmoid neuron, in
particular the following holds: $ \tanh(x) = 2 \sigma(2x) -1 $.
Left: Rectified Linear
Unit (ReLU) activation function, which is zero when x < 0 and then
linear with slope 1 when x > 0. Right: A plot from Krizhevsky
et al. (pdf) paper indicating the 6x improvement in convergence
with the ReLU unit compared to the tanh unit.
ReLU. The Rectified Linear Unit has become very popular in the last few years. It computes the function $f(x) = \max(0, x)$. In
other words, the activation is simply thresholded at zero (see image
above on the left). There are several pros and cons to using the
ReLUs:
(+) It was found to greatly accelerate (e.g. a factor of 6 in Krizhevsky et
al.) the
convergence of stochastic gradient descent compared to the
sigmoid/tanh functions. It is argued that this is due to its linear,
non-saturating form.
(+) Compared to tanh/sigmoid neurons that involve expensive operations (exponentials, etc.), the ReLU can be implemented by simply
thresholding a matrix of activations at zero.
(-) Unfortunately, ReLU units can be fragile during training and can "die". For example, a large gradient flowing through a ReLU neuron
could cause the weights to update in such a way that the neuron will
never activate on any datapoint again. If this happens, then the
gradient flowing through the unit will forever be zero from that point
on. That is, the ReLU units can irreversibly die during training since
they can get knocked off the data manifold. For example, you may find
that as much as 40% of your network can be "dead" (i.e. neurons that
never activate across the entire training dataset) if the learning
rate is set too high. With a proper setting of the learning rate this
is less frequently an issue.
Leaky ReLU. Leaky ReLUs are one attempt to fix the "dying ReLU" problem. Instead of the function being zero when x < 0, a leaky ReLU will instead have a small negative slope (of 0.01, or so). That is, the function computes $f(x) = \mathbb{1}(x < 0) (\alpha x) + \mathbb{1}(x>=0) (x) $ where $\alpha$ is a small constant. Some people report success with this form of activation function, but the results are not always consistent. The slope in the negative region can also be made into a parameter of each neuron, as seen in PReLU neurons, introduced in Delving Deep into Rectifiers, by Kaiming He et al., 2015. However, the consistency of the benefit across tasks is presently unclear.
[](https://i.stack.imgur.com/1BX7l.png)
>
Maxout. Other types of units have been proposed that do not have the functional form $f(w^Tx + b)$ where a non-linearity is applied
on the dot product between the weights and the data. One relatively
popular choice is the Maxout neuron (introduced recently by
Goodfellow et
al.) that
generalizes the ReLU and its leaky version. The Maxout neuron computes
the function $\max(w_1^Tx+b_1, w_2^Tx + b_2)$. Notice that both
ReLU and Leaky ReLU are a special case of this form (for example, for
ReLU we have $w_1, b_1 = 0$). The Maxout neuron therefore enjoys
all the benefits of a ReLU unit (linear regime of operation, no
saturation) and does not have its drawbacks (dying ReLU). However,
unlike the ReLU neurons it doubles the number of parameters for every
single neuron, leading to a high total number of parameters.
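To compare the three units side by side, here is a small NumPy sketch (added for illustration; the weights and shapes are made up):
```
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # a toy input vector
w1, b1 = rng.normal(size=4), 0.1  # hypothetical weights of one unit
w2, b2 = rng.normal(size=4), -0.2

pre_act = w1 @ x + b1

relu_out = np.maximum(0.0, pre_act)                           # f(x) = max(0, x)
alpha = 0.01
leaky_out = np.where(pre_act >= 0, pre_act, alpha * pre_act)  # small negative slope
maxout_out = max(w1 @ x + b1, w2 @ x + b2)                    # max of two linear pieces

print(relu_out, leaky_out, maxout_out)
# Note: the Maxout unit needs both (w1, b1) and (w2, b2), i.e. twice the parameters.
```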
This concludes our discussion of the most common types of neurons and
their activation functions. As a last comment, it is very rare to mix
and match different types of neurons in the same network, even though
there is no fundamental problem with doing so.
TLDR: "What neuron type should I use?" Use the ReLU non-linearity, be careful with your learning rates and possibly
monitor the fraction of "dead" units in a network. If this concerns
you, give Leaky ReLU or Maxout a try. Never use sigmoid. Try tanh, but
expect it to work worse than ReLU/Maxout.
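As a side note on the "monitor the fraction of dead units" advice, the sketch below (not from the original notes; the layer sizes are arbitrary) shows how one might measure it for a single ReLU layer over a batch:
```
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(256, 50))          # a batch of 256 inputs with 50 features
W = rng.normal(size=(50, 100)) * 0.01   # hypothetical hidden-layer weights
b = np.zeros(100)

hidden = np.maximum(0.0, X @ W + b)     # ReLU activations, shape (256, 100)

# A unit is "dead" on this batch if it never activates for any input.
dead_fraction = np.mean((hidden > 0).sum(axis=0) == 0)
print(f"fraction of dead ReLU units on this batch: {dead_fraction:.2%}")
```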
---
>
License:
The MIT License (MIT)
Copyright (c) 2015 Andrej Karpathy
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
|
115401
|
1
|
115405
| null |
0
|
34
|
A model is trained to predict the median temperature of Boston. The resulting model works well according to their validation data. However, this model performs poorly when used to predict the temperature of Washington. Explain the reason and suggest a way of training a better model for Washington data.
I think these two datasets are not identically distributed, so the model obtained on one dataset is not generalizable to another dataset. The solution is that we should merge these two datasets and then do cross validation and train the model. Is it true?
|
Why an already trained model is not generalizable to another related dataset?
|
CC BY-SA 4.0
| null |
2022-10-20T01:14:48.833
|
2022-10-20T06:33:48.550
| null | null |
26019
|
[
"machine-learning",
"data-mining",
"generalization"
] |
This is likely because ML models fail to give optimal results when the distribution of the data changes, i.e. the data is not identically distributed. To solve this problem, yes, the best approach would be to merge the data and train a model on data from both cities. This will make sure that your training distribution aligns with the test/real-world data distribution, hence improving model performance.
|
Using a model for a different dataset
|
I think you answered yourself by "values of B are not comparable". Learning for prediction is based on a fundamental assumption: the data used for prediction has the same joint distribution as the data used for learning. This is the link between those processes.
Now, if you want to handle that in a meaningful way you have to know the source type somehow, in your example the device type. One way would be to introduce the device type as an additional column in your dataset, so that the model has the chance to differentiate between source types, as sketched below. Obviously you have to have training data for all the device types. Supposing you have 2 device types, A and B, your training data should have some columns for the signal and also a factor column with types A and B. Also you have to have enough instances of data with type A and type B.
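Here is a minimal pandas sketch of that idea (column names and values are hypothetical):
```
import pandas as pd

# Hypothetical training data from two device types
df = pd.DataFrame({
    "signal_mean": [0.9, 1.1, 110.0, 95.0],
    "signal_std":  [0.1, 0.2, 12.0, 9.0],
    "device_type": ["A", "A", "B", "B"],
    "target":      [0, 1, 0, 1],
})

# One-hot encode the device type so the model can condition on the source
X = pd.get_dummies(df.drop(columns="target"), columns=["device_type"])
y = df["target"]
print(X.columns.tolist())  # ['signal_mean', 'signal_std', 'device_type_A', 'device_type_B']
```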
|
115416
|
1
|
115445
| null |
2
|
88
|
I have a problem. I am currently looking at a classifier and I would like to examine this using an ROC curve as a metric. However, questions have arisen to which I can not find an answer.
A ROC curve describes the following
>
ROC curves are frequently used to show in a graphical way the
connection/trade-off between clinical sensitivity and specificity for
every possible cut-off for a test or a combination of tests. In
addition the area under the ROC curve gives an idea about the benefit
of using the test(s) in question.
- Why does an ROC curve become a curve in the first place?
- Why does TP (True Positive) and FP (False Positive) rate change?
- And why does the ratio vary?
[](https://i.stack.imgur.com/ZwvRA.png)
|
What makes an ROC curve a curve and why do the values change?
|
CC BY-SA 4.0
| null |
2022-10-20T13:35:44.360
|
2022-10-21T10:19:06.037
| null | null |
130860
|
[
"machine-learning",
"classification",
"metric",
"roc"
] |
The ROC curve is a parametric curve: each point has a respective third coordinate, the classification threshold.
The `.predict_proba()` method of sklearn models returns the class scores (the measure of a model's certainty of the prediction, or a probability for well-calibrated models).
By default, the sklearn `.predict()` method predicts the class by comparing this score to a 0.5 threshold.
Neural network classifiers often similarly decide on the class by applying argmax to the scores.
If we operate on the scores directly, however, we can try as many thresholds as there are unique score values (plus a zero one, which would classify everything as 1).
Each possible threshold yields a different confusion matrix. We start drawing the curve from the upper right (zero threshold), where most observations are classified as positive: that means perfect recall but also a high false positive rate. Towards the bottom left, more observations are classified as negative: recall decreases, but so do false positives.
A decent classifier with respect to this metric should, obviously, yield high recall and a low FPR for at least some thresholds.
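To see the one-point-per-threshold idea in code, here is a small scikit-learn sketch on synthetic data (purely illustrative): `roc_curve` returns one (FPR, TPR) pair for every candidate threshold, and the AUC summarizes the whole curve.
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]   # class scores, not hard labels

fpr, tpr, thresholds = roc_curve(y_te, scores)
print(len(thresholds))                   # one point on the curve per threshold
print(roc_auc_score(y_te, scores))       # area under that curve
```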
|
ROC curve interpretation
|
If you consider the area under the curve (AUC), you can see that the first classifier is better, since the AUC of the first curve is larger than that of the second classifier. To learn more about AUC you may find [this post](https://stats.stackexchange.com/a/132832/144441) useful.
|
115429
|
1
|
120944
| null |
2
|
35
|
I read on [https://towardsdatascience.com/choosing-the-right-language-model-for-your-nlp-use-case-1288ef3c4929](https://towardsdatascience.com/choosing-the-right-language-model-for-your-nlp-use-case-1288ef3c4929):
[](https://i.stack.imgur.com/6cI3W.png)
I find the curve for T5 to be particularly interesting. What explains its recent resurgence?
|
What explains T5's recent resurgence?
|
CC BY-SA 4.0
| null |
2022-10-20T19:35:43.733
|
2023-04-16T01:54:57.117
| null | null |
843
|
[
"nlp",
"language-model",
"social-network-analysis"
] |
I don't see FLAN-T5 in the list as well as the other T5 variants, so my guess is that all T5 variants got conflated into T5. Since fine-tuning (eg, FLAN-T5) has become very important in language models, this likely explains T5's recent resurgence according to the plot in the question details.
|
Lagged Features
|
Lagged values of features make sense with time series data; this is usually fundamental in time series analysis (because of autocorrelation). Now, whether you should include a lag of a feature or not is a different question, one that is very much data and model dependent, so we cannot answer this definitively.
One thing you might check is the aforementioned autocorrelation, if a feature has autocorrelation then maybe you should include lags, if however there is no autocorrelation then lags are probably useless.
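If you do decide to add lags, a minimal pandas sketch (column names are made up) looks like this:
```
import pandas as pd

# Hypothetical daily series
df = pd.DataFrame({"sales": [10, 12, 11, 15, 14, 13, 16, 18, 17, 19]})

df["sales_lag1"] = df["sales"].shift(1)   # value from the previous day
df["sales_lag7"] = df["sales"].shift(7)   # value from one week earlier

# Rows without a full set of lags contain NaN and are usually dropped
df = df.dropna()
print(df.head())
```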
|
115440
|
1
|
115457
| null |
1
|
36
|
I have a problem. I have a NLP classification problem.
There are different methods to decompose sentences into tokens, for example in whole words or in characters. Then there are different tokenizers like:
- TF-IDF
- Binary
- Frequency
- Count
My question is: why should one make the effort to use different word splits (word or character level) and then evaluate them with the different tokenizers?
|
Why is it useful to use different word splitting with different tokenizers?
|
CC BY-SA 4.0
| null |
2022-10-21T08:15:27.950
|
2022-10-21T19:17:52.610
| null | null |
130860
|
[
"nlp",
"tokenization"
] |
- It's rare to represent sentences as sequences of characters, since most NLP tasks are related to the semantics of the sentence, which is expressed by the sequence of words. A notable exception: stylometry tasks, i.e. tasks where the style of the text/author matters more than the topic/meaning, sometimes rely on sequences of characters.
- Yes, the question of tokenization can indeed have an impact on the performance of the target task. But modern methods use good word tokenizers trained on large corpora, not simplified whitespace-based tokenizers. There can still be differences between tokenizers though.
- There are even more text representations methods than listed here (embeddings are an important one). And yes, these also have a huge impact on performance.
For all these different options (and others), the reason why it's often worth testing different variants is clear: it affects performance and it's not always clear which one is the best without trying, so one must evaluate the different options. Btw it's crucial to precisely define how the target task is evaluated first, otherwise one just subjectively interprets results.
Basically imho this is a matter of proper data-driven methodology. Of course experience and intuition also play a role, especially if there are time or resource constraints.
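As a concrete illustration of how the splitting level can be varied, scikit-learn's vectorizers expose it through the `analyzer` argument (a toy sketch, not a recommendation of these exact settings):
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

corpus = ["the cat sat on the mat", "the dog sat on the log"]

word_vec = CountVectorizer(analyzer="word")                          # word-level tokens
char_vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 3))   # character n-grams

word_X = word_vec.fit_transform(corpus)
char_X = char_vec.fit_transform(corpus)

print(len(word_vec.vocabulary_), "word features")
print(len(char_vec.vocabulary_), "character n-gram features")
```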
|
NLP: what are the advantages of using a subword tokenizer as opposed to the standard word tokenizer?
|
Subword tokenization is the norm nowadays in NLP models because:
- It mostly avoids the out-of-vocabulary (OOV) word problem. Word vocabularies cannot handle words that are not in the training data. This is a problem for morphologically-rich languages, proper nouns, etc. Subword vocabularies allow representing these words. By having subword tokens (and ensuring the individual characters are part of the subword vocabulary), makes it possible to encode words that were not even in the training data. There's still the problem with characters not present in the training data, but that's tolerable in most of the cases.
- It gives manageable vocabulary sizes. Current neural networks need a pre-defined closed discrete token vocabulary. The vocabulary size that a neural network can handle is far smaller than the number of different words (surface forms) in most normal languages, especially morphologically-rich ones (and especially agglutinative ones).
- Mitigates data sparsity. In a word-based vocabulary, low-frequency words may appear very few times in the training data. This is especially troublesome for agglutinative languages, where a surface form may be the result of concatenating multiple affixes. Using subword tokenization allows token reusing, and increases the frequency of their appearance.
- Neural networks perform very well with them. In all sorts of tasks, they excel: neural machine translation, NER, etc, you name it, the state of the art models are subword-based: BERT, GPT-3, Electra,...
|
115455
|
1
|
115464
| null |
1
|
113
|
I have this time series below, that I divided into train, val and test:
[](https://i.stack.imgur.com/kx2Q6.png)
Basically, I trained an ARIMA and an LSTM on those data, and results are completely different, in terms of prediction:
ARIMA:
[](https://i.stack.imgur.com/aVUwl.png)
LSTM:
[](https://i.stack.imgur.com/3ddRU.png)
Now, maybe I am passing, in some way, the test set to the LSTM so that it performs better? Or is the LSTM simply (a lot) better than ARIMA?
Below there is some code. Note that in order to do prediction in future days, I am adding the new and last predicted value to my series, before training and predicting:
ARIMA code:
```
# Create list of x train values
history = [x for x in x_train]
# establish list for predictions
model_predictions = []
# Count number of test data points
N_test_observations = len(x_test)
# loop through every data point
for time_point in list(x_test.index[-N_test_observations:]):
model = sm.tsa.arima.ARIMA(history, order=(3,1,3), seasonal_order=(0,0,0,7))
model_fit = model.fit()
output = model_fit.forecast()
yhat = output[0]
model_predictions.append(yhat)
true_test_value = x_test[time_point]
#history.append(true_test_value)
history.append(yhat)
MAE_error = mean_absolute_error(x_test, model_predictions)
print('Testing Mean Absolute Error is {}'.format(MAE_error))
Testing Mean Absolute Error is 86.71141520892097
```
LSTM code:
```
def sequential_window_dataset(series, window_size):
ds = tf.data.Dataset.from_tensor_slices(series)
ds = ds.window(window_size + 1, shift=window_size, drop_remainder=True)
ds = ds.flat_map(lambda window: window.batch(window_size + 1))
ds = ds.map(lambda window: (window[:-1], window[1:]))
return ds.batch(1).prefetch(1)
# reset any stored data
keras.backend.clear_session()
tf.random.set_seed(42)
np.random.seed(42)
# set window size and create input batch sequence
window_size = 30
train_set = sequential_window_dataset(normalized_x_train, window_size)
valid_set = sequential_window_dataset(normalized_x_valid, window_size)
# create model
model = keras.models.Sequential([
keras.layers.LSTM(100, return_sequences=True, stateful=True,
batch_input_shape=[1, None, 1]),
keras.layers.LSTM(100, return_sequences=True, stateful=True),
keras.layers.Dense(1),
])
# set optimizer
optimizer = keras.optimizers.Nadam(lr=0.00033)
# compile model
model.compile(loss=keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
# reset states
reset_states = ResetStatesCallback()
#set up save best only checkpoint
model_checkpoint = keras.callbacks.ModelCheckpoint(
"my_checkpoint", save_best_only=True)
early_stopping = keras.callbacks.EarlyStopping(patience=50)
# fit model
model.fit(train_set, epochs=500,
validation_data=valid_set,
callbacks=[early_stopping, model_checkpoint, reset_states])
# recall best model
model = keras.models.load_model("my_checkpoint")
# make predictions
rnn_forecast = model.predict(normalized_x_test[np.newaxis,:])
rnn_forecast = rnn_forecast.flatten()
# Example of how to iverse
rnn_unscaled_forecast = x_train_scaler.inverse_transform(rnn_forecast.reshape(-1,1)).flatten()
rnn_unscaled_forecast.shape
'LSTM': 9.964744041030935
```
Maybe there is something with that window size of the LSTM? Or maybe something when I do predictions for LSTM? `# make predictions rnn_forecast = model.predict(normalized_x_test[np.newaxis,:])`
|
Why so discrepancy between ARIMA and LSTM in time series forecasting?
|
CC BY-SA 4.0
| null |
2022-10-21T15:26:42.140
|
2022-10-24T13:23:52.780
| null | null |
109836
|
[
"deep-learning",
"time-series",
"lstm",
"forecasting",
"arima"
] |
Arima and LSTM are very different and there could be some tips to improve results.
Have you tried relative values instead of raw values?
For instance:
```
#Raw values:
raw=[1200, 1300, 1250, 1370]
#Relative (or differential) values:
diff=[+100,-50,+120]
```
Sometimes, raw values like 1400 could alter the results for ARIMA and LSTM differently.
On the other hand, LSTM could have bad predictions with noisy data. Some smoothing could improve results, but it depends on the kind of data.
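As a concrete illustration, differencing and smoothing take only a couple of lines with pandas (a sketch; `series` stands in for your raw values):
```
import pandas as pd

series = pd.Series([1200, 1300, 1250, 1370, 1390, 1350])

diffed = series.diff().dropna()             # relative/differential values: +100, -50, +120, ...
smoothed = series.rolling(window=3).mean()  # simple moving average to reduce noise

print(diffed.tolist())
print(smoothed.dropna().tolist())
```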
Finally, are you trying to forecast 30 days in a single shot? Most predictions focus on 1-day forecast and set their precision on the sequential results from one day to another on the 30 days of validation data.
If your aim is to get accurate long-term forecasting, ARIMA and LSTM might not be the best solutions (especially ARIMA), because they have their own structural limitations. This could also explain why the LSTM results have a gap with the real results: some internal mechanisms have limited memory and wrongly predict important increases or decreases of values.
The shape result of LSTM seems correct, but there is a small shift in Y of 10 because it initially predicted a smaller decrease. LSTM is quite difficult to understand: all I can say is that weights are connected to each other and peaks are more difficult to predict because of those dependencies. I recommend reading the initial paper, it's very interesting:
[https://www.researchgate.net/publication/13853244_Long_Short-term_Memory](https://www.researchgate.net/publication/13853244_Long_Short-term_Memory)
My advice is to give up some temporal resolution by grouping values (e.g. make predictions per week instead of per day) or to use long-term models like these:
[https://towardsdatascience.com/extreme-event-forecasting-with-lstm-autoencoders-297492485037](https://towardsdatascience.com/extreme-event-forecasting-with-lstm-autoencoders-297492485037)
[https://thuijskens.github.io/2016/08/03/time-series-forecasting/](https://thuijskens.github.io/2016/08/03/time-series-forecasting/)
[https://arxiv.org/pdf/2210.08244.pdf](https://arxiv.org/pdf/2210.08244.pdf)
|
Why is my prediction using ARIMA better if I'm using less historic data?
|
Electricity prices are essentially the same as stock prices: best modelled by a random walk, where the best prediction for tomorrow is the price today. Therefore I am not really surprised that you get worse results using more historical data.
Some versions of ARIMA will also include regularisation, which will punish your model for including more and more data - inclusion of new data must be justified by contributing to a lower residual error to be "worth" inclusion.
Other models that tend to be a little more robust use features other than the actual target, here the price. For example, trying to predict the volatility of the price might prove to be more accurate. For this there are GARCH models (Generalised AutoRegressive Conditional Heteroskedasticity).
Another thing you might consider is to include external data... for example, electricity consumption is heavily influenced by the weather - if it is cold outside, a lot of people heat their homes using electrical heaters, they also drink more hot drinks etc.
|
115524
|
1
|
115558
| null |
0
|
254
|
I have a data that looks like this:
[](https://i.stack.imgur.com/6evng.png)
The T2M row indicates the temperature, and the following row is the year. I want to append all the columns for the same parameter under a single column covering all the years, so I will end up with only one T2M column, and the final dataframe would look like this
```
Parameter | T2M | ...
Year | 1981 | ...
Jan
Feb
.
.
Year | 1982 | ...
.
.
.
```
I tried the following but it doesn't work:
```
dff = df.copy()
temp = df.iloc[:,1]
dff.append(temp)
```
I get this error :
```
ValueError: cannot reindex from a duplicate axis
```
which doesn't make sense because [here](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.append.html) in the first example similar indices were used.
|
Append Existing Columns to another Column in Pandas Dataframe
|
CC BY-SA 4.0
| null |
2022-10-24T07:59:25.263
|
2022-10-25T05:05:01.463
| null | null |
54586
|
[
"pandas",
"data-cleaning",
"dataframe"
] |
Ok, I figured out the problem. The duplicate axis error was coming up because the dataframe has multiple columns with the name 'T2M', so `append()` could not figure out to which column it should append the new values.
Instead, I copied the dataframe, deleted all the columns to be appended in the copy, and extracted the data from the original df into the copied one. Since all columns in the copy are unique, everything went fine.
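For reference, one way to do this kind of reshaping without `append()` is to slice each year's block of columns, give them common names and stack the blocks with `pd.concat`. A rough sketch with a made-up layout (adjust the slicing to your actual file):
```
import pandas as pd

# Toy frame mimicking the layout: one block of parameter columns per year
df = pd.DataFrame({
    "Month": ["Jan", "Feb"],
    "T2M_1981": [1.2, 2.3], "PRECTOT_1981": [30.0, 25.0],
    "T2M_1982": [1.5, 2.1], "PRECTOT_1982": [28.0, 31.0],
})

years = [1981, 1982]
blocks = []
for i, year in enumerate(years):
    # take the i-th block of parameter columns (adjust the slice to your layout)
    block = df.iloc[:, 1 + i * 2 : 1 + (i + 1) * 2].copy()
    block.columns = ["T2M", "PRECTOT"]   # common column names for every block
    block.insert(0, "Year", year)
    block.insert(0, "Month", df["Month"])
    blocks.append(block)

long_df = pd.concat(blocks, ignore_index=True)  # a single T2M column covering all years
print(long_df)
```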
|
how to create new columns in pandas using some rows of existing columns?
|
Instead of perhaps iterating of each row and filling the gaps as required, I would suggest trying to do it via indexing. The solution is:
```
df['category'] = df.where(~df.id.isnull())['item'].ffill()
```
---
Here I break down my solution to help you understand why it works.
Imagine your dataframe is called `df`. I created a small version of yours as follows:
```
In [1]: import pandas as pd
In [2]: df = pd.DataFrame.from_dict(
{'id': [1, None, None, 2, None, None, 3, None, None],
'item': ['CAPITAL FUND', 'A', 'B', 'BORROWINGS', 'A', 'B', 'DEPOSITS', 'A', 'B']})
In [3]: df # see what it looks like
Out[3]:
id item
0 1.0 CAPITAL FUND
1 NaN A
2 NaN B
3 2.0 BORROWINGS
4 NaN A
5 NaN B
6 3.0 DEPOSITS
7 NaN A
8 NaN B
```
I get the dataframe back where the `id` column is not null (`~` reverses the `isnull()`). On the resulting dataframe, I take only the `item` column (using `[item]`) and then fill the missing gaps, using the previous valid value in that column.
```
In [4]: df['category'] = df.where(~df.id.isnull())['item'].ffill()
In [5]: df
Out[5]:
id item category
0 1.0 CAPITAL FUND CAPITAL FUND
1 NaN A CAPITAL FUND
2 NaN B CAPITAL FUND
3 2.0 BORROWINGS BORROWINGS
4 NaN A BORROWINGS
5 NaN B BORROWINGS
6 3.0 DEPOSITS DEPOSITS
7 NaN A DEPOSITS
8 NaN B DEPOSITS
```
The trick is to understand this part: `df.where(~df.id.isnull())['item']`
It returns really the whole dataframe, with the values where `~df.id.isnull()` is `True`. Then only the `item` dataframe. The result is this:
```
In [6]: df.where(~df.id.isnull())['item']
Out[6]:
0 CAPITAL FUND
1 NaN
2 NaN
3 BORROWINGS
4 NaN
5 NaN
6 DEPOSITS
7 NaN
8 NaN
```
Now it should be clear why the final `.ffill()` works as we would like. It forward fills the missing values, using the last known valid value.
|
115553
|
1
|
115557
| null |
0
|
21
|
I am quite new to some concepts of machine learning and am having a hard time understanding the following.
Suppose I have a supervised classifier (random forest) trained with a dataset with several features.
Do the features in the test dataset need to have values that are somewhat similar (or close) to the training data (or in the same domain)?
For example, take training data record: <'label A', 12, 23, 3412, 65> (assume other 'label A' types are similar to this, with only +-10 difference for each feature), test data: X: <10, 21, 3000, 80> and Y: <0.12. 0.23, 34.12, 0.65>.
Out of X and Y, which has a higher chance of being classified as type 'label A'?
Please make a note of any assumptions you make.
|
Does it help to have similar values for features in train and test data to make accurate predictions?
|
CC BY-SA 4.0
| null |
2022-10-24T22:11:21.250
|
2022-10-25T03:12:15.150
| null | null |
130442
|
[
"machine-learning",
"random-forest",
"supervised-learning"
] |
This is both a simple and complex concept and one that we are always concerned with while building models.
The short answer is, the data in your training and test sets needs to be randomly selected, and accordingly you have no control over the range and variation in either set relative to a fixed amount of modeling data.
The long answer is that any model will interpolate better than it will extrapolate because it is easier to describe what is known by the model. So, if there are a lot of values outside of the training range, it will for sure affect the predictive capabilities of your resulting model.
The nuances of this depend on a lot of things: how much data you have, how divergent the training and test variables are, what specific model tuning parameters you use, and how much the outcome variable varies with its predictors, why and how.
But ultimately the training set should be large enough and varied enough such that it captures all of the variance in the outcome variable you hope to predict. This more than likely means that your features should be fully described at all possible values they might take.
|
Different number of features in train vs test
|
You could concatenate your train and test datasets, create dummy variables and then separate the datasets again.
Something like this:
```
import copy
import pandas as pd

# Remember how many rows belong to the training set
train_objs_num = len(train)
# Stack train and test, one-hot encode them together, then split back
dataset = pd.concat(objs=[train, test], axis=0)
dataset = pd.get_dummies(dataset)
train = copy.copy(dataset[:train_objs_num])
test = copy.copy(dataset[train_objs_num:])
```
|
115559
|
1
|
115576
| null |
0
|
23
|
I have a dataset with calls from day 1 to day 340. What model can I fit to mathematically capture the pattern?
There are only 1- or 2-digit numbers of calls on all days except days 61, 62, 63, days 121, 122, 123 and day 170, when there are 3- to 4-digit numbers of calls.
|
What model to fit to call center data
|
CC BY-SA 4.0
| null |
2022-10-25T05:06:18.410
|
2022-10-25T12:52:22.133
| null | null |
63933
|
[
"machine-learning",
"prediction",
"forecasting"
] |
I don't think you are going to be able to capture this pattern unless you include some kind of information around those 3 blow up periods. Were these holidays, new product release, stimulus checks, govt requirements, etc? Best make a variable to capture that. If you don't know why you had a sudden increase of 2 to 3 magnitudes, you should probably understand your data better.
Depending on your needs, a solid line graph with some well placed text and color might get you just as far as a fancy model. Eg, 98% of days we had roughly X calls, but during Holidays, calls increase Y%. Please consider temp hiring during holidays to solve this problem.
|
What kind of model should I use?
|
Use an unsupervised method such as clustering to group users, then assign marketing campaigns that have been used by others within the same cluster.
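A minimal scikit-learn sketch of that idea (the user features are made up):
```
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical user features
users = pd.DataFrame({
    "age": [22, 25, 47, 52, 46, 56],
    "avg_spend": [20, 25, 300, 320, 280, 350],
})

X = StandardScaler().fit_transform(users)
users["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# For each cluster, reuse the marketing campaigns that worked for its members
print(users.groupby("cluster").size())
```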
|
115560
|
1
|
115564
| null |
0
|
49
|
I have a dataframe where some columns have superscripts in their values (strings/words).
Example
[](https://i.stack.imgur.com/rua30.png)
[](https://i.stack.imgur.com/kuaVZ.png)
Pandas reads them as Rasha, Fatiguec, Pyrexiab. Is there any way I can make it read them properly, or even some other way to remove the superscript from those words?
UPDATE: I found the solution under this post [https://stackoverflow.com/questions/64309887/pandas-read-html-ignore-superscripts-and-subscripts](https://stackoverflow.com/questions/64309887/pandas-read-html-ignore-superscripts-and-subscripts)
|
ignore exponent of a word in dataframe
|
CC BY-SA 4.0
| null |
2022-10-25T05:57:25.010
|
2022-10-27T04:43:03.563
|
2022-10-27T04:43:03.563
|
130902
|
130902
|
[
"pandas",
"data-cleaning"
] |
I think you have to manually exclude those superscript letters afterwards - possibly even directly via the underlying code. Take a look at the wikipedia page regarding [Unicode Subscripts and Superscripts](https://en.wikipedia.org/wiki/Unicode_subscripts_and_superscripts).
Quick and dirty example code via the particular superscript character:
```
import re
text = "Rashᵃ"
re.sub("(ᵃ)", '', text)
Out[1]: 'Rash'
```
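A slightly more general variant (my own suggestion, so treat it as an assumption and test it on your data) drops every character whose Unicode compatibility decomposition is tagged `<super>` or `<sub>`, which covers superscript/subscript letters and digits in one pass:
```
import unicodedata

def strip_super_subscripts(text: str) -> str:
    # keep a character unless its compatibility decomposition marks it
    # as a superscript or subscript form
    return "".join(
        ch for ch in text
        if not unicodedata.decomposition(ch).startswith(("<super>", "<sub>"))
    )

print(strip_super_subscripts("Rashᵃ Fatigueᶜ Pyrexiaᵇ"))  # 'Rash Fatigue Pyrexia'
```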
|
Problem extracting words from dataframe
|
Your question is a bit confusing. So, as far as I understood from your examples, for each sample of list_asm you want to extract the very first word from the string.
The thing you are doing wrong is treating the string as a list. That is, `['uncomisd xmm2, xmm2', 'jp 0x40', ...]` is considered a string by python, not a list.
Thus, you need to extract the strings from your list first; then you can take the first word from each of these strings.
To achieve that, you can use a regular expression to find all the strings that are inside of quotes `'...'`.
```
import pandas as pd
import re
# Read the file into dataframe
dataFrame = pd.read_json("dataset.json", lines=True)
# First extract the strings the take the first word of each string
dataFrame['opcodes'] = dataFrame['lista_asm'].apply(lambda x: [i.split()[0] for i in re.findall("'([^']*)'", x)])
print(dataFrame)
```
or modular form of the code would be:
```
import pandas as pd
import re
# Function to extract the first words from each string
def extractFirstWord(str):
listOfWords = re.findall("'([^']*)'", str)
return [i.split()[0] for i in listOfWords]
# Read the file into dataframe
dataFrame = pd.read_json("dataset.json", lines=True)
dataFrame['opcodes'] = dataFrame['lista_asm'].apply(lambda x: extractFirstWord(x))
print(dataFrame)
```
The result:
[](https://i.stack.imgur.com/sZ7oR.png)
|
115582
|
1
|
115583
| null |
0
|
27
|
Hi, I am wondering, when it comes to normalising images across each of the channels, do you use the same scaling factors for the testing set as the ones used for training, or separate ones?
In traditional ML problems using scikit-learn, the usual procedure is normalise the training data and apply the same scaler for testing data
```
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, shuffle = True)
scaler = MinMaxScaler()
X_train_norm = scaler.fit_transform(X_train)
X_test_norm = scaler.transform(X_test)
```
However when using deep learning I am wondering whether the same procedure is used for image data
For example
```
import torch
from torchvision import transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
# Resize images to (3,150,150) and convert to torch tensors; ToTensor already scales values from [0, 255] to [0, 1], so no manual division by 255 is needed
train_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
])
test_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
])
train_datasets = ImageFolder(root = "data/dogs-vs-cats/concise_dataset/train", transform = train_transforms)
test_datasets = ImageFolder(root = "data/dogs-vs-cats/concise_dataset/test", transform = test_transforms)
# just a function to get mean of the mean and std for each channel across the entire train and test sets separately
def get_mean_and_std(dataset):
mean_values = torch.zeros(len(dataset),3)
std_values = torch.zeros(len(dataset),3)
for idx, (img, lab) in enumerate(dataset):
mean_values[idx, :] = img.mean(dim = [1,2])
std_values[idx, :] = img.std(dim = [1,2])
print(f"mean of entire dataset : {mean_values.mean(dim = 0)}")
print(f"std of entire dataset : {std_values.mean(dim = 0)}")
get_mean_and_std(train_datasets)
# mean of entire dataset : tensor([0.4854, 0.4515, 0.4143])
# std of entire dataset : tensor([0.2233, 0.2178, 0.2185])
get_mean_and_std(test_datasets)
# mean of entire dataset : tensor([0.4902, 0.4571, 0.4188])
# std of entire dataset : tensor([0.2257, 0.2203, 0.2207])
```
Now apply these means and standard deviations separately for training and testing data.
```
train_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
transforms.Normalize(mean = [0.4854, 0.4515, 0.4143], std = [0.2233, 0.2178, 0.2185])
])
test_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
transforms.Normalize(mean = [0.4902, 0.4571, 0.4188], std = [0.2257, 0.2203, 0.2207])
])
```
or should I apply the same mean and std of train set to the test set?
```
test_transforms = transforms.Compose([
transforms.Resize((150,150)),
transforms.ToTensor(),
transforms.Normalize(mean = [0.4854, 0.4515, 0.4143], std = [0.2233, 0.2178, 0.2185])
])
```
|
Normalising Image Data
|
CC BY-SA 4.0
| null |
2022-10-25T16:08:35.483
|
2022-10-25T16:31:38.630
| null | null |
91519
|
[
"pytorch",
"computer-vision",
"normalization"
] |
The method is the same as it is for traditional ML problems, i.e. you need to apply the same mean and standard deviation to the test data as you do for the training data. The mean and standard deviation used are derived from the training data, but depending on the type of problem and data used you can also use the values derived from the ImageNet dataset.
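In code, that means reusing the training-set statistics in both transform pipelines, for example (values taken from the question's training set):
```
from torchvision import transforms

# Statistics computed on the *training* images only
train_mean = [0.4854, 0.4515, 0.4143]
train_std = [0.2233, 0.2178, 0.2185]

train_transforms = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.ToTensor(),
    transforms.Normalize(mean=train_mean, std=train_std),
])

# The test pipeline reuses the same mean/std; it never computes its own
test_transforms = transforms.Compose([
    transforms.Resize((150, 150)),
    transforms.ToTensor(),
    transforms.Normalize(mean=train_mean, std=train_std),
])
```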
|
Normalising data with multiple methods
|
I've not seen any paper about that, but based on what I've faced till now, normalizing data is intuitively just about assigning the same importance to different features whose raw values do not have the same range. Take a look at [here](https://datascience.stackexchange.com/a/27617/28175). Also, you can take a look at [here](https://www.coursera.org/learn/machine-learning/lecture/xx3Da/gradient-descent-in-practice-i-feature-scaling) where the professor says that you just need to employ a technique and it's not really important which one. Also, take a look at [here](https://www.coursera.org/learn/machine-learning/supplement/CTA0D/gradient-descent-in-practice-i-feature-scaling).
|
115585
|
1
|
115617
| null |
0
|
305
|
I'm training a machine learning model using [YOLOv5 from Ultralytics](https://github.com/ultralytics/yolov5) (arch: YOLOv5s6).
The task is to detect and identify laundry symbols.
For that, I've scraped and labeled 600 images from Google.
Using this dataset, I receive a result with an mAP around 0.6.
But 600 images is a tiny dataset and there are multiple laundry symbols where I have only 1-4 images for training and symbols where I have 100 and more.
So I started writing a Python script which generates more images of laundry symbols.
The script basically takes a background image and adds randomly positioned 1-10 laundry symbols in different colors and rotations. No background is used twice.
With that script, I generated around 6,000 entirely different images with laundry symbols, so that every laundry symbol appears at least 800 times in the dataset.
Here are examples of the generated data:
[](https://i.stack.imgur.com/jaQmF.jpg)
[](https://i.stack.imgur.com/OTZId.jpg)
I combined the scraped and the generated dataset and retrained the model with the same configuration. The result is really bad: the mAP dropped to 0.15 and the model overfits. The confusion matrix told me why:
[](https://i.stack.imgur.com/cBJSF.png)
## Why is the model learning the background instead of the objects?
First I thought my annotation might be wrong, but the training script from Ultralytics saves a few examples of training batch images - there the boxes are drawn perfectly around the generated symbols.
For completeness, below are more analytics added about the training:
## More analytics
Labels
[](https://i.stack.imgur.com/vrMV1.jpg)
Curves
[](https://i.stack.imgur.com/3rKKz.png)
More examples from the dataset
[](https://i.stack.imgur.com/IunEF.jpg)
|
What can I do when my object detection model learns background images instead the objects?
|
CC BY-SA 4.0
| null |
2022-10-25T16:37:06.147
|
2022-10-26T15:03:26.570
| null | null |
88913
|
[
"machine-learning",
"dataset",
"machine-learning-model",
"computer-vision",
"object-detection"
] |
I asked the same question on Reddit and got a few replies. The main reason my model is not performing on synthetic data is that YOLO looks at the whole picture and tries to learn the context, not only the patterns of the laundry symbols. The background is just too random for YOLO.
A Reddit user even created a video about this, explaining it using a card game: [https://www.youtube.com/watch?v=auEvX0nO-kw](https://www.youtube.com/watch?v=auEvX0nO-kw)
Referenced Reddit posts:
- https://www.reddit.com/r/MachineLearning/comments/ydc9n1/p_object_detection_model_learns_backgrounds_and/
- https://www.reddit.com/r/datascience/comments/ydbkaf/object_detection_model_learns_backgrounds_and_not/
|
Training a model for object detection
|
You could try a ["Mask R-CNN"](https://github.com/matterport/Mask_RCNN) The github page contains a keras model that could be trained by yourself. And there is also a link to an article how to train the model yourself step-by-step.
|
115586
|
1
|
115593
| null |
2
|
807
|
I'm trying to build an outlier detector to find outliers in test data. That data varies a bit (more test channels, longer/shorter testing).
First I'm applying the train test split because I want to use grid search for hyperparameter tuning. This is timeseries data from multiple sensors and I removed the time column beforehand.
```
X shape : (25433, 17)
y shape : (25433, 1)
X_train, X_test, y_train, y_test = train_test_split(X,
y,
test_size=0.33,
random_state=(0))
```
I standardize afterwards and then change the arrays into int arrays because GridSearch doesn't seem to like continuous data. This surely can be done better, but I want this to work before I optimize the coding.
```
'X'
mean = StandardScaler().fit(X_train)
X_train = mean.transform(X_train)
X_test = mean.transform(X_test)
X_train = np.round(X_train,2)*100
X_train = X_train.astype(int)
X_test = np.round(X_test,2)*100
X_test = X_test.astype(int)
'y'
yeah = StandardScaler().fit(y_train)
y_train = yeah.transform(y_train)
y_test = yeah.transform(y_test)
y_train = np.round(y_train,2)*100
y_train = y_train.astype(int)
y_test = np.round(y_test,2)*100
y_test = y_test.astype(int)
```
I chose the IForest because it's fast, has pretty good results and can handle huge datasets (I currently only use a chunk of the data for testing). Setting up the GridSearchCV:
```
clf = IForest(random_state=47, behaviour='new',
n_jobs=-1)
param_grid = {'n_estimators': [20,40,70,100],
'max_samples': [10,20,40,60],
'contamination': [0.1, 0.01, 0.001],
'max_features': [5,15,30],
'bootstrap': [True, False]}
fbeta = make_scorer(fbeta_score,
average = 'micro',
needs_proba=True,
beta=1)
grid_estimator = model_selection.GridSearchCV(clf,
param_grid,
scoring=fbeta,
cv=5,
n_jobs=-1,
return_train_score=True,
error_score='raise',
verbose=3)
grid_estimator.fit(X_train, y_train)
```
The Problem:
I can't fit the grid_estimator.
GridSearchCV needs a `y` argument; without `y` it gives me the "missing y_true" error.
What should be used as a target here? At the moment I just passed an important data column to `y` for testing, but I'm getting this error that I don't understand:
```
ValueError: Classification metrics can't handle a mix of multiclass and continuous-multioutput
targets
```
I also got the advice that I need a scoring function and that the IForest doesn't have one.
I couldn't find useful information on this; are there any helpful guides or info that could help me?
|
Can GridSearchCV be used for unsupervised learning?
|
CC BY-SA 4.0
| null |
2022-10-25T16:52:19.477
|
2022-10-26T09:32:48.320
| null | null |
141984
|
[
"python",
"outlier",
"grid-search",
"isolation-forest"
] |
The goal of `GridSearchCV` is to iterate over (hence search) all possible combinations (hence grid) of hyper parameters and evaluate a model on a cross-validation (hence CV). You do need some score to compare models with different sets of hyper parameters. If you can come out with some reasonable way to score a model after the fit, you can write a custom scoring function. If this scoring function does not require target (y) to be computed, you can simply pass an array of zeros to `GridSearchCV`. The example of such scorer is given [here](https://stackoverflow.com/questions/58186702/using-gridsearchcv-with-isolationforest-for-finding-outliers).
Otherwise, if you use some supervised model on a filtered (by IsolationTrees) data, you can do that using Pipelines, and run GridSearchCV on that, see [examples](https://scikit-learn.org/stable/modules/compose.html) in sklearn docs:
```
from sklearn.pipeline import Pipeline
from sklearn.ensemble import IsolationForest
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
estimators = [('filter_data_it', IsolationForest()),
('clf', LogisticRegression())]
pipe = Pipeline(estimators)
param_grid = dict(filter_data_it__max_features=[5,15,30], clf__C=[0.1, 10])
grid_search = GridSearchCV(pipe, param_grid=param_grid)
```
Recall that when you use Pipelines you need to prefix the `param_grid` keys with the name of the pipeline step.
UPD1. As stated in the comments, IF doesn't have a `transform` method, thus simple chaining will not work. The way IF works is by predicting outliers, not by filtering the data (you are supposed to filter out the outliers afterwards). However, there is a way around this problem: we need to create a new class with a transform method, which will run IF and filter the data based on its predictions. I will update the code snippet.
It turns out there is no clear way to adapt sklearn API for that purpose, as stated in these questions, [1](https://stackoverflow.com/questions/24896178/sklearn-have-an-estimator-that-filters-samples), [2](https://stackoverflow.com/questions/18602489/using-a-transformer-estimator-to-transform-the-target-labels-in-sklearn-pipeli), also [this](https://stackoverflow.com/a/24917063/9697134) answer suggest a solution, however it is relatively complex. Thus, I suggest you proceed with scorer example.
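For completeness, here is a rough sketch of such a scorer (my own illustration using sklearn's `IsolationForest`; the mean decision score is just one possible heuristic). GridSearchCV accepts any callable with the signature `scorer(estimator, X, y)`, and the `y` it receives can simply be ignored:
```
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.model_selection import GridSearchCV

X_train = np.random.RandomState(0).randn(200, 17)  # stand-in for your scaled features

def mean_score(estimator, X, y=None):
    # Heuristic: average anomaly score of the fitted model on X
    # (higher means "more normal" for sklearn's IsolationForest).
    return np.mean(estimator.decision_function(X))

param_grid = {"n_estimators": [20, 40, 70, 100],
              "max_samples": [10, 20, 40, 60]}

grid = GridSearchCV(IsolationForest(random_state=47),
                    param_grid,
                    scoring=mean_score,
                    cv=5, n_jobs=-1)
grid.fit(X_train, np.zeros(len(X_train)))  # dummy y, never used by the scorer
print(grid.best_params_)
```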
|
Is GridSearchCV in combination with ImageDataGenerator possible and recommendable?
|
As promised, here you can find an example of how you could apply kfold cross validation for a defined convolutional neural network model, applied to an augmented dataset. You can find the code as a simple gist [here](https://gist.github.com/GermanCM/03754e11ac7e9a6343754ff389eb47f0)
It is done as follows:
- for a subset of the CIFAR10 images dataset, generate 3 augmented images (by applying horizontal_flip) per original image, so we should finally have as the number of final images in the augmented dataset: 'number of images in the original dataset' * 3.
[](https://i.stack.imgur.com/A81Iw.png)
- check that indeed the built augmented dataset has the new expected number of images. We have just created the augmented dataset, not the fit step yet
[](https://i.stack.imgur.com/LZKvO.png)
- apply kfold cross validation on the augmented dataset for several hiperparameters combinations; in this example, 3 pairs of 'learning rate-momentum' have been tried. It is made via the usual 'fit' method:
- display the results in a dataframe
[](https://i.stack.imgur.com/wAl6r.png)
This way, we have applied hyperparametrization via kfold cross validation; not a full grid search but only with 3 pairs of hiperparams, but the idea would be the same, not depending on the fit_generator method but making yourself your k folds cross validation on the generated augmented dataset. We could also include other data augmentation strategies in this cross validation.
|
115599
|
1
|
115621
| null |
0
|
41
|
I’m new to the world of NLP and am looking for some guidance. I want to create a rule-based system that “grades” text in accordance to some set of criteria. For example, one criteria could be “The author mentions that he/she wants money”, another “The author mentions working toward promotion”.
My initial idea was to use some available, open-source NLP-model, such as en_core_web_lg from the spaCy library. With such a model I could look at all verbs in a text, and classify texts as adhering to certain criteria when they have an appropriate verb with appropriate subject and object. I’ve read somewhere that exploiting the linguistic structure of sentences is a bad/unreliable way to go about things. The problem is that I don’t have any substantive data so as to allow supervised learning.
How do one typically go about creating a rule-based system for such a task? Is there any name for the problem I want to solve, maybe “Multi-label classification”? Any resources you could point me to?
Help a noob out!
I greatly appreciate it.
|
Best approach for rule-based system in multilabel classification-problem?
|
CC BY-SA 4.0
| null |
2022-10-26T07:41:05.707
|
2022-10-26T16:52:14.273
| null | null |
142006
|
[
"nlp",
"multilabel-classification",
"spacy"
] |
I think the first step should be to define the task more formally: do you mean there should be a grade for every such criterion? Is the set of the criteria fixed or an input parameter?
From the point of view of people understanding what you want to do you should also mention how long is a text, and how many texts, how many criteria? And if possible add an actual example with its expected output.
>
The problem is that I don’t have any substantive data so as to allow supervised learning.
This is a serious issue not only because you can't do supervised learning, but also because you can't evaluate the system. Evaluation is a must: if you don't know how well your system works, you can't guarantee anything about its output... so it's pointless to use it.
You should probably manually annotate a sample yourself. It might feel boring but it's actually useful indirectly for you to design the task correctly, because it forces the annotator to think about the details.
>
I’ve read somewhere that exploiting the linguistic structure of sentences is a bad/unreliable way to go about things.
I don't know where you read that but this is wrong. This approach might not be optimal compared to modern ML methods, but it can be a perfectly decent method.
>
Is there any name for the problem I want to solve, maybe “Multi-label classification”?
“Multi-label classification” represents a broad type of tasks, and that's not even the correct type if you want to predict grades :) Grades are numerical values so yours would be a regression task, as opposed to classification.
Anyway this is not the name of a specific problem, and it's very unlikely that your problem has a standard name or method.
>
How do one typically go about creating a rule-based system for such a task?
That's the design part and it's not easy. You need to study examples, try to find the clues that a human would use to decide, then try to transform these clues into actionable rules.
To be honest, a rule-based system for some kind of highly semantic and interpretative task is unlikely to perform very well, but why not try.
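If you do go the rule-based route with spaCy, a rough starting point (purely illustrative: the lemma lists are assumptions you would refine against real examples, and `en_core_web_lg` must be installed) is to inspect verb/object pairs from the dependency parse:
```
import spacy

nlp = spacy.load("en_core_web_lg")

MONEY_VERBS = {"want", "need", "earn"}        # hypothetical trigger lemmas for one criterion
MONEY_OBJECTS = {"money", "salary", "raise"}  # hypothetical object lemmas

def mentions_wanting_money(text: str) -> bool:
    doc = nlp(text)
    for token in doc:
        if token.pos_ == "VERB" and token.lemma_ in MONEY_VERBS:
            objects = {child.lemma_ for child in token.children
                       if child.dep_ in ("dobj", "obj")}
            if objects & MONEY_OBJECTS:
                return True
    return False

print(mentions_wanting_money("I really want more money this year."))  # True
```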
|
approach for multi label text classification
|
So, you are asking about how to develop this system / model, which can classify text.
Yes, it is a great idea to instantiate a "baseline" or dummy model, which can be rule-based or randomly assigns a label to a certain piece of text.
From this dummy model, yes, you can then use an RNN/LSTM that maps multiple inputs (e.g. the words in a text) to a single output probability over classes as a more sophisticated model, and yes, you would then compare the validation and test accuracy, F1-score, etc. to see if that improvement to the model is warranted by the change in the model's ability to classify the texts.
|
115604
|
1
|
115611
| null |
1
|
52
|
I have multiple time series (about 200) of soil moisture behavior after saturation in different soil types. They are all the same length and nearly the same shape, differing only in their ultimate value and rate of soil moisture decline due to the effects of different soil properties.
What I need is an RNN model that can predict the time series with only one sequence as input. This RNN must be able to detect, at least internally, which of the 200 training sequences the input sequence corresponds to and then predict the next values. Is something like this possible? What I tried was to concatenate all the time series into one and I trained an RNN with 3 layers and different numbers of hidden units, but I didn't get good results. Should I increase the complexity of the model or try a new approach?
|
Is it possible to train a RNN using multiple time series?
|
CC BY-SA 4.0
| null |
2022-10-26T08:11:52.997
|
2022-10-26T13:59:42.500
|
2022-10-26T13:59:42.500
|
142010
|
142010
|
[
"machine-learning",
"deep-learning",
"neural-network",
"time-series",
"lstm"
] |
Yes, you can use a Multivariate RNN.
### Multivariate RNN
In this architecture, multiple sequential features (i.e., a number of sequences) are used as input to your recurrent layers.
Taking pytorch as a reference, you can see that the input of LSTM object is a tensor of shape
$$input = (L, H_{in})$$ where $L$ is the length of your sequences whereas $H_{in}$ is the number of input features* (i.e., a number of sequences). I attach below a couple of resources in case they are helpful:
- Keras implementation: https://machinelearningmastery.com/multivariate-time-series-forecasting-lstms-keras/
- Pytorch implementation: https://stackoverflow.com/a/56893248
Hope it helps!
---
* Input can also have $(L, N, H_{in})$ for $N$ batches.
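As a small PyTorch sketch of that input convention (sizes are illustrative, not tuned for the soil-moisture data):
```
import torch
import torch.nn as nn

batch, seq_len, n_features = 8, 50, 3   # illustrative sizes
lstm = nn.LSTM(input_size=n_features, hidden_size=32, num_layers=2, batch_first=True)
head = nn.Linear(32, 1)                 # predict the next soil-moisture value

x = torch.randn(batch, seq_len, n_features)   # (N, L, H_in) with batch_first=True
out, (h_n, c_n) = lstm(x)               # out: (N, L, hidden_size)
pred = head(out[:, -1, :])              # use the last time step for the forecast
print(pred.shape)                       # torch.Size([8, 1])
```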
|
training neural net with multiple sets of time-series data
|
Yes, this is a straightforward application for neural networks. In this case yk are the outputs of the last layer ("classifier"); xk is a feature vector and yk is what it gets classified into. For simplicity prepare your data so that N is the same for all. The problem you have is perhaps that in the case of time series you won't have enough data: you need (ideally) many 1000's of examples to train a network, which in this case means time series, not points. Look at the specialized literature on neural networks for time series prediction for ideas on network architecture.
Library: try Pylearn2 at [http://deeplearning.net/software/pylearn2/](http://deeplearning.net/software/pylearn2/) It's not the only good option but it should serve you well.
|
115607
|
1
|
115613
| null |
0
|
24
|
I am working on a task where I need to predict one of the following stances for a tweet: "In favor", "Against", "Neutral", "Not related", and "Yes if". I've been trying to use scikit-learn and transformers for classification, but both seem to produce quite poor results. The problem is that the categories are not usual categories, but rather the attitude of the writer toward a specific topic, which probably should be tackled differently. I think there should be something that works with stances, but I managed to find only sentiment analysis and topic modeling tutorials so far. Is there anything I can take a look at? Any links, models, and advice would be greatly appreciated!
|
What is the best approach to tackle stance prediction?
|
CC BY-SA 4.0
| null |
2022-10-26T09:23:58.437
|
2022-10-27T13:40:37.557
| null | null |
118594
|
[
"machine-learning",
"python",
"classification",
"nlp"
] |
Perhaps you could fit two (or three) models. The first one asks: is the question "related" or not? Then you can fit to 4 ordinal classifications. Finally, if you wanted to get even fancier, you could add a model to analyze your "Yes" responses and determine whether they were a "Yes if" or not.
This is likely not optimal but just an approach.
The obvious alternative is fit to 5 categorical levels with no sense of scale or order. Probably less efficient though.
|
What methods are there for predicting a signal?
|
- I don't think you can compress time series because there is a risk of losing valuable data. Instead, you can set the max size as the default size and pad smaller data with zeros on the left.
If the sampling rate is too high (e.g. milliseconds), do not hesitate to reduce it for all data (e.g. to seconds) by taking average values, as long as the prediction objectives allow it. Furthermore, the further ahead you want to predict, the worse the prediction generally is: that's why a lower sampling rate could be useful.
- RNN and LSTM are also good solutions, in addition to ARIMA. However, they are quite sensitive to noise: if your signals are quite noisy, try to reduce the noise to have good predictions.
Keep in mind that time series prediction with NN is not an exact science: you may have to apply many modifications and improvements on your data to reach very good results.
Here is a notebook that could be useful:
[https://github.com/ageron/handson-ml2/blob/master/15_processing_sequences_using_rnns_and_cnns.ipynb](https://github.com/ageron/handson-ml2/blob/master/15_processing_sequences_using_rnns_and_cnns.ipynb)
|
115631
|
1
|
115639
| null |
1
|
46
|
I have a problem. I have trained a model. And as you can see, there is a zigzag in the loss. In addition, the validation loss is increasing.
What does this mean if you only look at the training curve? Is there overfitting?
And the model does not generalise: accuracy on train and val is 0.84 and on the test set 0.1. Does this confirm the overfitting? And can overfitting come from the fact that I have trained too little? I only used two dense layers.
[](https://i.stack.imgur.com/8waoJ.png)
|
What does that mean if the loss looks like this?
|
CC BY-SA 4.0
| null |
2022-10-27T08:20:53.677
|
2022-10-27T11:58:23.447
|
2022-10-27T11:58:23.447
|
102852
|
130860
|
[
"deep-learning",
"neural-network",
"training",
"accuracy",
"loss"
] |
Please notice that your loss oscillates between 175 and zero. In which case I would look for potential problems in the code with respect to
- loss calculation
- batch size (increase)
- train/validation set split strategy (stratification wrt class)
In a more general sense:
- size of your network may be small
- activation function saturation (avoid saddle points - use relu)
- learning rate
- normalisation before training
I hope these are helpful as a starting point. I would like to also point to this resource wrt [training and fine tuning a deep learning model](https://stats.stackexchange.com/a/352037/110383).
Hope this helps!
|
Meaning of this notion in 0-1 loss?
|
Your understanding is correct.
This is known as the [indicator function](https://en.wikipedia.org/wiki/Indicator_function).
The indicator function of a subset $A$ of a set $X$ is a function
$$1_A(x)= \begin{cases}1, & x \in A \\ 0, & x \notin A \end{cases}$$
|
115644
|
1
|
115650
| null |
0
|
52
|
I found the distribution of my data with the "distfit" library for Python. But what now? The best distribution that describes my data is the "weibull" distribution. But I don't know what I can do with this knowledge. Can someone help?
|
What is the next after finding best distribution for my data?
|
CC BY-SA 4.0
| null |
2022-10-27T13:38:26.090
|
2022-10-27T14:29:54.027
| null | null |
127623
|
[
"statistics",
"data",
"data-cleaning",
"distribution"
] |
Imho there's probably nothing to do with this information, especially considering only the technical side of it.
It's highly subjective, but I think that what people mean when they say to "know the distribution of your data" is that it is useful to have an intuitive understanding of what your data consists of: main stats, characteristics, how much variance, imbalance, important patterns between variables, etc. This information, put together with the expert knowledge related to the specific task at hand, would normally help an experienced data scientist decide the design of the system (what kind of algorithm, preprocessing, etc.).
But it's not a recipe, you can't expect to follow deterministic steps like with a manual. It's more an analysis depending on the context, the time one wants to spend, etc. My advice for improving the performance of any system: start by investigating a sample of the errors it makes. See if these errors are preventable (they might not be), and if yes what prevents the system to find the correct answer.
|
Should i always transform data to normal distribution?
|
The short answer is no, you don't always need to transform your data to a normal distribution.
This depends a lot on the learning algorithm you're using. Additionally, you should treat continuous and categorical variables differently.
Continuous variables:
Tree-based models such as Decision Trees, Random Forest, Gradient Boosting, XGBoost, and others, are not affected by the distribution of your data.
However, algorithms like Linear Regression, Logistic Regression, KNN or Neural Nets can be highly affected by both the distribution and scale of your data. You will likely both get better results and finish training the model faster if you transform the data for these algorithms.
Categorical variables:
Independently of what algorithm you're using, you should one-hot-encode nominal categorical variables (this is the most common way, but there are other approaches such as Feature Hashing and Bin-counting that might work better if you have many categories). If they're ordinal, you should keep them as they are (given that they are integers, and if not, convert them to integers while maintaining the implied order).
Extra side note:
Also, make sure to not scale the entire dataset at once to prevent data leakage. Instead, scale your train set, then apply the same scaler on the test set, as explained in [this](https://stackoverflow.com/a/50567308/8162025) SO answer.
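A minimal sketch of that last point, with placeholder arrays `X_train`, `X_test`:
```
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # fit the statistics on the train set only
X_test_scaled = scaler.transform(X_test)        # reuse the same mean/std on the test set
```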
|
115713
|
1
|
115715
| null |
0
|
42
|
Let's assume we have an unbalanced dataset: 90% of the data belong to class A and 10% belong to class B. Furthermore, there are roughly as many points from class B inside class A's cluster. Someone with a lot of expertise told me that models will weight class A more in that area.
But as far as I know, models don't just automatically weight the classes. Am I wrong? How would different models behave, and why?
|
Unbalanced Classification: What happens when many points of the bigger class are inside of the smaller class' area?
|
CC BY-SA 4.0
| null |
2022-10-30T07:27:56.643
|
2022-10-30T10:42:53.440
| null | null |
141811
|
[
"machine-learning",
"machine-learning-model",
"data-science-model"
] |
If we take a simple classification model like KNN, there are ways to handle this kind of imbalance in the data, and these kinds of issues are commonly seen in real-world datasets.
In KNN we can use distance-based weights to help in predicting classes. Check out the `weights` parameter here: [KNN](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html). By default the model uses uniform weights, but if you know you have imbalance then use `weights='distance'`.
You can also see this in tree-based classifiers. Check the `class_weight` section here: [DT_Classifier](https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html). By default this is None, i.e. all classes have the same weight.
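A minimal sketch of both settings (the training arrays are placeholders):
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# closer neighbours count more than distant ones
knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_train, y_train)

# class weights inversely proportional to class frequencies, instead of the default None
dt = DecisionTreeClassifier(class_weight="balanced").fit(X_train, y_train)
```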
There are some other ways to deal with this issue,
- UpSample minority class
- Downsample majority class
- Use SMOTE (it creates new synthetic points based on existing ones); note that this increases model training time, and SMOTE should only be applied to the training data, never to the test data.
|
How to deal with class imbalance in a neural network?
|
This is for classification, and I am not sure if it is possible to extend it to reinforcement learning.
As you figured out, accuracy should not be used as a metric for a dataset as imbalanced as the one you have. Instead, you should look at a metric such as Area Under Curve (AUC). If you had infinite data, then you could just rebalance and remove some of the data from the class that has the most samples. However, in many cases data is sparse and you want to use as much of it as possible. Removing data can have a disastrous effect on many applications.
So what are good and convenient ways of handling this?
- Add weights to the loss function. One weight for class A and one for B. By increasing the magnitude of the loss for the B class the model should not get stuck in a suboptimal solution that just predicts one class.
- Use another objective(loss) function. F1-score can, for example, be implemented and used as an objective(loss) function.
What is great about these approaches is that they allow you to use all the data.
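For instance, a hedged sketch of the first option in Keras (the weight values and variable names are made up):
```
# penalise mistakes on the rare class B (label 1) ten times more than on class A (label 0)
class_weight = {0: 1.0, 1: 10.0}
model.fit(X_train, y_train, epochs=20, batch_size=64, class_weight=class_weight)
```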
|
115728
|
1
|
115729
| null |
0
|
179
|
I am trying to replace 2 missing NaN values in data using the SimpleImputer.
I load my data as follow;
```
import pandas as pd
import numpy as np
df = pd.read_csv('country-income.csv', header=None)
df.head(20)
```
[](https://i.stack.imgur.com/z8Nc7.png)
As we can see I have 2 NaN values which I am trying to replace with mean() values using SimpleImputer and I get the following error:
```
from sklearn.impute import SimpleImputer

imputer = SimpleImputer(missing_values=np.nan, strategy='mean', fill_value=None)
imputer.fit(df)
```
[](https://i.stack.imgur.com/Y4ZJW.png)
Because I have some categorical data (hence the error), I tried to take only the numeric columns so I tried this method:
```
missing_vars_numeric = [var for var in df.columns
if df[var].isnull().mean() > 0 and df[var].dtype != "0"]
missing_vars_numeric
Output: [1,2]
```
But when I use `missing_vars_numeric` in the imputer I get the following error:
```
imputer = SimpleImputer(missing_values=np.nan, strategy='mean', fill_value=None)
imputer.fit(missing_vars_numeric)
ValueError: Expected 2D array, got 1D array instead:
array=[1. 2.].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
```
[](https://i.stack.imgur.com/qtoF2.png)
I also tried using astype() and it did not work for me. What am I missing?
Sample of the DataFrame
```
df = pd.DataFrame({'0': ['Region', 'India','Brazil', 'USA','Brazil','USA','India','Brazil','India','USA','India'],
'1': ['Age', '49', '32', '35','43','45','40','NaN','53','55','42'],
'2': ['Income', 86400, 57600, 64800,73200,'NaN',69600,62400,94800,99600,80400],
'3': ['Online Shopper','No','Yes',' No',' No','Yes','Yes','No','Yes','No','Yes']},
index=['0', '1', '2', '3','4','5','6','7','8','9','10'])
```
|
Using Simple imputer replace NaN values with mean error
|
CC BY-SA 4.0
| null |
2022-10-30T15:58:16.447
|
2022-11-02T07:34:45.360
|
2022-11-01T14:49:57.367
|
142196
|
142196
|
[
"pandas",
"dataframe"
] |
## One-liner
```
df.fillna(df.select_dtypes(np.number).mean(), inplace=True)
```
- df.select_dtypes(np.number) selects only the numeric columns of the dataframe
- .mean() computes the mean of each column, returning a new dataframe
- df.fillna() accepts a dataframe (or other forms) to impute NaNs in named columns
- inplace just means it happens in the original dataframe itself, without making a copy
You can probably use this to accomplish many more variations of imputation - replacing `.mean()` with whatever you need.
---
# Update with df from OP
The example dataframe you provided has columns with mixtures of data types. Every column contains strings (e.g. `'49'` is a string). Only column `2` contains integer types.
When a pandas column contains strings, the column's dtype becomes `object`. This type is not part of `np.number`, meaning you cannot select any columns with the method in my `one-liner` solution above.
Note: OP originally showed a CSV being loaded, which pandas likely loaded into the correct data types. The example snippet for `df = pd.DataFrame(...)` gives nearly all values as strings. There is a difference then between the original question and the updated snippet.
## Solution
I will walk you through the steps required based on your example snippet.
In general, you need to ensure that all your column types are correct. E.g. the `Age` column should have type `int`, whereas `Region` is `str`. You need to convert your column types.
```
In [1]: import pandas as pd, numpy as np
In [2]: df = pd.DataFrame({'0': ['Region', 'India','Brazil', 'USA','Brazil','USA','India','Brazil','India','USA','India'],
...: '1': ['Age', '49', '32', '35','43','45','40','NaN','53','55','42'],
...: '2': ['Income', 86400, 57600, 64800,73200,'NaN',69600,62400,94800,99600,80400],
...: '3': ['Online Shopper','No','Yes',' No',' No','Yes','Yes','No','Yes','No','Yes']},
...: index=['0', '1', '2', '3','4','5','6','7','8','9','10'])
...:
In [3]: df
Out[3]:
0 1 2 3
0 Region Age Income Online Shopper
1 India 49 86400 No
2 Brazil 32 57600 Yes
3 USA 35 64800 No
4 Brazil 43 73200 No
5 USA 45 NaN Yes
6 India 40 69600 Yes
7 Brazil NaN 62400 No
8 India 53 94800 Yes
9 USA 55 99600 No
10 India 42 80400 Yes
In [4]: df.dtypes
Out[4]:
0 object
1 object
2 object
3 object
dtype: object
```
The column names are stored as the first row, so make them actual column names and remove that first row:
```
In [5]: column_names = df.iloc[0].tolist()
In [6]: df = df.iloc[1:]
In [7]: df.columns = column_names
```
The missing values are stored as the string "NaN": replace them with `numpy.nan`:
```
In [8]: df[df == "NaN"] = np.nan
```
Convert the types of all columns using a column-name-to-type mapping, so they are no longer all `object`. Note that `np.nan` is actually a `float`, so we can't use `int` for columns containing NaN:
```
In [9]: df = df.astype({"Region": str, "Age": float, "Income": float, "Online Shopper": bool})
In [10]: df
Out[10]:
Region Age Income Online Shopper
1 India 49.0 86400.0 True
2 Brazil 32.0 57600.0 True
3 USA 35.0 64800.0 True
4 Brazil 43.0 73200.0 True
5 USA 45.0 NaN True
6 India 40.0 69600.0 True
7 Brazil NaN 62400.0 True
8 India 53.0 94800.0 True
9 USA 55.0 99600.0 True
10 India 42.0 80400.0 True
In [11]: df.dtypes
Out[11]:
Region object
Age float64
Income float64
Online Shopper bool
dtype: object
```
The one-liner solution now works:
```
In [12]: imputed_df = df.fillna(df.select_dtypes(np.number).mean())
In [13]: imputed_df
Out[13]:
Region Age Income Online Shopper
1 India 49.000000 86400.000000 True
2 Brazil 32.000000 57600.000000 True
3 USA 35.000000 64800.000000 True
4 Brazil 43.000000 73200.000000 True
5 USA 45.000000 76533.333333 True
6 India 40.000000 69600.000000 True
7 Brazil 43.777778 62400.000000 True
8 India 53.000000 94800.000000 True
9 USA 55.000000 99600.000000 True
10 India 42.000000 80400.000000 True
```
You might want to convert the columns to new types afterwards, e.g. making `Age` of type `int`. I will leave this as an exercise for you. I think this shows you many of the tools you will need to work it out.
|
Substituting nan values with mean code
|
Here you are substituting the missing values (NaNs) with something; it can be the most frequent value, the median, the average (mean), etc. The `mean` strategy replaces the NaN values with the average of the non-NaN values.
```
for x in num_cols:
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
imp.fit(np.array(ds[x]).reshape(-1,1))
ds[x] = imp.transform(np.array(ds[x]).reshape(-1,1))
```
>
imp = SimpleImputer(missing_values=np.nan, strategy='mean')
For transforming missing values we use `SimpleImputer`. Here the missing values are `np.nan` and we want to use the `mean` strategy, so the missing values will be replaced by the average value of each feature.
>
imp.fit(np.array(ds[x]).reshape(-1,1))
`ds[x]` is a single column, so `np.array(ds[x])` is a 1D array of length $n$; `reshape(-1, 1)` converts it to shape $(n, 1)$. The -1 lets numpy infer the number of rows and the 1 means a single column, which is the 2D shape scikit-learn expects.
Then you fit your transformer (`imp`) to this data, `np.array(ds[x]).reshape(-1,1)`.
>
ds[x] = imp.transform(np.array(ds[x]).reshape(-1,1))
Here the transformed (imputed) data is assigned back to `ds[x]`.
|
115754
|
1
|
115755
| null |
0
|
26
|
I need to create random data using this lines
```
n_samples = 3000
X = np.concatenate((
np.random.normal((-2, -2), size=(n_samples, 2)),
np.random.normal((2, 2), size=(n_samples, 2))
))
```
but I don't understand the difference between the two `np.random.normal` calls. I get that the concatenation is used to merge two groups of random points to create 2 clusters, but why does one call use (-2, -2) and the other (2, 2)? And is the 2 in `size=(n_samples, 2)` there because we concatenate 2 groups of random data, or does it mean something else?
|
How can i get this way to create random data?
|
CC BY-SA 4.0
| null |
2022-10-31T13:40:53.933
|
2022-10-31T14:31:13.433
|
2022-10-31T14:31:13.433
|
75157
|
141023
|
[
"python",
"numpy",
"gaussian"
] |
Providing multiple values to either the `loc` or `scale` arguments can be used to generate multiple random distributions at once with different parameters. In the code you provided the values for the `loc` argument are the same, meaning that you could also just use the value `-2` instead of `(-2, -2)`. You can see this when fixing the seed and generating new numbers
```
import numpy as np
np.random.seed(0)
print(np.random.normal((-2, -2), size=(5,2)))
# [[-0.23594765 -1.59984279]
# [-1.02126202 0.2408932 ]
# [-0.13244201 -2.97727788]
# [-1.04991158 -2.15135721]
# [-2.10321885 -1.5894015 ]]
np.random.seed(0)
print(np.random.normal(-2, size=(5,2)))
# [[-0.23594765 -1.59984279]
# [-1.02126202 0.2408932 ]
# [-0.13244201 -2.97727788]
# [-1.04991158 -2.15135721]
# [-2.10321885 -1.5894015 ]]
```
The difference between the two lines is that one generates points from a normal (Gaussian) distribution with a mean of -2 and the other with a mean of 2; see also the `loc` keyword in [the documentation](https://numpy.org/doc/stable/reference/random/generated/numpy.random.normal.html).
|
Pull Random Numbers from my Data (Python)
|
If what you want is to generate random numbers with the same distribution as your cashflow numbers I recommend you using Python's [Fitter](https://pypi.org/project/fitter/) package
It is powerful and very simple to use.
You can in this way use it to find the distribution of your data and then generate random numbers with the same distribution.
From documentation:
```
from scipy import stats
data = stats.gamma.rvs(2, loc=1.5, scale=2, size=10000)
from fitter import Fitter
f = Fitter(data)
f.fit()
# may take some time since by default, all distributions are tried
# but you call manually provide a smaller set of distributions
f.summary()
```
Also useful resources might be found in [stackoverflow](https://stackoverflow.com/questions/6620471/fitting-empirical-distribution-to-theoretical-ones-with-scipy-python)
|
115759
|
1
|
115775
| null |
5
|
101
|
It is not clear to me what advantage data visualization provides in EDA. By advantage I mean: what decision will I make based on one visualization rather than another?
Could someone give me an example where data visualization makes me choose one algorithm over another?
E.g. from the book "Introduction to ML with Python":
Visualising datasets before fitting any models can be extremely useful. It allows us to see obvious patterns and relationships, and may suggest a sensible form of analysis. With multivariate data, finding the right kind of plot is not always simple, and many different approaches have been proposed.
[](https://i.stack.imgur.com/FzjNi.png)
How does whether I have seen this visualization or not change the way to proceed?
|
What advantages does Data Visualization have in EDA?
|
CC BY-SA 4.0
| null |
2022-10-31T16:03:45.320
|
2022-11-02T11:55:20.347
|
2022-11-02T11:55:20.347
|
79520
|
64726
|
[
"machine-learning",
"visualization",
"data-analysis"
] |
First, visualization is just an easy and intuitive way to understand underlying patterns in your data. Everything that you can achieve through this, can also be achieved through painstakingly printing different values and statistics.
I will just mention two simple examples of algorithms chosen because of patterns in the data. They are very simple, but they can be generalized.
- Regression
If you find out that the data is linear, Linear Regression can be a good choice of algorithm
- Classification
If the data are linearly separable, SVM is suitable
These are visualizations of the datapoints themselves, but other visualizations like histograms can help find underlying distributions too.
In addition, visualization can be useful in other parts of the process. For example, if you see a normal distribution, you can impute missing data using the mean value, while for a skewed distribution the median is more suitable.
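As a small illustration (on a synthetic dataset), two plots that would already hint at linear separability and at a skewed feature:
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs

X, y = make_blobs(n_samples=300, centers=2, random_state=0)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
axes[0].scatter(X[:, 0], X[:, 1], c=y)                 # roughly linearly separable -> SVM / logistic regression
axes[0].set_title("Class scatter")
axes[1].hist(np.random.lognormal(size=300), bins=30)   # skewed feature -> impute with the median
axes[1].set_title("Skewed feature")
plt.show()
```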
|
Purpose of visualizing high dimensional data?
|
I take Natural Language Processing as an example because that's the field I have the most experience in, so I encourage others to share their insights from other fields like Computer Vision, Biostatistics, time series, etc. I'm sure there are similar examples in those fields.
I agree that sometimes model visualizations can be meaningless, but I think the main purpose of visualizations of this kind is to help us check whether the model actually relates to human intuition or some other (non-computational) model. Additionally, Exploratory Data Analysis can be performed on the data.
Let's assume we have a word embedding model built from Wikipedia's corpus using [Gensim](https://radimrehurek.com/gensim/models/word2vec.html)
```
import gensim

model = gensim.models.Word2Vec(sentences, min_count=2)
```
We would then have a 100-dimensional vector for each word represented in that corpus that is present at least twice. So if we wanted to visualize these words, we would have to reduce them to 2 or 3 dimensions using the t-SNE algorithm. Here is where very interesting characteristics arise.
Take the example:
vector("king") + vector("man") - vector("woman") = vector("queen")

Here each direction encode certain semantic features. The same can be done in 3d
[](https://i.stack.imgur.com/ZcNDo.png)
(source: [tensorflow.org](https://www.tensorflow.org/versions/master/images/linear-relationships.png))
See how in this example past tense is located in a certain position respective to its participle. The same for gender. Same with countries and capitals.
In the word embedding world, older and more naive models didn't have this property.
See this Stanford lecture for more details.
[Simple Word Vector representations: word2vec, GloVe](https://www.youtube.com/watch?v=T8tQZChniMk)
They were limited to clustering similar words together without regard for semantics (gender or verb tense weren't encoded as directions). Unsurprisingly, models which encode semantics as directions in lower dimensions are more accurate. More importantly, they can be used to explore each data point in a more appropriate way.
In this particular case, I don't think t-SNE is used to aid classification per se; it's more of a sanity check for your model, and sometimes a way to find insight into the particular corpus you are using. As for the problem of the vectors not being in the original feature space anymore: Richard Socher explains in the lecture (link above) that the low-dimensional vectors share statistical distributions with their larger representations, as well as other statistical properties, which makes it plausible to visually analyse embedding vectors in lower dimensions.
Additional resources & Image Sources:
- A Word is Worth a Thousand Vectors
- Motivation Why Learn Word Embeddings
|
115836
|
1
|
115841
| null |
4
|
798
|
I have an imbalanced dataset and I want to train a binary classifier to model the dataset.
Here was my approach which resulted into (relatively) acceptable performance:
1- I made a random split to get train/test sets.
2- In the training set, I down-sampled the majority class to make my training set balanced. To do that, I used the `resample` method from `sklearn.utils` module.
3- I trained the model, and then evaluated the performance of the model on the `test` set (which is unseen and still imbalanced).
I got fairly acceptable results including `precision`, `recall`, `f1` score and `AUC`.
Afterwards, I wanted to try out something. Therefore, I flipped the labels in both training set and testing set (i.e. converting `1` to `0` and `0` to `1`).
Then I repeated the step 3 and trained the model again with flipped labels. This time, the performance of model dropped and I got much lower `precision` and `f1` score on test set.
Additional details:
The model was trained with `GridSearchCV` using a `LogisticRegression` estimator.
I then have two questions: is there anything wrong with my approach (i.e. the downsampling)?
And how come flipping the labels led to worse results?
I have a feeling that it could be due to the fact that my test set is still imbalanced. But more insight will be appreciated.
|
Flipping the labels in a binary classification gives different model and results
|
CC BY-SA 4.0
| null |
2022-11-03T14:38:02.120
|
2022-11-03T15:55:14.343
| null | null |
54584
|
[
"python",
"classification",
"scikit-learn",
"class-imbalance",
"imbalanced-data"
] |
First I'd like to say that you're asking the right questions, and doing an experiment like this is good way to understand how things work.
- Your approach is not wrong by itself and the performance difference is not due to downsampling. Actually resampling rarely works well, it's a very simplistic approach to handle class imbalance, but that's a different topic.
- The second question is more important, and it's about what precision and recall mean: these measures rely on which class is defined as the positive class, thus it is expected that flipping the label would change their value.
For example, let's assume a confusion matrix:
```
A B <- predicted as class
A 9 1
B 10 70
^
true class
```
Precision is the proportion of correct predictions (true positive, TP) among instances predicted as positive (TP+FP):
- if A = positive: 9/(9+10) = 0.47
- if B = positive: 70/(70+1) = 0.99
Recall would also be different. The logic of these measures is that the task is defined with a specific class as "main target", usually the minority class (A in this example). So flipping the labels is like changing the definition of the task, there's no reason that the performance has to be the same.
Note: accuracy would have the same value no matter the positive class, and this is why it's not recommended with an imbalanced dataset (it gives too much weight to the majority class).
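You can verify this directly with scikit-learn by switching `pos_label`; the labels below are constructed to match the confusion matrix above:
```
import numpy as np
from sklearn.metrics import precision_score, accuracy_score

y_true = np.array(["A"] * 10 + ["B"] * 80)
y_pred = np.array(["A"] * 9 + ["B"] * 1 + ["A"] * 10 + ["B"] * 70)

print(precision_score(y_true, y_pred, pos_label="A"))  # 9 / (9 + 10)  ~ 0.47
print(precision_score(y_true, y_pred, pos_label="B"))  # 70 / (70 + 1) ~ 0.99
print(accuracy_score(y_true, y_pred))                  # 79 / 90, same either way
```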
|
Flipping the labels in a classification problem
|
To begin with, I don't quite get why you want to flip them. In the binary case, you flip Negatives and Positives, so True Negatives become True Positives and FP/FN are swapped likewise. Hence you swap the specificity/true-negative-rate and sensitivity/recall values; overall accuracy stays the same, but precision, recall and therefore F1 generally do not.
|
115838
|
1
|
115842
| null |
0
|
982
|
I have a data set with multiple labels. I am trying to generate the ROC curves. Unfortunately, I cannot use the code I frequently used for binary classification. How should I modify the code in order to get the ROC curves in a multi-label scenario? The error message says,
>
multiclass format is not supported
The code that I use is:
```
from sklearn import metrics
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc
plt.figure()
models = [
{
'label': 'Logistic Regression',
#'model': LogisticRegression(),
'y_pred': predict_proba[:,1]
},
{
'label': 'SVM',
#'model': SVM(),
'y_pred': preds
},
{
'label': 'RandomForestClassifier',
#'model': RandomForestClassifier(),
'y_pred': Y_Pred_proba[:,1]
},
]
for m in models:
print('LABEL:', m['label'])
y_pred = m['y_pred']
fpr, tpr, thresholds = metrics.roc_curve(y_test, y_pred)
# fpr, tpr, thresholds = roc_curve(y_test, y_pred, pos_label= 'ovr')
auc = metrics.roc_auc_score(y_test, y_pred)
# Now, plot the computed values
plt.plot(fpr, tpr, label='%s ROC (area = %0.2f)' % (m['label'], auc))
# Custom settings for the plot
plt.plot([0, 1], [0, 1],'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('1-Specificity(False Positive Rate)')
plt.ylabel('Sensitivity(True Positive Rate)')
plt.title('Receiver Operating Characteristic')
plt.legend(loc="lower right")
plt.show()
```
|
How to get ROC curves in a multi-label scenario
|
CC BY-SA 4.0
| null |
2022-11-03T15:44:34.557
|
2022-11-03T16:03:54.477
| null | null |
142350
|
[
"machine-learning"
] |
`ROC` is a way to evaluate how well a classifier can separate one class distribution from another in a given dataset. For a multiclass setting this is by definition not possible. What you can do is either treat this as a "One vs Rest" scenario, where you evaluate the performance of your classifier in separating one class from all the others combined, repeating this for every class, or treat it as a "One vs One" scenario where you compare every possible combination of two classes.
You can find an example with illustrations here (not mine):
[ROC Curve - Multiclass.ipynb](https://github.com/vinyluis/Articles/blob/main/ROC%20Curve%20and%20ROC%20AUC/ROC%20Curve%20-%20Multiclass.ipynb)
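A minimal One-vs-Rest sketch with scikit-learn, assuming `y_test` holds the class labels and `y_score` is a classifier's `predict_proba` output (both are placeholders):
```
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_curve, auc

classes = np.unique(y_test)
y_test_bin = label_binarize(y_test, classes=classes)  # shape (n_samples, n_classes)

for i, cls in enumerate(classes):
    fpr, tpr, _ = roc_curve(y_test_bin[:, i], y_score[:, i])  # class i vs the rest
    plt.plot(fpr, tpr, label=f"class {cls} ROC (area = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], 'r--')
plt.xlabel('1-Specificity (False Positive Rate)')
plt.ylabel('Sensitivity (True Positive Rate)')
plt.legend(loc="lower right")
plt.show()
```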
|
Multiclass ROC Curve using DecisionTreeClassifier
|
The combination of `class_weight` and `indices_test` containing only 10 data points results in `Class label 2 not present`.
Since the iris dataset is perfectly balanced, there is no reason to specify `class_weight`. Additionally, scikit-learn has `train_test_split`, which (with `stratify=y`) makes a split that maintains equal proportions of every class in both training and testing.
|
115851
|
1
|
115885
| null |
1
|
41
|
I have 5 broad metagenomic "ecoregion" categories (just think lots of DNA at different nice locations) which become the training targets for their complete (and augmented) metagenomic data. Any standard model works fine, notably random forest, naive Bayes and SVM; the confusion matrix is okay and the ROC is fine. These are small data sets of 10E5 to 10E6.
The categories are very broad and most predictions (metagenomic data from other "ecoregions") will fall between these categories. ML, in contrast, will 'relocate' the prediction into 1 of the 5 categories. So if I have a "woodland" category and a "lake" category, a marsh will fall "in between" the trained classes, but ML will call it either a 'wood' or a 'lake'.
How do I attain that in-between classification status via ML?
|
ML for predictions that exist "in-between" classification targets
|
CC BY-SA 4.0
| null |
2022-11-03T21:11:07.903
|
2022-11-04T15:04:19.623
|
2022-11-04T15:04:19.623
|
67203
|
67203
|
[
"machine-learning",
"classification",
"machine-learning-model"
] |
The task should be reframed as regression or ordinal classification. Using the ecosystem metaphor, a regression target could be the amount of ground covered in water or an ordinal classification target could be sequenced categories of landscapes.
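As a hedged sketch of the regression reframing (the category-to-number mapping and variable names are illustrative only): map each training ecoregion onto a continuous scale, fit a regressor, and read off in-between predictions.
```
from sklearn.ensemble import RandomForestRegressor

# Made-up ordering of ecoregions along a "wetness" axis
wetness = {"desert": 0.0, "grassland": 0.25, "woodland": 0.5, "marshland": 0.75, "lake": 1.0}
y_train_numeric = [wetness[label] for label in y_train_labels]

reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X_train, y_train_numeric)

# A sample between "woodland" (0.5) and "lake" (1.0) can now come out around 0.75
print(reg.predict(X_new))
```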
|
Binary classification to predict various targets
|
If you create 19 models, each model will learn weights corresponding to predictors and target variable values.
For example, when you create one model by taking col1 as the target, the model will learn weights for predicting col1. When you take another column, let's say col2, as the target and reuse the previous model, it won't recognise the difference between col1 and col2 as targets, since your data is binary. The model will keep learning weights, but they won't be correct for any single column because they will be the result of learning all target columns at once.
You could have used multi-label classification if you wanted to predict different targets at the same time using the same predictors for each target. Since this is not the case, the only way to get 19 predictions is to create 19 different models, one for each column as the target.
|
115878
|
1
|
115912
| null |
2
|
30
|
OK, the best way to describe this is with an (admittedly simplified) example.
I want to predict the speed of drivers on a motorway and I have two input variables
- the nationality of the driver
- how heavy it is raining
Clearly, these 2 are independent of each other, so throwing this into a simple linear regression I get something like: speed = intercept + X1·Nationality + X2·Rain_In_Inches + Error
So I may infer from this that British drivers go 7mph slower than Turkish drivers and speed decreases by 2mph for every inch of rain - so far so good.
However, the effect of rain is applied across the whole population here; what I am trying to determine is how rain affects the speed of British drivers vs Turkish drivers. For example, I might expect that one group is hardly affected and the other is affected a lot.
Is there a neat way to do this without individually building a model for each category? The above is simple but I want to do it with lots of categories and more parameters
I feel like I'm missing something but cant determine what
Thanks
|
What is the best way to determine if there is variable interactivity between independent parameters in a prediction model
|
CC BY-SA 4.0
| null |
2022-11-04T12:53:53.683
|
2022-11-05T15:21:31.743
| null | null |
142386
|
[
"predictive-modeling",
"linear-regression",
"logistic-regression",
"hyperparameter-tuning"
] |
You can add an interaction term to the linear regression model. An interaction term models the effect that one feature has at different levels of another feature.
If nationality is one-hot encoded, you will have to add a separate interaction term for each level of nationality. For example:
$$ Speed = \beta_0 + \beta_1 Rain + \beta_2 British + \beta_3 Turkish + \beta_4 (Rain \times British) + \beta_5 (Rain \times Turkish) + Error $$
Most statistical software programs can automatically create the one-hot encoding and interaction terms.
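In Python, for example, a hedged sketch with `statsmodels` formulas (the column names are placeholders); `rain * C(nationality)` expands into the main effects plus the interaction terms:
```
import statsmodels.formula.api as smf

# 'rain * C(nationality)' = rain + C(nationality) + rain:C(nationality)
model = smf.ols("speed ~ rain * C(nationality)", data=df).fit()
print(model.summary())  # per-nationality rain slopes appear as interaction coefficients
```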
|
Could I use some elements of my target variable to predict it?
|
If you already have those data points before a company actually goes into bankruptcy, then you can use them in your model, since you would have access to that data when predicting in the future. However, if you would only know the data once the bankruptcy event happens (e.g. the date of bankruptcy), then you cannot use this variable in your model, since you would be leaking data (using data from the future that the model would not have access to when it is actually deployed and used).
|
115911
|
1
|
116376
| null |
0
|
230
|
I'm just curious: are there alternative techniques to word-to-vector representation, so that words/phrases/sentences are not represented as vectors but have a different form? Thanks.
|
Alternatives to word to vector embedding
|
CC BY-SA 4.0
| null |
2022-11-05T14:22:52.010
|
2022-11-22T18:10:52.530
| null | null |
136962
|
[
"nlp",
"word-embeddings",
"word2vec"
] |
In your question you talk about vector embeddings or "word 2 vector representation" (word2vec was the first software to train word embeddings). It's important to understand that not all vectors are embeddings:
- Embeddings are short vectors made of real numbers, they were invented around 2010. There are many different types of embeddings, i.e. methods to train the embeddings from a corpus: word2vec, Glove, Elmo, Fasttext, Bert...
- Before this, people were also using vectors representing a "bag of words": one-hot-encoding for a single word, frequency count or TFIDF for a sentence/document. These vectors are long and sparse, i.e. they usually contain a lot of zeros.
These are the most common word representation methods, but there are potentially other alternatives. For example, in [Wordnet](https://wordnet.princeton.edu/) the words are nodes in a graph and relations between words are represented as edges.
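To make the "long and sparse" point concrete, a small bag-of-words / TFIDF sketch with scikit-learn:
```
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

docs = ["the cat sat on the mat", "the dog chased the cat"]

counts = CountVectorizer().fit_transform(docs)  # raw frequency counts
tfidf = TfidfVectorizer().fit_transform(docs)   # TFIDF-weighted counts

print(counts.shape)      # one column per vocabulary word
print(counts.toarray())  # mostly zeros for larger vocabularies
print(tfidf.toarray())
```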
|
Output a word instead of a vector after word embedding?
|
[This answer](https://stackoverflow.com/questions/43103265/how-to-do-a-reverse-operation-to-tf-nn-embedding-lookup) describes how you go from a vector in the embedding space back to the the most similar class (e.g. word or character).
|
115931
|
1
|
115934
| null |
0
|
22
|
Let us say I have two machine learning models on different machines and one on the cloud. Comparing them using elapsed execution times does not make sense since they are powered by different hardware.
Since all models are, at their core, equations, why is there no standard method to calculate the complexity of these models?
|
Is there a hardware-independent standard for comparing ML models complexity?
|
CC BY-SA 4.0
| null |
2022-11-06T15:33:12.020
|
2022-11-06T19:20:48.887
|
2022-11-06T19:20:48.887
|
29169
|
128631
|
[
"machine-learning",
"neural-network"
] |
Yes, you can use FLOPs to count the floating point operations and understand the CPU cycles required for the algorithms you are building.
In python you can do this with a package/module called `pypapi`
[Here is a basic 'hello world' tutorial of pypapi](https://bnikolic.co.uk/blog/python/flops/2019/09/27/python-counting-events.html)
If you are working in PyTorch specifically, there is a valuable and simple tool, `flopth`, that shows you FLOPs for your convolutions. [You can read about its implementation here](https://github.com/vra/flopth)
|
Is there any way to explicitly measure the complexity of a Machine Learning Model in Python
|
I have not heard of any model-agnostic way to measure model complexity. There are several strategies, but they are model-dependent.
You can tackle the problem using different families of models.
- For linear models you can count the number of nonzero parameters it is using, i.e. the number of features used for the prediction.
- For decision tree you can count the maximum depth that the tree achieves.
- For Neural Networks you can count the number of parameters that your NN is optimizing.
- For ensemble methods (random forest, gradient boosting) you can use an aggregation of the different weak learners used in the model.
For the Python implementation there are several options depending on which model you want to measure. Some of them, as you will notice, are really easy to compute; for example:
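A rough sketch of a few of these counts, assuming already-fitted models named `lin_model`, `tree_model` and a PyTorch module `net` (all placeholders):
```
import numpy as np

# Linear model: number of non-zero coefficients
n_linear = np.count_nonzero(lin_model.coef_)

# Decision tree: maximum depth actually reached
n_tree = tree_model.get_depth()

# Neural network (PyTorch): total number of trainable parameters
n_nn = sum(p.numel() for p in net.parameters() if p.requires_grad)

print(n_linear, n_tree, n_nn)
```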
It's intuitively hard to compare complexity between different model families. What is more complex: a linear regression with 4 coefficients or a decision tree with max_depth=3?
On the topic of deep learning complexity, Hinton, Vinyals and Dean published a paper, [Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531.pdf), where they talk about simplifying the complexity of a neural network.
|
115952
|
1
|
115955
| null |
0
|
22
|
I have some geospatial data in lat/lon form accurate to 6th decimal place.
As shown in the picture below, there are some over-represented lines of points at specific latitudes which appear in the sample. In that example they are at fixed latitudes (which happen to be evenly spaced, but that is not relevant to the question).
In other cases though, we have observed similar over-represented lines at an arbitrary slant ie. not a fixed latitude.
Is there an algorithm to detect
a) lines of overrepresentation in sampled lat/lon data at a fixed latitude
or even better
b) a general algorithm to detect lines of overrepresentation at arbitrary slants in geospatial data
?
[](https://i.stack.imgur.com/FQjfp.png)
|
How to denoise overrepresented lines in sample of 2D (geospatial) data?
|
CC BY-SA 4.0
| null |
2022-11-07T05:41:46.557
|
2022-11-07T06:00:12.567
| null | null |
37172
|
[
"data-cleaning",
"geospatial",
"noise"
] |
You might take a look at the [Hough Transform](https://en.wikipedia.org/wiki/Hough_transform). It is usually applied to images, but it can easily be adapted for 2-dimensional point data. There are various packages, but a straightforward custom implementation would be enough for your purposes. The original Hough transform was used to detect straight lines, but it has since been improved and, with some effort, can be employed to detect other shapes as well.
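A rough sketch of the idea on raw lat/lon points: rasterise them into a 2D histogram and run the straight-line Hough transform from scikit-image (the bin count and peak thresholds would need tuning for your data; `lats`/`lons` are placeholder arrays):
```
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

# lats, lons: 1D arrays of point coordinates
img, _, _ = np.histogram2d(lats, lons, bins=500)
img = img > 0  # binary occupancy image

hspace, angles, dists = hough_line(img)
accum, line_angles, line_dists = hough_line_peaks(hspace, angles, dists)
# each (angle, dist) pair parametrises a candidate over-represented line
print(len(line_angles), "candidate lines found")
```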
|
Rendered Image Denoising
|
Your link is to a paid course :) In ray tracing, too few samples will generate something like the image at the top. [](https://i.stack.imgur.com/kT47n.png) In fact, the link with the picture answers your question: [https://chunky.llbit.se/path_tracing.html](https://chunky.llbit.se/path_tracing.html)
2) Ray tracing is hard... but not impossible; google for "python ray tracing module". But something that looks close is easy: [https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv](https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv) Although, on actual ray-traced images, the noise can vary because of slope and environment.
If you still want ray-traced noisy images, it is better to find tutorials for 3D modelling programs, e.g. search for "ray tracing in 3D Studio MAX tutorial".
|
115961
|
1
|
115990
| null |
1
|
1481
|
I am trying to build an MLP with Keras and an error appears. I do not have experience with neural networks, so this is difficult for me. When I run the code for the NN, after some time it says:
```
'Failed to convert a NumPy array to a Tensor (Unsupported object type float)
in Python'
```
The code I have, including the preprocess of the dataset, is the following:
```
import pandas as pd
from tensorflow.keras.utils import get_file
pd.set_option('display.max_columns', 6)
pd.set_option('display.max_rows', 5)
dfs = []
for i in range(1,5):
path = './UNSW-NB15_{}.csv'# There are 4 input csv files
dfs.append(pd.read_csv(path.format(i), header = None))
all_data = pd.concat(dfs).reset_index(drop=True) # Concat all to a single df
# This csv file contains names of all the features
df_col = pd.read_csv('./NUSW-NB15_features.csv', encoding='ISO-8859-1')
# Making column names lower case, removing spaces
df_col['Name'] = df_col['Name'].apply(lambda x: x.strip().replace(' ', '').lower())
# Renaming our dataframe with proper column names
all_data.columns = df_col['Name']
# display 5 rows
pd.set_option('display.max_columns', 48)
pd.set_option('display.max_rows', 21)
all_data
all_data['attack_cat'] = all_data['attack_cat'].str.strip()
all_data['attack_cat'] = all_data['attack_cat'].replace(['Backdoors'], 'Backdoor')
all_data.groupby('attack_cat')['attack_cat'].count()
all_data["attack_cat"] = all_data["attack_cat"].fillna('Normal')
all_data.groupby('attack_cat')['attack_cat'].count()
all_data.drop(all_data[all_data['is_ftp_login'] >= 2.0].index, inplace = True)
all_data.drop(['srcip', 'sport', 'dstip', 'dsport'],axis=1, inplace=True)
df = pd.concat([all_data,pd.get_dummies(all_data['proto'],prefix='proto')],axis=1)
df.drop('proto',axis=1, inplace=True)
df_2 = pd.concat([df,pd.get_dummies(df['state'],prefix='state')],axis=1)
df_2.drop('state',axis=1, inplace=True)
df_encoded = pd.concat([df_2,pd.get_dummies(df_2['service'],prefix='service')],axis=1)
df_encoded.drop('service',axis=1, inplace=True)
df_encoded['ct_flw_http_mthd'] = df_encoded['ct_flw_http_mthd'].fillna(0)
df_encoded['is_ftp_login'] = df_encoded['is_ftp_login'].fillna(0)
df = pd.DataFrame(df_encoded)
temp_cols=df_encoded.columns.tolist()
index=df.columns.get_loc("attack_cat")
new_cols=temp_cols[0:index] + temp_cols[index+1:] + temp_cols[index:index+1]
df=df_encoded[new_cols]
df_encoded = df.drop('label', axis=1)
x_columns = df_encoded.columns.drop('attack_cat')
x = df_encoded[x_columns].values
dummies = pd.get_dummies(df['attack_cat'])
products = dummies.columns
y = dummies.values
import numpy as np
import tensorflow.keras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from sklearn.model_selection import train_test_split
from tensorflow.keras.callbacks import EarlyStopping
from sklearn import metrics
x_train, x_test, y_train, y_test = train_test_split(
x, y, test_size=0.25, random_state=42)
model = Sequential()
model.add(Dense(10, input_dim= x.shape[1], activation= 'relu'))
model.add(Dense(9, activation= 'relu'))
model.add(Dense(9,activation= 'relu'))
model.add(Dense(y_train.shape[1],activation= 'softmax', kernel_initializer='normal'))
model.compile(loss= 'categorical_crossentropy', optimizer= 'adam', metrics= ['accuracy'])
monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
verbose=1, mode='auto', restore_best_weights=True)
model.fit(x_train,y_train,validation_data=(x_test,y_test),
callbacks=[monitor],verbose=2, epochs=1000)
pred = model.predict(x_test)
pred = np.argmax(pred,axis=1)
y_compare = np.argmax(y_test,axis=1)
score = metrics.accuracy_score(y_compare, pred)
print("Accuracy score: {}".format(score))
```
The dataset I'm using is UNSW-NB15 (2+ million rows).
The error appears after executing the last block of code (it begins at `import numpy as np`).
Thanks for any tip you can give me to solve the problem.
The error appearing after the update provided by Muhammad is the following:
```
ValueError Traceback (most recent call last)
Input In [13], in <cell line: 22>()
18 model.compile(loss= 'categorical_crossentropy', optimizer= 'adam', metrics= ['accuracy'])
19 monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
20 verbose=1, mode='auto', restore_best_weights=True)
---> 22 model.fit(tf.cast(x_train,dtype=tf.float32),y_train,validation_data=(x_test,y_test),
23 callbacks=[monitor],verbose=2, epochs=1000)
26 pred = model.predict(x_test)
27 pred = np.argmax(pred,axis=1)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\util\dispatch.py:206, in add_dispatch_support.<locals>.wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
209 # TypeError, when given unexpected types. So we need to catch both.
210 result = dispatch(wrapper, args, kwargs)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\ops\math_ops.py:988, in cast(x, dtype, name)
982 x = ops.IndexedSlices(values_cast, x.indices, x.dense_shape)
983 else:
984 # TODO(josh11b): If x is not already a Tensor, we could return
985 # ops.convert_to_tensor(x, dtype=dtype, ...) here, but that
986 # allows some conversions that cast() can't do, e.g. casting numbers to
987 # strings.
--> 988 x = ops.convert_to_tensor(x, name="x")
989 if x.dtype.base_dtype != base_type:
990 x = gen_math_ops.cast(x, base_type, name=name)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\profiler\trace.py:163, in trace_wrapper.<locals>.inner_wrapper.<locals>.wrapped(*args, **kwargs)
161 with Trace(trace_name, **trace_kwargs):
162 return func(*args, **kwargs)
--> 163 return func(*args, **kwargs)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\ops.py:1566, in convert_to_tensor(value, dtype, name, as_ref, preferred_dtype, dtype_hint, ctx, accepted_result_types)
1561 raise TypeError("convert_to_tensor did not convert to "
1562 "the preferred dtype: %s vs %s " %
1563 (ret.dtype.base_dtype, preferred_dtype.base_dtype))
1565 if ret is None:
-> 1566 ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
1568 if ret is NotImplemented:
1569 continue
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\tensor_conversion_registry.py:52, in _default_conversion_function(***failed resolving arguments***)
50 def _default_conversion_function(value, dtype, name, as_ref):
51 del as_ref # Unused.
---> 52 return constant_op.constant(value, dtype, name=name)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:271, in constant(value, dtype, shape, name)
174 @tf_export("constant", v1=[])
175 def constant(value, dtype=None, shape=None, name="Const"):
176 """Creates a constant tensor from a tensor-like object.
177
178 Note: All eager `tf.Tensor` values are immutable (in contrast to
(...)
269 ValueError: if called on a symbolic tensor.
270 """
--> 271 return _constant_impl(value, dtype, shape, name, verify_shape=False,
272 allow_broadcast=True)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:283, in _constant_impl(value, dtype, shape, name, verify_shape, allow_broadcast)
281 with trace.Trace("tf.constant"):
282 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
--> 283 return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
285 g = ops.get_default_graph()
286 tensor_value = attr_value_pb2.AttrValue()
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:308, in _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
306 def _constant_eager_impl(ctx, value, dtype, shape, verify_shape):
307 """Creates a constant on the current device."""
--> 308 t = convert_to_eager_tensor(value, ctx, dtype)
309 if shape is None:
310 return t
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\framework\constant_op.py:106, in convert_to_eager_tensor(value, ctx, dtype)
104 dtype = dtypes.as_dtype(dtype).as_datatype_enum
105 ctx.ensure_initialized()
--> 106 return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Failed to convert a NumPy array to a Tensor (Unsupported object type float).
```
Column types :
```
df_encoded.info(verbose=True)
<class 'pandas.core.frame.DataFrame'>
Int64Index: 2539861 entries, 0 to 2540046
Data columns (total 205 columns):
# Column Dtype
--- ------ -----
0 dur float64
1 sbytes int64
2 dbytes int64
3 sttl int64
4 dttl int64
5 sloss int64
6 dloss int64
7 sload float64
8 dload float64
9 spkts int64
10 dpkts int64
11 swin int64
12 dwin int64
13 stcpb int64
14 dtcpb int64
15 smeansz int64
16 dmeansz int64
17 trans_depth int64
18 res_bdy_len int64
19 sjit float64
20 djit float64
21 stime int64
22 ltime int64
23 sintpkt float64
24 dintpkt float64
25 tcprtt float64
26 synack float64
27 ackdat float64
28 is_sm_ips_ports int64
29 ct_state_ttl int64
30 ct_flw_http_mthd float64
31 is_ftp_login float64
32 ct_ftp_cmd object
33 ct_srv_src int64
34 ct_srv_dst int64
35 ct_dst_ltm int64
36 ct_src_ltm int64
37 ct_src_dport_ltm int64
38 ct_dst_sport_ltm int64
39 ct_dst_src_ltm int64
40 proto_3pc uint8
41 proto_a/n uint8
42 proto_aes-sp3-d uint8
43 proto_any uint8
44 proto_argus uint8
45 proto_aris uint8
46 proto_arp uint8
47 proto_ax.25 uint8
48 proto_bbn-rcc uint8
49 proto_bna uint8
50 proto_br-sat-mon uint8
51 proto_cbt uint8
52 proto_cftp uint8
53 proto_chaos uint8
54 proto_compaq-peer uint8
55 proto_cphb uint8
56 proto_cpnx uint8
57 proto_crtp uint8
58 proto_crudp uint8
59 proto_dcn uint8
60 proto_ddp uint8
61 proto_ddx uint8
62 proto_dgp uint8
63 proto_egp uint8
64 proto_eigrp uint8
65 proto_emcon uint8
66 proto_encap uint8
67 proto_esp uint8
68 proto_etherip uint8
69 proto_fc uint8
70 proto_fire uint8
71 proto_ggp uint8
72 proto_gmtp uint8
73 proto_gre uint8
74 proto_hmp uint8
75 proto_i-nlsp uint8
76 proto_iatp uint8
77 proto_ib uint8
78 proto_icmp uint8
79 proto_idpr uint8
80 proto_idpr-cmtp uint8
81 proto_idrp uint8
82 proto_ifmp uint8
83 proto_igmp uint8
84 proto_igp uint8
85 proto_il uint8
86 proto_ip uint8
87 proto_ipcomp uint8
88 proto_ipcv uint8
89 proto_ipip uint8
90 proto_iplt uint8
91 proto_ipnip uint8
92 proto_ippc uint8
93 proto_ipv6 uint8
94 proto_ipv6-frag uint8
95 proto_ipv6-no uint8
96 proto_ipv6-opts uint8
97 proto_ipv6-route uint8
98 proto_ipx-n-ip uint8
99 proto_irtp uint8
100 proto_isis uint8
101 proto_iso-ip uint8
102 proto_iso-tp4 uint8
103 proto_kryptolan uint8
104 proto_l2tp uint8
105 proto_larp uint8
106 proto_leaf-1 uint8
107 proto_leaf-2 uint8
108 proto_merit-inp uint8
109 proto_mfe-nsp uint8
110 proto_mhrp uint8
111 proto_micp uint8
112 proto_mobile uint8
113 proto_mtp uint8
114 proto_mux uint8
115 proto_narp uint8
116 proto_netblt uint8
117 proto_nsfnet-igp uint8
118 proto_nvp uint8
119 proto_ospf uint8
120 proto_pgm uint8
121 proto_pim uint8
122 proto_pipe uint8
123 proto_pnni uint8
124 proto_pri-enc uint8
125 proto_prm uint8
126 proto_ptp uint8
127 proto_pup uint8
128 proto_pvp uint8
129 proto_qnx uint8
130 proto_rdp uint8
131 proto_rsvp uint8
132 proto_rtp uint8
133 proto_rvd uint8
134 proto_sat-expak uint8
135 proto_sat-mon uint8
136 proto_sccopmce uint8
137 proto_scps uint8
138 proto_sctp uint8
139 proto_sdrp uint8
140 proto_secure-vmtp uint8
141 proto_sep uint8
142 proto_skip uint8
143 proto_sm uint8
144 proto_smp uint8
145 proto_snp uint8
146 proto_sprite-rpc uint8
147 proto_sps uint8
148 proto_srp uint8
149 proto_st2 uint8
150 proto_stp uint8
151 proto_sun-nd uint8
152 proto_swipe uint8
153 proto_tcf uint8
154 proto_tcp uint8
155 proto_tlsp uint8
156 proto_tp++ uint8
157 proto_trunk-1 uint8
158 proto_trunk-2 uint8
159 proto_ttp uint8
160 proto_udp uint8
161 proto_udt uint8
162 proto_unas uint8
163 proto_uti uint8
164 proto_vines uint8
165 proto_visa uint8
166 proto_vmtp uint8
167 proto_vrrp uint8
168 proto_wb-expak uint8
169 proto_wb-mon uint8
170 proto_wsn uint8
171 proto_xnet uint8
172 proto_xns-idp uint8
173 proto_xtp uint8
174 proto_zero uint8
175 state_ACC uint8
176 state_CLO uint8
177 state_CON uint8
178 state_ECO uint8
179 state_ECR uint8
180 state_FIN uint8
181 state_INT uint8
182 state_MAS uint8
183 state_PAR uint8
184 state_REQ uint8
185 state_RST uint8
186 state_TST uint8
187 state_TXD uint8
188 state_URH uint8
189 state_URN uint8
190 state_no uint8
191 service_- uint8
192 service_dhcp uint8
193 service_dns uint8
194 service_ftp uint8
195 service_ftp-data uint8
196 service_http uint8
197 service_irc uint8
198 service_pop3 uint8
199 service_radius uint8
200 service_smtp uint8
201 service_snmp uint8
202 service_ssh uint8
203 service_ssl uint8
204 attack_cat object
dtypes: float64(12), int64(27), object(2), uint8(164)
memory usage: 1.2+ GB
```
New error after removing ct_ftp_cmd :
```
TypeError Traceback (most recent call last)
Input In [17], in <cell line: 12>()
8 from sklearn import metrics
10 x_cast = tf.cast(x,dtype=tf.float32)
---> 12 x_train, x_test, y_train, y_test = train_test_split(
13 x_cast, y, test_size=0.25, random_state=42)
16 model = Sequential()
17 model.add(Dense(10, input_dim= x.shape[1], activation= 'relu'))
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\model_selection\_split.py:2443, in train_test_split(test_size, train_size, random_state, shuffle, stratify, *arrays)
2439 cv = CVClass(test_size=n_test, train_size=n_train, random_state=random_state)
2441 train, test = next(cv.split(X=arrays[0], y=stratify))
-> 2443 return list(
2444 chain.from_iterable(
2445 (_safe_indexing(a, train), _safe_indexing(a, test)) for a in arrays
2446 )
2447 )
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\model_selection\_split.py:2445, in <genexpr>(.0)
2439 cv = CVClass(test_size=n_test, train_size=n_train, random_state=random_state)
2441 train, test = next(cv.split(X=arrays[0], y=stratify))
2443 return list(
2444 chain.from_iterable(
-> 2445 (_safe_indexing(a, train), _safe_indexing(a, test)) for a in arrays
2446 )
2447 )
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\utils\__init__.py:378, in _safe_indexing(X, indices, axis)
376 return _pandas_indexing(X, indices, indices_dtype, axis=axis)
377 elif hasattr(X, "shape"):
--> 378 return _array_indexing(X, indices, indices_dtype, axis=axis)
379 else:
380 return _list_indexing(X, indices, indices_dtype)
File ~\miniconda3\envs\pruebas\lib\site-packages\sklearn\utils\__init__.py:202, in _array_indexing(array, key, key_dtype, axis)
200 if isinstance(key, tuple):
201 key = list(key)
--> 202 return array[key] if axis == 0 else array[:, key]
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\util\dispatch.py:206, in add_dispatch_support.<locals>.wrapper(*args, **kwargs)
204 """Call target, and fall back on dispatchers if there is a TypeError."""
205 try:
--> 206 return target(*args, **kwargs)
207 except (TypeError, ValueError):
208 # Note: convert_to_eager_tensor currently raises a ValueError, not a
209 # TypeError, when given unexpected types. So we need to catch both.
210 result = dispatch(wrapper, args, kwargs)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\ops\array_ops.py:1014, in _slice_helper(tensor, slice_spec, var)
1012 new_axis_mask |= (1 << index)
1013 else:
-> 1014 _check_index(s)
1015 begin.append(s)
1016 end.append(s + 1)
File ~\miniconda3\envs\pruebas\lib\site-packages\tensorflow\python\ops\array_ops.py:888, in _check_index(idx)
883 dtype = getattr(idx, "dtype", None)
884 if (dtype is None or dtypes.as_dtype(dtype) not in _SUPPORTED_SLICE_DTYPES or
885 idx.shape and len(idx.shape) == 1):
886 # TODO(slebedev): IndexError seems more appropriate here, but it
887 # will break `_slice_helper` contract.
--> 888 raise TypeError(_SLICE_TYPE_ERROR + ", got {!r}".format(idx))
TypeError: Only integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got array([ 214948, 2349007, 452929, ..., 2356330, 2229084, 2219110])
```
|
Failed to convert a NumPy array to a Tensor (Unsupported object type float) in Python
|
CC BY-SA 4.0
| null |
2022-11-07T14:38:49.383
|
2022-11-08T11:42:05.287
|
2022-11-08T11:19:24.693
|
142495
|
142495
|
[
"keras",
"tensorflow",
"mlp"
] |
Add `tf.keras.regularizers.l1(0.1)` to your Dense layers. This may increase the number of epochs you need, so try to run it in a Colab setup on a GPU.
|
Failed to convert a NumPy array to a Tensor
|
It looks like the model is expecting float input. Try converting to float using `astype`:
`X = np.asarray(X).astype(np.float32)`
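For the dataset in the question specifically, a hedged sketch would be to make sure no `object` columns remain before casting; `pd.to_numeric` with `errors="coerce"` is one way to force the leftover `ct_ftp_cmd` column to numeric (column names follow the question):
```
import numpy as np
import pandas as pd

# convert the leftover object column to numeric (or drop it) before building x
df_encoded["ct_ftp_cmd"] = pd.to_numeric(df_encoded["ct_ftp_cmd"], errors="coerce").fillna(0)

# now the feature matrix is a purely numeric float array Keras can consume
x = df_encoded[x_columns].values.astype(np.float32)
```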
|
116021
|
1
|
116024
| null |
5
|
346
|
Is applying dropout equivalent to zeroing the output of random neurons in each mini-batch iteration and leaving the rest of the forward and backward steps in back-propagation unchanged? I'm implementing the network from scratch in `numpy`.
|
In neural networks, is applying dropout the same as zeroing random neurons?
|
CC BY-SA 4.0
| null |
2022-11-09T10:06:47.627
|
2023-01-30T12:37:21.860
|
2023-01-30T12:37:21.860
|
102852
|
8237
|
[
"deep-learning",
"neural-network",
"dropout"
] |
Indeed. To be precise, the dropout operation will randomly zero some of the input tensor elements with probability $p$, and furthermore the rest of the non-dropped out outputs are scaled by a factor of $\frac{1}{1-p}$ during training.
For example, see how elements of each tensor in the input (top tensor in output) are zeroed in the output tensor (bottom tensor in output) using pytorch.
```
import torch
import torch.nn as nn

m = nn.Dropout(p=0.5)
input = torch.randn(3, 4)
output = m(input)
print(input, '\n', output)
>>> tensor([[-0.9698, -0.9397, 1.0711, -1.4557],
>>> [-0.0249, -0.9614, -0.7848, -0.8345],
>>> [ 0.9420, 0.6565, 0.4437, -0.2312]])
>>> tensor([[-0.0000, -0.0000, 2.1423, -0.0000],
>>> [-0.0000, -0.0000, -1.5695, -1.6690],
>>> [ 0.0000, 0.0000, 0.0000, -0.0000]])
```
EDIT: please note the post has been updated to reflect Todd Sewell's addition in the comments.
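Since the question mentions a from-scratch numpy implementation, here is a minimal sketch of inverted dropout applied to a layer's activation matrix `a` during the forward pass (the same mask must be reused when backpropagating through that layer); this is a sketch, not a reference implementation:
```
import numpy as np

def dropout_forward(a, p=0.5, training=True):
    """Inverted dropout: zero activations with probability p, scale the rest by 1/(1-p)."""
    if not training or p == 0.0:
        return a, None
    mask = (np.random.rand(*a.shape) >= p) / (1.0 - p)
    return a * mask, mask  # in the backward pass: da_prev = da * mask
```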
|
How does dropout work during testing in neural network?
|
During training, a fraction p of the neuron activations (usually p=0.5, so 50%) is dropped. Doing this at the testing stage is not our goal (the goal is to achieve better generalization). On the other hand, keeping all activations will lead to input that is unexpected for the network; more precisely, input activations to the following layer that are too high (50% higher).
[](https://i.stack.imgur.com/kUc8r.jpg)
Consider the neurons at the output layer. During training, each neuron usually gets activations from only two neurons of the hidden layer (while being connected to four), due to dropout. Now, imagine we finished the training and remove dropout. The activations of the output neurons will now be computed based on four values from the hidden layer. This is likely to put the output neurons in an unusual regime, so they will produce absolute values that are too large, i.e. they become overexcited.
To avoid this, the trick is to multiply the input connections' weights of the last layer by 1-p (so, by 0.5). Alternatively, one can multiply the outputs of the hidden layer by 1-p, which is basically the same.
|
116051
|
1
|
116068
| null |
1
|
14
|
I am working on a classification model using one of the following three algorithms: RandomForestClassifier, a TensorFlow model and a LogisticRegression model.
The data set I am working with has a feature that is represented by a single word that uses ASCII characters (may or may not be a valid word in any language). I don't see any advantage in treating this column as categorical data since `number of unique words/total number of rows` is very close to 1, i.e., almost every word is unique.
Is there any obvious way to use this column to improve the predictive capabilities of the resulting classification model?
The data I am working with are player IDs that are strings and are meaningless in any language. But would the answer to the above question change if I were working with single English words?
|
Predictive value of short text fields
|
CC BY-SA 4.0
| null |
2022-11-10T02:59:28.987
|
2022-11-10T16:01:43.607
| null | null |
140746
|
[
"classification",
"training",
"text"
] |
In general there is no reason to include a meaningless id as a feature; it has no semantic value.
At the semantic level, the question is whether knowing this information provides any help with knowing the target label. In other words, would a human expert be able to use this information? If not it's very unlikely to be relevant.
At the technical level, you can measure the amount of information brought by this variable about the target, for example with conditional entropy.
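A quick sketch of that last measurement with scikit-learn (the dataframe and column names are placeholders); note that with almost-unique ids the raw estimate is inflated, so the chance-adjusted version is more honest:
```
from sklearn.metrics import mutual_info_score, adjusted_mutual_info_score

# Empirical mutual information between the id-like column and the target label
print(mutual_info_score(df["player_id"], df["target"]))
print(adjusted_mutual_info_score(df["player_id"], df["target"]))  # corrected for chance
```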
|
What is considered short and long text in NLP (document similarity)
|
As Erwan said in the comments, it depends. In my experience, it depends specifically on two things:
Tokenization method: The length of a document in number of tokens will vary considerably depending on how you split it up. Splitting your text into individual characters will result in a longer document than splitting it into sub-word units (e.g. WordPiece), which will still be longer than splitting on white space.
Model: Vanishing gradients aside, an RNN doesn't care how long the input text is, it will just keep chugging along. Transformers, however, are limited. BERT can realistically handle sequences of up to 512 WordPiece units, while the LongFormer claims to handle sequences of up to 32k units (given sufficient compute resources). Thus your documents of 10 - 600 tokens would be long for BERT but short for the LongFormer.
Whether you should treat documents of length 10 differently from those of length 600 is not something I can answer without knowing the details of your specific task. Intuitively, I doubt a very short document would ever be very similar to a much longer one, simply because it likely contains less content.
|
116059
|
1
|
116066
| null |
0
|
54
|
I am relatively new to neural networks and AI, and I have a question regarding the training method in such networks. In particular spiking neural networks (SNNs) are the type we are working with.
I am confused with the best way to train spiking neural networks when high accuracy is the most desired performance metric I am working towards.
For context, we are doing supervised learning with a SNN as an anomaly detector to classify various input data samples, inputted as spike trains, into 2 classes: Healthy and Unhealthy. Our training data has one healthy input sample that we want the SNN to recognise as healthy, and we make up random unhealthy input samples that we want the SNN to recognise as unhealthy. This leads to my question:
How should you train an SNN? Take an example where you have a training dataset with 100 samples, say 50% healthy and 50% unhealthy: how should this network be trained in terms of the ratio of healthy to unhealthy training samples?
Do you need more than one epoch, or more iterations?
Should you leave some training samples unshown to the SNN for testing?
And as I only have one healthy sample, will this work?
|
Advice on How to Train Neural Networks
|
CC BY-SA 4.0
| null |
2022-11-10T09:58:28.047
|
2022-11-10T15:38:38.573
| null | null |
142630
|
[
"neural-network",
"supervised-learning"
] |
There are many ways to train SNNs. This publication explains a few of them:
[https://arxiv.org/pdf/2109.12894.pdf](https://arxiv.org/pdf/2109.12894.pdf)
However, we can start with some useful tips.
SNNs depend heavily on a variable threshold (set according to maximum values), the learning rate, and the number of spikes per sample (which impacts both weight training and prediction). You will want to run several trials to find the right parameter values, and the right number of iterations and checks, before reaching a good result.
In addition to that, 150 samples could be enough as long as they cover most cases. I don't know the data, so I can only speak in general terms.
Finally, weight initialization also plays an important role: testing several weight initializations could be necessary to reach good results.
Here are some codes that could be helpful:
[https://github.com/fangwei123456/spikingjelly](https://github.com/fangwei123456/spikingjelly)
[https://github.com/Shikhargupta/Spiking-Neural-Network](https://github.com/Shikhargupta/Spiking-Neural-Network)
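To make the role of the threshold and the spike count more concrete, here is a minimal leaky integrate-and-fire simulation in plain NumPy (all constants are arbitrary illustration values, not recommendations):
```
import numpy as np

rng = np.random.default_rng(1)
T = 200                                     # number of time steps
tau, threshold, v_reset = 20.0, 1.0, 0.0    # leak time constant, firing threshold, reset value

input_current = rng.uniform(0.0, 0.15, size=T)   # toy input current / rate-coded spike train
v, spikes = 0.0, []
for t in range(T):
    v += (-v / tau) + input_current[t]   # leaky integration of the input
    if v >= threshold:                   # membrane potential crosses the threshold
        spikes.append(t)
        v = v_reset                      # reset after emitting a spike

# The number of spikes per sample is the kind of quantity a healthy/unhealthy
# classifier would be trained on, and it changes strongly with the threshold.
print(len(spikes), "spikes emitted")
```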
|
How Do I Learn Neural Networks?
|
I have a Master's in Computer Science and my thesis was about time-series prediction using Neural Networks.
The book [Hands on machine learning with Scikit and Tensorflow](https://rads.stackoverflow.com/amzn/click/1491962291) was extremely helpful from a practical point of view. It lays things out very clearly, without much theory or math. I strongly recommend it.
On the other hand, the [book](https://www.deeplearningbook.org/) by Ian Goodfellow is also a must (kind of the bible of DL). There you'll find the theoretical explanations, also it will leave you much much more knowledgeable with regards to deep learning and the humble beginning of the field till now.
Another, as others have suggested, is of course [Deep Learning with Python](https://rads.stackoverflow.com/amzn/click/1617294438) by Chollet. I thoroughly enjoyed reading this book: it is very well written and, again, it teaches you tricks and concepts that you would hardly grasp from online tutorials and courses.
Furthermore, I see you are familiar with Matlab, so maybe you have taken some stats/probability classes; otherwise, all of this may overwhelm you a bit.
|
116090
|
1
|
116105
| null |
0
|
55
|
I created an ML model to classify five IoT signals (say A, B, C, D, and E) I get in CSV files monthly. Each signal has a value in the sampled timestamps.
My questions (doubts) are:
- Do I have to preprocess new data in production on the same (in this example, daily) timestamp, i.e., with the same number of values (features) per time-series sample as during the model's training? I am pretty sure that is true, but I wonder if there is anything specific to time series.
- Since my data are normalized and standardized, what length of time series would you suggest, given that the length matters for standardizing the input data in the production environment?
During training, I split the values by daily timestamp (say 5000 values for each signal per day), so my time series are on a daily basis. I have finished training, and the results on the test dataset and with cross-validation are acceptable for production. However, I would like to avoid giving wrong directions to the data-acquisition team.
|
Time-series classification in a production environment - Doubts
|
CC BY-SA 4.0
| null |
2022-11-11T14:10:24.350
|
2022-11-16T17:13:20.703
| null | null |
142686
|
[
"machine-learning",
"deep-learning",
"classification",
"time-series"
] |
Applying time series to IoT devices can be quite complex because you have to deal with model constraints (in general, a model can't process too much data), business constraints (what do you want to predict and with which accuracy), and device constraints (sensors may have different calibrations and the components are not 100% identical).
So the first step would be to define a time window of between 50 and 200 steps, with a clear start and end, ideally one that corresponds to a cycle.
I recommend starting with simple business objectives because you already have a lot of complexity due to the model and the devices. The same applies to the devices themselves: starting by studying one device can be more efficient for understanding the main behaviors, then adding more devices progressively.
Then you have to choose the right model. Random Forest is quite universal and could take into account several variables.
[https://pyts.readthedocs.io/en/latest/generated/pyts.classification.TimeSeriesForest.html](https://pyts.readthedocs.io/en/latest/generated/pyts.classification.TimeSeriesForest.html)
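For instance, a minimal usage sketch with the class linked above (the import path follows the linked pyts documentation; the data shapes and labels below are toy placeholders, where a real dataset would have one row per daily window):
```
import numpy as np
from pyts.classification import TimeSeriesForest

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 150))       # 60 windows of 150 time steps each
y = rng.integers(0, 2, size=60)      # toy binary labels (e.g. normal / anomalous)

clf = TimeSeriesForest(random_state=0)
clf.fit(X[:40], y[:40])              # train on the first 40 windows
print(clf.score(X[40:], y[40:]))     # evaluate on the remaining 20
```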
LSTMs are great at learning patterns, but they are quite sensitive to noise. You may have to know your devices very well to smooth their signals correctly.
[https://www.analyticsvidhya.com/blog/2019/01/introduction-time-series-classification/](https://www.analyticsvidhya.com/blog/2019/01/introduction-time-series-classification/)
[https://www.kaggle.com/code/meaninglesslives/simple-neural-net-for-time-series-classification](https://www.kaggle.com/code/meaninglesslives/simple-neural-net-for-time-series-classification)
Sktime could be interesting:
[https://www.sktime.org/en/v0.9.0/examples/02_classification_univariate.html](https://www.sktime.org/en/v0.9.0/examples/02_classification_univariate.html)
Note that if the classification rules apply to any IoT device, it is not a multivariate case but rather pattern recognition applicable to any similar device. However, be aware that the devices should be comparable enough to make good classification possible (which may require data normalization and perhaps noise reduction).
|
How does time-series classification work?
|
To be clear, here, time-series classification refers to forecasting discrete values. In another context, time-series classification could refer to predicting a single class for the entire time-series (i.e. heart disease vs healthy heart). Thanks @mloning for pointing this out.
When it comes to forecasting discrete outputs, models are trained to predict the next value based on the previous ones, which means that
- The input is the historical data up to timestamp t (in your scenario, the data up to week = 201904)
- The output is the value at timestamp t + 1 (y when week = 201905)
If you want to predict more than 1 value into the future, you should perform predictions in a recurrent way, i.e.:
- use data up to t to predict t + 1
- add your prediction to your data
- use data up to t + 1 (where t + 1 is your own prediction) to predict t + 2
- and so on
How far into the future you want to look is called the horizon. And you are free to use as big of a horizon as you like. Of course, since every new step into the future is based on guesses and not actual data, it is expected that the further you look, the worse your predictions will become. Which makes perfect sense. That's why it's easier to predict what the weather will be like tomorrow than in 7 days.
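As a rough illustration of that recurrent prediction loop (the lag-feature setup, toy series and model choice are arbitrary; any regressor with fit/predict works the same way):
```
import numpy as np
from sklearn.linear_model import LinearRegression

series = np.sin(np.arange(100) / 5.0)            # toy time series
n_lags, horizon = 5, 7

# Build (lag features -> next value) training pairs
X = np.array([series[i:i + n_lags] for i in range(len(series) - n_lags)])
y = series[n_lags:]
model = LinearRegression().fit(X, y)

# Recursive forecast: feed each prediction back in as the newest "observation"
history = list(series[-n_lags:])
forecast = []
for _ in range(horizon):
    next_val = model.predict(np.array(history[-n_lags:]).reshape(1, -1))[0]
    forecast.append(next_val)
    history.append(next_val)
print(forecast)
```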
Splitting data into training and testing is not very different than for a normal classification problem. The only constraint is that your test data should only contain data "in the future" compared to the training data. Here is a good blog about it: [https://towardsdatascience.com/time-based-cross-validation-d259b13d42b8](https://towardsdatascience.com/time-based-cross-validation-d259b13d42b8)
---
- should I train the model for each time series independently?
Short answer: try both approaches and see which one works best
Long answer:
All of this depends on whether you believe that each time series represents a different "process" or "pattern". It could be that a single model makes good predictions for either time series. It could also be that the time series represent different processes, and therefore it might be wiser to train a different model for each.
Think about it like a weather forecast. I could have time-series data for a bunch of cities. I could train a model specific to each city, but it might also help to train a model using data from all cities so that it can capture generic weather patterns. This can help if all of a sudden, I need to predict the weather for a new city I didn't know about. Since I didn't know about it, I don't have a model specifically for it, so I would need to reuse one, which ideally should be general enough.
|
116101
|
1
|
116111
| null |
0
|
21
|
If I have text data where the length of documents greatly varies and I'd like to use it for training where I use batching, there is a great chance that long strings will be mixed with short strings and the average time to process each batch will increase because of padding within the batches.
I imagine sorting documents naively by length would create a bias of some sort, since long documents and short ones would each tend to be similar to each other.
Are there any methods that have been tried that can help reduce training time in this case without sacrificing model performance?
|
Ordering training text data by length
|
CC BY-SA 4.0
| null |
2022-11-12T04:34:21.167
|
2022-11-12T11:45:50.537
| null | null |
142701
|
[
"machine-learning",
"nlp",
"bert",
"text-classification",
"performance"
] |
What you are referring to is called "bucketing". It consists of creating batches of sequences with similar length, to minimize the needed padding.
In tensorflow, you can do it with [tf.data.Dataset.bucket_by_sequence_length](https://www.tensorflow.org/api_docs/python/tf/data/Dataset#bucket_by_sequence_length). Take into account that it previously lived in different python packages (`tf.data.experimental.bucket_by_sequence_length`, `tf.contrib.data.bucket_by_sequence_length`), so examples online may contain the outdated names.
To see some usage examples, you can check [this jupyter notebook](https://github.com/wcarvalho/jupyter_notebooks/blob/ebe762436e2eea1dff34bbd034898b64e4465fe4/tf.bucket_by_sequence_length/bucketing%20practice.ipynb), or [other answers](https://stackoverflow.com/a/50608469/674487) [in stackoverflow](https://stackoverflow.com/a/54129446/674487), or [this tutorial](https://medium.com/analytics-vidhya/tutorial-on-bucket-by-sequence-length-api-for-efficiently-batching-nlp-data-while-training-20d8ef5219d7).
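For reference, a minimal sketch of the TensorFlow API mentioned above (the toy sequences, bucket boundaries and batch sizes are arbitrary):
```
import tensorflow as tf

# Toy token sequences of different lengths
sequences = [[1, 2, 3], [4, 5], [6, 7, 8, 9, 10], [11], [12, 13, 14, 15]]
ds = tf.data.Dataset.from_generator(
    lambda: iter(sequences),
    output_signature=tf.TensorSpec(shape=[None], dtype=tf.int32))

# Batch sequences of similar length together so padding stays within a bucket
bucketed = ds.bucket_by_sequence_length(
    element_length_func=lambda seq: tf.shape(seq)[0],
    bucket_boundaries=[3, 5],          # buckets: len < 3, 3 <= len < 5, len >= 5
    bucket_batch_sizes=[2, 2, 2])      # one batch size per bucket (len(boundaries) + 1)

for batch in bucketed:
    print(batch.numpy())               # each batch is padded only to its bucket's max length
```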
|
How to order the data with respect to data type
|
Let your data frame be `df`. First get the numeric columns:
```
num_col = df.select_dtypes('number').columns
```
Then get the remaining columns.
```
non_num_col = set(df.columns) - set(df.select_dtypes('number').columns)
```
Merge as required.
```
df = pd.concat([df[num_col], df[list(non_num_col)]], axis=1)
```
The columns are now in the desired sequence.
|
116133
|
1
|
116154
| null |
0
|
27
|
I have performed a clustering with geospatial data with the dbscan algorithm. You can see the project and the code in more detail here: [https://notebook.community/gboeing/urban-data-science/15-Spatial-Cluster-Analysis/cluster-analysis](https://notebook.community/gboeing/urban-data-science/15-Spatial-Cluster-Analysis/cluster-analysis)
I would like to calculate the following in a dataframe:
- the area of each cluster. It can be calculated as: (lat_max - lat_min) * (lon_max - lon_min)
- number of points belonging to each cluster
At the moment I have added to the original dataset a column with the cluster to which the coordinate belongs.
```
for n in range(num_clusters):
df['cluster'] = pd.Series(cluster_labels, index=df.index)
```
Any idea of simple code that would allow me to do this?
|
How to perform some calculations after dbscan clustering
|
CC BY-SA 4.0
| null |
2022-11-13T10:58:43.273
|
2022-11-14T09:14:53.643
| null | null |
142221
|
[
"python",
"dbscan"
] |
A simple solution is to apply Voronoi Diagrams to the DB Scan clusters:
[https://www.arianarab.com/post/unsupervised-point-pattern-clustering-using-voronoi-tessellation-and-density-based-scan-algorithms](https://www.arianarab.com/post/unsupervised-point-pattern-clustering-using-voronoi-tessellation-and-density-based-scan-algorithms)
You can get the polygon coordinates and calculate the polygon area like this:
```
import numpy as np

# Example vertices: points along a quarter-circle arc (replace with your polygon's coordinates)
x = np.arange(0, 1, 0.001)
y = np.sqrt(1 - x**2)

# Shoelace formula: area of the closed polygon defined by the ordered vertices (x, y)
def PolyArea(x, y):
    return 0.5 * np.abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))

print(PolyArea(x, y))
```
Sources:
[https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates](https://stackoverflow.com/questions/24467972/calculate-area-of-polygon-given-x-y-coordinates)
[https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Voronoi.html](https://docs.scipy.org/doc/scipy/reference/generated/scipy.spatial.Voronoi.html)
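Alternatively, if the rectangular (lat/lon bounding box) definition from the question is enough, a plain pandas groupby does it directly. A sketch, assuming the dataframe has `lat`, `lon` and the `cluster` column you already added (adjust the column names to your data):
```
import pandas as pd

stats = (
    df[df['cluster'] != -1]                      # drop DBSCAN noise points, labelled -1
      .groupby('cluster')
      .agg(lat_min=('lat', 'min'), lat_max=('lat', 'max'),
           lon_min=('lon', 'min'), lon_max=('lon', 'max'),
           n_points=('lat', 'size'))
)
# Bounding-box area per cluster, as defined in the question
stats['area'] = (stats['lat_max'] - stats['lat_min']) * (stats['lon_max'] - stats['lon_min'])
print(stats[['area', 'n_points']])
```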
|
How can we evaluate DBSCAN parameters?
|
[OPTICS](http://www.dbs.informatik.uni-muenchen.de/Publikationen/Papers/OPTICS.pdf) gets rid of $\varepsilon$, you might want to have a look at it. Especially the reachability plot is a way to visualize what good choices of $\varepsilon$ in DBSCAN might be.
Wikipedia ([article](https://en.wikipedia.org/wiki/OPTICS_algorithm)) illustrates it pretty well. The image on the top left shows the data points, the image on the bottom left is the reachability plot:
[](https://i.stack.imgur.com/TamFM.png)
The $y$-axis are different values for $\varepsilon$, the valleys are the clusters. Each "bar" is for a single point, where the height of the bar is the minimal distance to the already printed points.
|
116142
|
1
|
116283
| null |
0
|
197
|
In order to make a classifier dead easy to understand/interpret, I want to classify tabular data (with `n` columns) according to a set of nested rules, with the constraint that the number of decision nodes is equal to the depth of the tree. Given a vector `x` with `n` components, the classification logic will thus look like:
```
def classify(x):
if x[0] < t_0:
if x[1] < t_1:
if x[2] < t_2:
if x[3] < t_3:
...
if x[m-1] < t_m_minus_1:
return a_m_minus_1
else:
return a_m_minus_2
...
else:
return a_3
else:
return a_2
else:
return a_1
else:
return a_0
```
with `m <= n`, so that none of the `else` branches contains a nested `if` statement. As a consequence, the number `m` of decision nodes will be equal to the depth of the tree.
Graphically (in the case `m = 3`), this will look like:
[](https://i.stack.imgur.com/kcsfF.png)
A Sankey diagram can also help visualize this:
[](https://i.stack.imgur.com/1JPRz.png)
Incidentally, I would like to use this classifier as a multi-class classifier (so not necessarily binary).
Also, ideally I'd like "else" leaf nodes to have very low Gini impurity index. Each node should be split according to a condition on a single feature only.
Is there a way to train a decision tree in scikit-learn while enforcing this constraint? What could be another library/approach to optimize such a classifier? (I would avoid coding a greedy algorithm from scratch if that's reinventing the wheel).
|
How can I train a decision tree constrained to have number of decision nodes = tree depth?
|
CC BY-SA 4.0
| null |
2022-11-13T18:06:33.717
|
2022-12-01T17:37:33.350
|
2022-12-01T17:37:33.350
|
50519
|
50519
|
[
"classification",
"scikit-learn",
"decision-trees"
] |
The structure you want seems to be expressable with an ordered series of if, else if, ... statements. This is a common structure for interpretable models, often called a Rule List, or Decision List. It is discussed in chapter ["Rules" in the book Interpretable Machine Learning](https://christophm.github.io/interpretable-ml-book/rules.html) by Christoph Molnar.
There are several Python libraries that implements learning of a Rule List.
The [imodels](https://github.com/csinva/imodels) library in the submodule [imodels.rule_list](https://csinva.io/imodels/rule_list/index.html) implements many methods that can produce Rule List models, such as Optimal rule list (CORELS), Bayesian rule list, Greedy rule list and OneR rule list.
The GreedyRuleListClassifier is probably the closest to your intent; the authors describe it as "like a decision tree that only ever splits going left".
OneR only considers one feature in total, which is an additional restriction.
The Optimal Rule list and Bayesian rule lists requires discretizing continuous features. This can for example be done using quantile binning, or another model to find relevant/candidate breakpoints. So it is considerably more involved, but may lead to better decision lists, especially if using the probabilistic outputs.
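For the discretization step, something like scikit-learn's `KBinsDiscretizer` with the quantile strategy can be used (a sketch on toy data, not tied to the example below):
```
import numpy as np
from sklearn.preprocessing import KBinsDiscretizer

X = np.random.default_rng(0).normal(size=(200, 3))   # toy continuous features

# Quantile binning: each feature is cut into 4 bins with (roughly) equal counts
disc = KBinsDiscretizer(n_bins=4, encode='ordinal', strategy='quantile')
X_binned = disc.fit_transform(X)

print(disc.bin_edges_[0])   # candidate breakpoints for the first feature
```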
Example code for a `GreedyRuleListClassifier` may go as follows:
```
import pandas
import sklearn
import sklearn.datasets
from sklearn.model_selection import train_test_split
from imodels import GreedyRuleListClassifier
X, Y = sklearn.datasets.load_breast_cancer(as_frame=True, return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.3, random_state=4)
# NOTE: fitting sometimes fails with an Exception, or gives a model with very bad performance.
# You should attempt multiple fits and keep the best one, as estimated on a validation set.
model = GreedyRuleListClassifier(max_depth=10)
model.fit(X_train, y_train, feature_names=X_train.columns)
y_pred = model.predict(X_test)
from sklearn.metrics import accuracy_score
score = accuracy_score(y_test.values,y_pred)
print('Accuracy:\n', score)
print('Rule list:\n')
print(model)
```
Should output something like
```
Accuracy:
0.631578947368421
Rule list:
mean 0.603 (398 pts)
if worst area >= 869.3 then 0.908 (262 pts)
mean 0.015 (136 pts)
if worst texture >= 16.67 then 1.0 (1 pts)
mean 0.007 (135 pts)
if area error >= 22.18 then 0.2 (5 pts)
mean 0 (130 pts)
```
Here is a quick attempt at visualizing this with a Sankey diagram.
```
def plot_decision_rules_sankey(ax, rules):
# https://matplotlib.org/stable/api/sankey_api.html
# TODO: read the arguments
from matplotlib.sankey import Sankey
def format_rule(r):
op = '>='
s = f"{r['col']}\n {op} {r['cutoff']}\np={r['val_right']:.2f}"
return s
    df = pandas.DataFrame.from_records(rules)  # use the rules passed in, rather than the global model
print(df)
df = df.dropna()
df['label'] = df.apply(format_rule, axis=1)
df['orientation'] = [1] * len(df)
df['out'] = df['num_pts'] / df['num_pts'].sum()
p = Sankey(ax=ax,
margin=0.0,
format='',
flows=[0.0] + list(df['out'] * -1),
labels=['Input'] + list(df['label']),
orientations=[0] + list(df['orientation']),
).finish()
ax.axis('off')
from matplotlib import pyplot as plt
fig, ax = plt.subplots(1, figsize=(8, 6))
plot_decision_rules_sankey(ax, model.rules_)
fig.tight_layout()
fig.savefig('decision-rules-sankey.png')
```
[](https://i.stack.imgur.com/54FMT.png)
|
What are the factors to consider when setting the depth of a decision tree?
|
Yes, but it also means you're likely to overfit to the training data, so you need to find the value that strikes a balance between accuracy and properly fitting the data. Deciding on the proper setting of the `max_depth` parameter is the task of the tuning process, via either Grid Search or Randomised Search with cross-validation.
This page from the scikit-learn documentation explains the process well: [https://scikit-learn.org/stable/modules/grid_search.html](https://scikit-learn.org/stable/modules/grid_search.html)
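A minimal sketch of such a search (the dataset and the candidate depth range are placeholders):
```
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Cross-validated grid search over candidate tree depths
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={'max_depth': list(range(2, 11))},
    cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```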
|
116165
|
1
|
116167
| null |
0
|
1046
|
I want to select rows by the maximum values of another column which would be the duplicated rows containing duplicated maximum values of a group.
This should contain three steps:
(1) group dataframe by column A;
(2) get duplicated rows with duplicated maximum values of column B;
(3) get rows if it contains maximum values of column C (if it is still duplicated, pick the first).
Example:
```
df_test = pd.DataFrame({'A':[1,1,2,3,4,2,4,3,3,2],
'B':[3,3,2,4,5,2,5,3,4,3],
'C':[80,85,88,90,70,83,85,90,90,70]})
```
[](https://i.stack.imgur.com/V0GY7.png)
```
df_result=pd.DataFrame({'A':[1,2,3,4],
'B':[3,3,4,5],
'C':[85,70,90,85]})
```
[](https://i.stack.imgur.com/8uyHa.png)
|
Select rows containing the max value based on duplicated rows of a group
|
CC BY-SA 4.0
| null |
2022-11-14T18:03:19.583
|
2022-11-14T19:11:52.487
| null | null |
142778
|
[
"pandas",
"dataframe",
"python-3.x"
] |
This is more of a programming than a data science question, and would therefore be better suited for stackoverflow, but this can be achieved relatively easily using a combination of sorting and grouping:
```
(
df
    # sort such that the first row within each group is the one you want
.sort_values(["A", "B", "C"], ascending=[True, False, False])
# group based on column A
.groupby("A")
# select the first row within each group
.first()
# reset the index such that A is a column instead of the index
.reset_index()
)
```
Which gives the following result:
|A |B |C |
|-|-|-|
|1 |3 |85 |
|2 |3 |70 |
|3 |4 |90 |
|4 |5 |85 |
|
Replace values based on the number of duplicate rows that occur
|
I would do this manually. First, let us create the index set of entries whose state must remain active. To do this, I iterate over all rows and record active instances. Note that a later occurrence overrides earlier ones, so we keep only the last occurrence of an active event.
```
last_active = dict()
for i, row in df.iterrows():
if row['Active'] == 'Active':
last_active[row['site']] = i
keep_active = last_active.values()
```
Now I assign the state 'Active' to those entries whose index is in `keep_active` and `InActive` otherwise.
```
df['refined_active'] = df.apply(lambda x: 'Active' if x.name in keep_active else 'InActive', axis=1)
```
|