H: How to use BERT in seq2seq model?
I would like to use pretrained BERT as the encoder of a transformer model. The decoder has the same vocabulary as the encoder and I am going to use shared embeddings. But I need <SOS> and <EOS> tokens, which are not trained with BERT. How should I get them? Can I use the <CLS> token as <SOS> and <SEP> as <EOS>? Or do I have to create these two embeddings as trainable Variables and concatenate them to the decoder input / labels?
AI: In principle, it is possible to reuse the special tokens as you describe.
However, according to research, you should not freeze BERT, but fine-tune the whole model with your data, in order to obtain better translation quality.
Another option would be to reuse just the embeddings instead of the whole model.
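For illustration, here is a minimal sketch of this setup using the HuggingFace transformers library (an assumption, since the question does not name a framework); it reuses [CLS] as the decoder start token and [SEP] as the end-of-sequence token:
from transformers import BertTokenizer, EncoderDecoderModel
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# warm-start both encoder and decoder from BERT; the whole model is then fine-tuned
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)
# reuse BERT's special tokens as suggested above
model.config.decoder_start_token_id = tokenizer.cls_token_id  # plays the role of <SOS>
model.config.eos_token_id = tokenizer.sep_token_id            # plays the role of <EOS>
model.config.pad_token_id = tokenizer.pad_token_id
# after fine-tuning on your parallel data, generation works like this:
inputs = tokenizer("An example source sentence.", return_tensors="pt")
generated = model.generate(inputs.input_ids, max_length=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
After fine-tuning the whole model (in line with the advice above), generation stops when the [SEP]/<EOS> token is produced. |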
H: Choosing best model produced from different algorithms. Metric produced by cross-validation on the train set or metric produced on the test set?
I know that when choosing between models produced by one algorithm with different hyperparameters, the metric for choosing the best one should be cross-validation on the train set.
But what about choosing between models that are produced by different algorithms? By which metric should I compare them, the metric produced by cross-validation on the train set or the metric produced on the test set? And why?
AI: When you are comparing different algorithms, you strictly use the test set to compare performance. That is, you retrain all of your candidate models using your entire train set (using cross-validation or whatever) and make predictions on the test set, from which you can assess model performance. These scores are the only unbiased measures of predictive performance that you have.
The reason why we compare scores based on the completely unseen test set, and not the training set, is to avoid optimization bias during the hyperparameter tuning step. Basically, there is a very large probability that you selected a hyperparameter combination that is overfitting the training set during your cross-validation. A relatively famous paper explores this phenomenon in detail. The result is that the scores you found when optimizing your hyperparameters are optimistically biased. |
H: Regression and Classification in one Neural network
For example, consider the object localization problem. Here the NN will have 5 outputs: output[0] will give the probability of an object being present in the image, and the other 4 will give the bounding box coordinates.
As we can see, output[0] has to use a classification loss like cross entropy, and output[1] to output[4] will have to use a regression loss like mean squared error.
So Total loss is something like this:
loss=Cross_entropy(output[0],Y[0])+MSE(output[1:5],Y[1:5]) #Y is true value
Are losses like that backpropagatable in vectorised form?
Can I implement that kind of loss in tensorflow?
If yes, how does tensorflow do that? Does it perform differentiation on each element of vector or matrix instead of whole thing at once?
AI: Yes, these types of loss functions can be optimized using backpropagation, also in Tensorflow.
The value of the loss is a scalar (same as just the cross entropy, or the MSE, otherwise you wouldn't be able to add them), which means that it doesn't really work any differently from optimizing any other scalar loss function. As long as the operations involved in calculating the loss function are differentiable ("smooth") enough, Tensorflow (or any other framework that does automatic differentiation) doesn't care.
Think of it this way: calculating any interesting loss function involves summing up a bunch of terms living in a higher-dimensional vector space. In this case, for some of the directions in this vector space, you apply a different function on them than for others in that vector space. Doesn't really matter to Tensorflow.
What does matter though is how you sum them: since the cross entropy and the MSE are not working with the same type of units (think dimensional analysis), you have to determine some sort of scale between them. You can view this as a hyperparameter, where you choose how important it is to have the classification correct vs how correct your bounding boxes are.
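For illustration, a minimal sketch of such a combined loss in TensorFlow/Keras follows (the layer sizes, the weighting factor alpha and the sigmoid/linear output heads are assumptions, not taken from the question):
import tensorflow as tf
def detection_loss(y_true, y_pred, alpha=1.0):
    # y[..., 0] is the object-present probability, y[..., 1:5] are the box coordinates
    cls_loss = tf.keras.losses.binary_crossentropy(y_true[..., 0:1], y_pred[..., 0:1])
    reg_loss = tf.reduce_mean(tf.square(y_true[..., 1:5] - y_pred[..., 1:5]), axis=-1)
    return cls_loss + alpha * reg_loss  # one scalar per sample, averaged by Keras
inputs = tf.keras.Input(shape=(64,))
h = tf.keras.layers.Dense(32, activation="relu")(inputs)
prob = tf.keras.layers.Dense(1, activation="sigmoid")(h)
boxes = tf.keras.layers.Dense(4)(h)
model = tf.keras.Model(inputs, tf.keras.layers.Concatenate()([prob, boxes]))
model.compile(optimizer="adam", loss=detection_loss)
Here alpha is exactly the classification-vs-localization trade-off hyperparameter mentioned above. |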
H: Are Deep Neural Networks limited to grayscale images depending on whether you use Seq. or Func. API?
When I say DNN, I mean the simple usage of densely connected neurons (not CNN).
Say we are using Keras (with the Tensorflow backend): the input_dim, using the Sequential API, can only take an integer or None value, which restricts it to grayscale images, right?
However, since the functional API can accept dimensions that aren't restricted to single integers, I would assume that it can take RGB images as opposed to only grayscale images?
If anyone can confirm or correct this, that would be great.
AI: The answer is no, they are not limited.
However, your statements seem to contain multiple misunderstandings, so let's first clarify them:
The sequential and functional APIs in Keras are different approaches for structuring the layers of a neural network. Both can have dense layers and convolutional layers.
Convolutional layers exploit information locality and therefore normally perform better on images, where information locality is key.
Dense layers, on the other hand, can only handle vectors as input, so anything we want to feed to a dense layer must be a vector. We can feed image data to a dense layer, but we must first remove the spatial organization (i.e. flatten it); that is, we must lose the information about which pixel is next to another, and which channel it is in.
With this information in mind, we can conclude that, while we can use images, either grayscale or color, as input to dense layers, we need to flatten the image as a vector, which makes dense layers in general not very appropriate to receive images as input, because the locality information is not used by them.
Update: regarding the ability for dense layers to accept inputs of more than one dimension, the multiplication takes place along the last dimension, as explained in the documentation:
Note: If the input to the layer has a rank greater than 2, then Dense computes the dot product between the inputs and the kernel along the last axis of the inputs and axis 1 of the kernel (using tf.tensordot). For example, if input has dimensions (batch_size, d0, d1), then we create a kernel with shape (d1, units), and the kernel operates along axis 2 of the input, on every sub-tensor of shape (1, 1, d1) (there are batch_size * d0 such sub-tensors). The output in this case will have shape (batch_size, d0, units).
But this is applicable to both the functional and sequential APIs.
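For illustration, a minimal sketch (with assumed image size and layer sizes) showing that a densely connected network accepts RGB images once they are flattened:
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),  # 32x32 RGB image -> vector of length 3072
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.summary()  # the first Dense layer simply sees a 3072-dimensional vector
The same model can be written with the functional API; the grayscale-vs-RGB question is independent of which API you use. |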
H: Pros and Cons of Positive Unlabeled learning?
I've been looking for papers that discuss the pros and cons of positive unlabeled learning but I haven't been able to find anything.
I'm looking to compare the general differences between framing this as a positive-unlabeled problem vs a regression/classification approach. I have a biological dataset where it's hard to definitively define a sample as negative, but I can make rules that would find something as close as possible to negative, my idea being that I can assign scores to samples (e.g. 0.1 instead of 0 to imply the lack of certainty but the closeness of the sample to being negative). However, I am trying to understand whether I should consider positive-unlabeled learning instead (in theory I could label my positive samples and ignore everything else, even if other samples are capable of having a close-to-negative label/score), but I'm struggling to find information on the pros and cons of trying positive-unlabeled learning.
AI: I don't think it's possible to know for sure if PU learning would work in your setting or not. It's certainly relevant to cases like the one you describe, so it would be worth trying. But there are other valid options, and even within PU learning there are different approaches to choose from (you might be interested in this question).
In my opinion the alternative you propose with regression makes some sense and it might work, but it's not very "clean" in terms of design: first, the choice of 0.1 is arbitrary (why not 0.2 or 0.05 or ...?). Second, it means that you're telling the regression algorithm that "this instance should have probability 0.1" for many truly negative instances and also for a few positive ones: this is different from saying "I don't know the target value for this instance".
Note that you could also consider one-class classification in this kind of setting (as part of PU learning or not). |
H: DIGITS Docker container not picking up GPU
I am running the DIGITS Docker container, but for some reason it fails to recognize the host's GPU: it does not report any GPUs (where I expect 1 to be reported), so in the upper right corner of the DIGITS home page there is no indication of any GPUs, and during the training phase DIGITS uses only the CPU.
I have GeForce GT 640 graphics card:
$ nvidia-smi -L
GPU 0: GeForce GT 640 (UUID: GPU-f2583df9-404d-2564-d332-e7878a94d087)
$ lspci
...
VGA compatible controller: NVIDIA Corporation GK107 [GeForce GT 640 OEM] (rev a1)
...
GK107 is a code name for GeForce GT 640 (GDDR5) (source: https://en.wikipedia.org/wiki/GeForce_600_series) which, according to https://developer.nvidia.com/cuda-gpus, has computing capability 3.5 (which is supported as it has to be >2.1 according to https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#installing-on-ubuntu-and-debian).
This is my docker run command:
$ docker run --gpus all -d --name digits --rm -p 8888:5000 -v /home/userx/data:/data -v /home/userx/jobs:/workspace/jobs nvcr.io/nvidia/digits:20.12-tensorflow-py3
When nvidia-smi runs from Docker container, it does see the graphics card:
$ docker exec -it digits bash
root@e58b860504a9:/workspace# nvidia-smi
Fri Feb 12 23:33:17 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03 Driver Version: 460.32.03 CUDA Version: 11.2 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce GT 640 Off | 00000000:01:00.0 N/A | N/A |
| 40% 32C P8 N/A / N/A | 260MiB / 1992MiB | N/A Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
I am using the latest version of Docker and Nvidia Docker:
$ docker --version
Docker version 20.10.3, build 48d30b5
$ nvidia-docker version
NVIDIA Docker: 2.5.0
Client: Docker Engine - Community
Version: 20.10.3
API version: 1.41
Go version: go1.13.15
Git commit: 48d30b5
Built: Fri Jan 29 14:33:21 2021
OS/Arch: linux/amd64
Context: default
Experimental: true
Server: Docker Engine - Community
Engine:
Version: 20.10.3
API version: 1.41 (minimum version 1.12)
Go version: go1.13.15
Git commit: 46229ca
Built: Fri Jan 29 14:31:32 2021
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.4.3
GitCommit: 269548fa27e0089a8b8278fc4fc781d7f65a939b
runc:
Version: 1.0.0-rc92
GitCommit: ff819c7e9184c13b7c2607fe6c30ae19403a7aff
docker-init:
Version: 0.19.0
GitCommit: de40ad0
I am running Ubuntu 20.04:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.2 LTS
Release: 20.04
Codename: focal
I installed the most recent version of NVIDIA driver for Ubuntu:
$ modinfo nvidia
filename: /lib/modules/5.4.0-65-generic/updates/dkms/nvidia.ko
alias: char-major-195-*
version: 460.32.03
supported: external
license: NVIDIA
srcversion: 9BFA7969070552C6938D8A8
alias: pci:v000010DEd*sv*sd*bc03sc02i00*
alias: pci:v000010DEd*sv*sd*bc03sc00i00*
depends:
retpoline: Y
name: nvidia
vermagic: 5.4.0-65-generic SMP mod_unload
...
Would anyone be kind to give me a hint why DIGITS running in Docker does not recognize my graphics card?
AI: I found the answer. https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#platform-requirements specifies the compute capability requirements for the NVIDIA Container Toolkit, but the compute capability requirements for the DIGITS Docker image are specified per image release. For digits:20.12, https://docs.nvidia.com/deeplearning/digits/digits-release-notes/rel_20-12.html#rel_20-12 states the following:
Release 20.12 supports CUDA compute capability 6.0 and higher.
My GPU does not meet that requirement. |
H: ImportError: Pandas requires version '0.3.0' or newer of 's3fs'
I'm trying to read files from S3, using boto3, pandas, anaconda, but I have the following error:
ImportError: Pandas requires version '0.3.0' or newer of 's3fs'
(version '0.1.6' currently installed).
How can I update the s3fs version?
This is my code:
import boto3
import pandas as pd
s3 = boto3.resource('s3')
bucket= s3.Bucket('bucketname')
files = list(bucket.objects.all())
files
objects = bucket.objects.filter(Prefix='bucketname/')
objects = bucket.objects.filter(Prefix="Teste/")
file_list = []
for obj in objects:
    df = pd.read_csv(f's3://bucketname/{obj.key}')
    file_list.append(df)
final_df = pd.concat(file_list)
print (final_df.head(4))
AI: The error message is telling you exactly what is wrong here - you need to update the Python package.
Steps:
Open your terminal
Type the command:
pip install s3fs --upgrade |
H: What is the difference between Okapi bm25 and NMSLIB?
I was trying to make a search system and then I got to know about Okapi bm25 which is a ranking function like tf-idf. You can make an index of your corpus and later retrieve documents similar to your query.
I imported a python library rank_bm25 and created a search system and the results were satisfying.
Then I saw something called the Non-metric space library (nmslib). I understood that it's a similarity search library, much like the kNN algorithm.
I saw an example where someone was trying to make a smart search system using nmslib. He did the following things:
tokenized the documents
pass the tokens into fastText model to create word vectors
then combined those word vectors with bm25 weights
then passed the combination into nmslib
performed the search.
If the above link does not open the document, just open it in incognito mode.
It was quite fast, but the results were not satisfying; even if I copy-pasted an exact query from a doc, it was not returning that doc. But the search system that I made using rank_bm25 was giving great results. So the conclusion was:
bm25 gave good results and nmslib gave faster results.
My questions are
How do they both (bm25, nmslib) differ?
How can I pass bm25 weights to nmslib to create a better and faster search engine?
In short, how can I combine the goodness of both bm25 and nmslib?
AI: Note that I don't know nmslib and I'm not familiar with search optimization in general. However I know Okapi BM25 weighting.
How do they both (bm25, nmslib) differ?
These are two completely different things:
Okapi BM25 is a weighting scheme which has a better theoretical basis than the well known TFIDF weighting scheme. Both methods are intended to score words according to how "important" they are in the context of a document collection, mostly by giving more weight to words which appear rarely. As a weighting scheme, Okapi BM25 only provides a representation of the documents/queries, what you do with it is up to you.
nmslib is an optimized similarity search library. I assume that it takes as input any set of vectors for the documents and the query. So one could provide it with vectors made of raw frequencies, TFIDF or anything else. What it does is just compute (as fast as possible) the most similar documents to a query, using whatever representation of documents is provided.
How can I pass bm25 weights to nmslib to create a better and faster search engine?
Since you mention that the results based on BM25 are satisfying, it means that the loss of quality is due to the nmslib search optimizations. There's no magic: the only way to make things fast is to do fewer comparisons, and sometimes that means discarding a potentially good candidate by mistake. So the problem is not about passing the BM25 weights, it's about understanding and tuning the parameters of nmslib: there are certainly parameters which allow the user to select an appropriate trade-off between speed and quality.
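For illustration, a sketch of the kind of knobs to look at (the parameter values are arbitrary, and doc_vectors / query_vector are assumed to be numpy arrays you already built, e.g. the BM25-weighted fastText vectors):
import nmslib
index = nmslib.init(method='hnsw', space='cosinesimil')
index.addDataPointBatch(doc_vectors)
# higher efConstruction / M: slower index build, better recall
index.createIndex({'efConstruction': 400, 'M': 32}, print_progress=True)
# higher efSearch: slower queries, but fewer good candidates missed
index.setQueryTimeParams({'efSearch': 200})
ids, distances = index.knnQuery(query_vector, k=10)
Increasing these values trades back some of the speed for quality, which is exactly the trade-off described above. |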
H: Learning Curves and interpretations
I've trained 4 classifiers on an undersampled dataset.
I plotted the learning curve for each classifier and I got the following results :
I see that for the Log Reg, both curves seem to converge and that adding more data will not help at some point.
For the SVC I have no idea (rather than adding more data seems good ! )
for Knn : adding more data will increase both accuracy
for Random Forest : I have no idea.
I would love to understand how to read these curves. Thank you very much ! :)
AI: In general, the further away the green line is from the red line, the more the model is overfitting; however, eventually enough data will cure all overfitting (there will be so much data the model can't possibly memorize all of it), and that's why the lines converge (the model stops memorizing, the red line goes down, it starts generalising, the green line goes up). Some models need more data to learn than others, however, and so as you can see, the LogisticRegression model reaches its best performance much faster than, for example, the SVC.
An interesting case is the KNN, whose red line doesn't go down, but rather up. I'm pretty sure the reason for this has to do with how KNN works: it compares instances it knows in order to classify new instances. Thus, the KNN doesn't really memorize. New instances it can compare with will never hinder its performance on the training set (red line). However, it too will eventually converge, with the two lines coming together. |
H: ImportError: cannot import name 'cv2' from 'cv2'
I'm using anaconda and installed OpenCV using conda-forge.
conda install -c conda-forge opencv
In my notebook I run this line of code
from cv2 import cv2
Unfortunately, get this error message:
ImportError: cannot import name 'cv2' from 'cv2' (C:\Users\...\Anaconda3\envs\...\lib\site-packages\cv2.cp38-win_amd64.pyd)
The weird thing is importing cv2 and running its functions works just fine.
# Works just fine
import cv2
img = cv2.imread('snek.jpg')
Here is some information about my system, if that helps:
conda version : 4.9.2
conda-build version : 3.20.5
python version : 3.8.5.final.0
platform : win-64
AI: If you want to use the OpenCV module, you import it by running import cv2. The code you are trying to run imports a function/module called cv2 from within the cv2 package, which does not exist. |
H: What is the best practice for tuning hyperparameters using validation data?
I'm building a binary classifier, using task-transfer from resnet and a total training set of 300 images.
Initially I put aside 100 images as validation, and tuned the hyperparameters, each time training on 200 and testing on 100, until I got a validation accuracy of 93%.
Happy with this accuracy, I tried the same parameters on the test set (another 170 images) and got really bad accuracy (around 65%).
What did I do wrong?
Should I have used cross validation?
What is the best practice here? How should I go about tuning my hyperparameters?
Can I repeat my process and check on the test set again? If so, how many times can I do this before it's "cheating"
AI: If you're seeing performance that is much better on the validation than the unseen test data, then that is suggestive of some sort of overfitting or, if not, that the data do not come from the same distribution. That could mean that your test images are very different from the training and validation data, for example.
First, I'd double check the data to make sure the train, validation and test sets are definitely distinct, and that all three sets have roughly the same number of positive and negative examples. If this isn't obviously the problem, then most likely I'd guess that you are overfitting the hyperparameters to the validation dataset.
Using cross-validation would most likely help you to see whether this is the case, as you'd see more dramatic variation between each fold if the parameters were overfit towards one specific validation fold.
If you repeat the process on the test dataset, then that data is no longer "unseen" and is part of the model, even if you just use it to tune hyperparameters. You should keep the test dataset completely separate from the model building process so that when you come to measuring model performance, you get a true estimation of what would happen on unseen data. |
H: Confusion about the Bellman Equation
In some resources, the Bellman equation is shown as below:
$v_{\pi}(s) = \sum\limits_{a}\pi(a|s)\sum\limits_{s',r}p(s',r|s,a)\big[r+\gamma v_{\pi}(s')\big] $
The thing that confuses me is the $\pi$ and $p$ parts on the right hand side.
Since the probability part, $p(s',r|s,a)$, gives the probability of ending up in the next state ($s'$), and since reaching the next state ($s'$) has to happen by following a specific action, it seems that the $p$ part already includes the probability of taking that specific action.
But then, why is $\pi(a|s)$ written at the beginning of the equation? Why do we need it? Isn't the probability of taking an action already accounted for in the $p(s',r|s,a)$ part?
AI: $p(s', r | s, a)$ is the probability of arriving at state $s'$ and obtain reward $r$ given that the environment was in state $s$ and the agent took action $a$. Therefore, this probability is defined assuming action $a$ is taken. There is no probability of taking $a$ included there.
The probability of the agent taking an action is provided by the policy $\pi$, and that is why we need it in the equation.
You can think of the interaction of these two terms with the law of total probability: $p(A)=\sum _{n}p(A\mid B_{n})p(B_{n})$, where $p(B_{n})$ is analogous to $\pi(a|s)$ and $p(A\mid B_{n})$ is analogous to $p(s', r | s, a)$.
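To make the two roles explicit, the equation can be read in two steps via the action-value function $q_{\pi}$ (this is just a rewriting of the formula above, nothing new is assumed):
$q_{\pi}(s,a) = \sum\limits_{s',r}p(s',r|s,a)\big[r+\gamma v_{\pi}(s')\big]$, where the environment dynamics $p$ are used and the action $a$ is already fixed;
$v_{\pi}(s) = \sum\limits_{a}\pi(a|s)\,q_{\pi}(s,a)$, where the agent's choice of action is weighted by the policy $\pi$.
Substituting the first line into the second recovers the Bellman equation above. |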
H: How to use inverse_transform in MinMaxScaler for pred answer in a matrix
I am working on a dataset; for predicting the output, I used SVR with the code below:
from sklearn.svm import SVR
regressor = SVR(kernel = 'linear')
regressor.fit(trainX,trainY)
from sklearn.metrics import r2_score
pred = regressor.predict(testX)
print(pred)
The answer is: [0.58439621 0.58439621 0.58439621 ... 0.81262134 0.81262134 0.81262134]. I'm trying to invert the scaling to get the real amounts.
I searched on StackOverflow and found this: https://stackoverflow.com/questions/49330195/how-to-use-inverse-transform-in-minmaxscaler-for-a-column-in-a-matrix. I implemented both answers in my code, but I still get an error. Can anyone help me with this?
I wrote this based on the above source:
import sklearn
from sklearn.preprocessing import MinMaxScaler
scale=sklearn.preprocessing.MinMaxScaler()
scale.min_,scale.scale_=scaler.min_[0],scaler.scale_[0]
scale.inverse_transform(pred)
but I got the same error:
Expected 2D array, got 1D array instead:
array=[0.58439621 0.58439621 0.58439621 ... 0.81262134 0.81262134 0.81262134].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
AI: From my understanding you are working on a regression task in which you have applied MinMaxScaler to your target variable y prior to modeling.
If so you have two options:
As the error message suggests, you can reshape the output with array.reshape(-1, 1)
Scikit learn has implemented a class to work with transformations on target:
So just try
from sklearn.svm import SVR
from sklearn.compose import TransformedTargetRegressor
from sklearn.metrics import r2_score
from sklearn.preprocessing import MinMaxScaler
regressor = SVR(kernel = 'linear')
model = TransformedTargetRegressor(regressor= regressor,
transformer = MinMaxScaler()
).fit(trainX,trainY)
pred = model.predict(testX)
print(pred)
Every time you call model.predict(X) it will apply the inverse transformation, so that your predictions are on the same scale as the target before the MinMaxScaler was applied.
EDIT:
Working example of the inverse transformation without using Scikit-learn:
import numpy as np
# array example is between 0 and 1
array = np.array([0.58439621, 0.81262134, 0.231262134, 0.191])
# the original target was scaled from 100 to 250
minimo = 100
maximo = 250
array * (maximo - minimo) + minimo
Returns:
array([187.6594315, 221.893201 , 134.6893201, 128.65     ])
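If you keep the fitted scaler around instead (option 1), a minimal sketch would be the following, assuming scaler is the MinMaxScaler that was fitted on the target:
# pred is 1D but the scaler expects a 2D array, hence the reshape
pred_original = scaler.inverse_transform(pred.reshape(-1, 1)).ravel()
This avoids hard-coding the min and max values by hand. |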
H: Learning rate of 0 still changes weights in Keras
I just trained a model (SGD) with keras and was wondering why the change of accuracy and loss from epoch to epoch doesn't really decrease that much when I lower the learning rate. So I tested what happens when I set the learning rate to 0 and to my surprise, accuracy and loss still changed from epoch to epoch and I can't find an explanation for that. Does anyone know why this could be happening?
AI: If your learning rate is set lower, training will progress very slowly because you are making very tiny updates to the weights. However, if you set learning rate higher, it can cause undesirable divergent behavior in your loss function. So when you set learning rate lower you need to set higher number of epochs.
The reason for the change when you set the learning rate to 0 is because of BatchNorm. If you have BatchNorm in your model, remove it and try again.
Look at these links: link, link |
H: Performing anomalie detection on a battery volatge using LSTM-RNN
I am trying to detect anomalies in a battery output voltage for one month.
I have the following data frame. As shown, the data is collected each minute of each day, so I have almost 1420 samples per day.
Should I use the 'time' or the 'date' column in my time series analysis?
I could not find an example like my situation; all the dataframes I found look like the following:
They are all about a unique value for each day. Any help is appreciated!
Thank you in advance!
AI: I would use the "date" and "time" columns to pre-process your data and to construct your neural net input.
RNNs do not work well for very long-term dependencies, so, for example, creating a time series with all the minutes in a month probably won't work.
You must select:
How many samples your input data will have
What is your sampling period (minute, hour, day, week...)
What you want to detect: whether the input data contains an anomaly, or whether an anomaly will occur in the future...
When you have all of this (and perhaps something else) defined, you will have to use your "date" and "time" columns to create the dataset (the time series).
Furthermore, I don't know if all batteries are checked under the same circumstances; for instance, are all of them new stock ones? If not, you can also use those columns to compute battery age or "working time" as an extra feature.
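For illustration, a minimal sketch of that pre-processing step (the column names "date", "time" and "voltage" are assumptions standing in for your actual columns):
import pandas as pd
df["timestamp"] = pd.to_datetime(df["date"].astype(str) + " " + df["time"].astype(str))
df = df.set_index("timestamp").sort_index()
# e.g. work at an hourly resolution instead of one sample per minute
hourly_voltage = df["voltage"].resample("1H").mean()
Fixed-length windows can then be cut out of this series to form the RNN input.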
H: How to decrease $R^2$ value and change it to positive value
I'm working on a dataset and using regression, as you can see below:
from sklearn.svm import SVR
regressor = SVR(kernel = 'linear')
regressor.fit(trainX,trainY)
The above returns:
SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma='scale',
kernel='linear', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
from sklearn.metrics import r2_score
pred = regressor.predict(testX)
SVM_R2 = print('r2= ' +str(r2_score(testY,pred)))
import matplotlib.pyplot as plt
plt.plot(testY, 'r')
plt.plot(pred,'g' )
plt.ylabel("pred and testY")
plt.xlabel("")
plt.show()
I want to implement 2 changes:
make $R^2$ positive,
make $R^2$ nearer to 1.
How can I do this?
AI: Apart from the considerations about the quality of the data or whether or not the model is suitable for the problem, one good approach is to try different combinations of the algorithm's parameters (using cross-validation) to come up with the best possible model.
I mean, you can do a grid search or a randomized search to find out which combination of the regression algorithm's parameters works better (for an SVR you have the $kernel$, $gamma$, $C$...). Fortunately, scikit-learn has it already implemented:
sklearn.model_selection.RandomizedSearchCV
sklearn.model_selection.GridSearchCV
There are more available methods:
https://scikit-learn.org/stable/modules/classes.html#hyper-parameter-optimizers
The key is:
Telling the search algorithm which are the target parameters
Telling the search algorithm which score is used to assess which combination of parameters is the best one (it can be accuracy, MAE, MSE...)
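For illustration, a minimal sketch of such a search for the SVR (the grid values are just examples, and trainX / trainY are the variables from the question):
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR
param_grid = {
    "kernel": ["linear", "rbf"],
    "C": [0.1, 1, 10, 100],
    "gamma": ["scale", "auto"],
    "epsilon": [0.01, 0.1, 1],
}
search = GridSearchCV(SVR(), param_grid, scoring="r2", cv=5, n_jobs=-1)
search.fit(trainX, trainY)
print(search.best_params_, search.best_score_)
The refitted search.best_estimator_ can then be evaluated on the test set as before. |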
H: What is the difference between spiral, flame, aggregation data
What is the difference between spiral, flame and aggregation data? What are the names of the columns, or what do the columns indicate?
For example, spiral is like to:
31.95 7.95 3
31.15 7.3 3
30.45 6.65 3
29.7 6 3
28.9 5.55 3
28.05 5 3
27.2 4.55 3
26.35 4.15 3
25.4 3.85 3
24.6 3.6 3
23.6 3.3 3
22.75 3.15 3
21.85 3.05 3
20.9 3 3
Flame is like to:
1.85 27.8 1
1.35 26.65 1
1.4 23.25 2
0.85 23.05 2
0.5 22.35 2
0.65 21.35 2
1.1 22.05 2
1.35 22.65 2
1.95 22.8 2
Aggregation is like to:
15.55 28.65 2
14.9 27.55 2
14.45 28.35 2
14.15 28.8 2
13.75 28.05 2
13.35 28.45 2
13 2.15 2
13.45 27.5 2
13.6 26.5 2
12.8 27.35 2
I searched but couldn't find a source that illustrates the difference or what the columns indicate.
AI: Generally, without knowing the source of the data, we can't tell you much about the columns. But I assume the first two columns correspond to $x$ and $y$. The third is probably some meta-data? Maybe a cluster number?
For an illustration I found this figure coming from this publication. Maybe that helps you imagine the data's shape.
https://www.researchgate.net/publication/338167806_A_Fast_Method_for_Estimating_the_Number_of_Clusters_Based_on_Score_and_the_Minimum_Distance_of_the_Center_Point |
H: Is there any difference between feature selection and PCA? If there is, could anyone please kindly explain it to me?
First of all, sorry for asking a possibly beginner question, but I don't understand: PCA seems to be the same as feature selection, but when I search online they seem to be discussed differently. What people usually say is PCA is for reducing dimensionality and feature selection is for selecting features. Don't those two do the same thing, i.e. reduce the number of attributes/features/dimensions? Please kindly help me understand. Thank you.
AI: This answer emphasizes an intuitive understanding since the OP is a beginner.
(1) PCA can be used for Feature Selection, in a special case, when the features are already uncorrelated and the 'relevant' features are embedded in a lower-dimensional sub-space.
(2) PCA can be used for Feature Extraction, when the features are correlated. Based on variance of the data among the transformed features, we may now choose to do Feature Selection from among the transformed features.
Figure 1 should elucidate these ideas. There are 5 'original' features in the data (x1,...,x5). First compute a covariance matrix. Note that, if the original features were completely uncorrelated, then the covariance matrix would be a diagonal matrix, where the values on the principal diagonal are equal to the variance in each dimension. See Figure 2.
The next step is an Eigenvector analysis of the covariance matrix. The Eigenvectors provide the transformed features. These transformed features have lower correlation (assume uncorrelated for simplicity) than the original features. We can now choose to do Feature Selection by picking the transformed features based on the variance. The Eigenvalues provide this variance. If the data happens to be embedded in a sub-space, which is the case in this hypothetical example, we simply pick the Eigenvector(s)/Transformed Feature(s) with the distinctly highest Eigenvalue(s)/Variance(s).
Figure 1
Figure 2
To round up your understanding of covariance, correlation and eigenvector analysis, consider the following hypothetical example of data with 3 original features/traits A, B, and C. The example is illustrated in Figure 3.
In Figure 3:
(A) . Covariance matrix for three traits A , B , and C . The diagonal elements are the variances, and the off-diagonal elements are the covariances. (B) . Correlation matrix for the same three traits. Off-diagonal elements are the product-moment correlations among the traits. (C) . Eigenvalues for the principal components (eigenvectors) of the covariance matrix (diagonal elements). Notice that the covariances among the three principal components are all zero (off-diagonal elements) and that the sum of the eigenvalues is equal to the sum of the variances in A. (D) . Eigenvectors of the covariance matrix. These numbers can be thought of as loadings of each trait on each principal component, or as angle (in radians) that each principal component must be turned to be aligned with the original variable axes. (E) . The percentage of the sum the original variances explained by each principal component. (F) . The spatial relationships between data, trait axes, and principal components. Traits A , B , and C are correlated, so lie in a linear cloud in the 3D space formed by their trait axes. The principal components are the major axes through those data, lying at angles to the original trait axes described by the eigenvectors in D and having variances along each principal component axis described by the eigenvalues in C.
Figure 3
TL;DR
PCA is NOT feature selection. However, you can select from among the transformed features that have been decorrelated by PCA.
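For illustration, a minimal sketch of that second step in scikit-learn (X is assumed to be a numeric feature matrix):
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
X_std = StandardScaler().fit_transform(X)
pca = PCA().fit(X_std)
print(pca.explained_variance_ratio_)  # variance carried by each transformed feature
# keep only enough transformed features to explain 95% of the variance
X_reduced = PCA(n_components=0.95).fit_transform(X_std)
Selecting among the original columns (feature selection proper) would instead be done with tools such as those in sklearn.feature_selection. |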
H: Why are mBART50 language codes in an unusual format?
I am trying to use mBART for multilingual translation (about 30 languages), but I am facing an issue with using it: I am currently using langid to identify the languages, then I load mBART and translate based on the language code that has been identified. But mBART uses an odd format for language codes, for example:
en_XX -> English
hi_IN -> Hindi
ro_RO -> Romanian
Whereas Langid outputs them in this format:
af, am, an, ar, as, az, be, bg, bn, br
I cannot seem to find any documentation on how to interpret the mBART language code as even the research paper does not include it.
AI: It encodes the language and its regional variant, the same way as locales are encoded. hi_IN then means Hindi as spoken in India, en_US would mean American English, en_GB British English. My guess is that en_XX means English in general.
Anyway, the first part of the locale code is the ISO 639-1 language code, which is the same one langid uses.
Btw. langid works fine for documents, but not that well for isolated sentences. For isolated sentences, a pre-trained FastText classifier delivers much better results.
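For illustration, a minimal sketch of bridging the two formats by matching on the part before the underscore (the list below only contains the codes mentioned in the question; extend it with the codes your mBART checkpoint supports):
mbart_codes = ["en_XX", "hi_IN", "ro_RO"]
iso_to_mbart = {code.split("_")[0]: code for code in mbart_codes}
detected = "hi"                    # e.g. the output of langid
src_lang = iso_to_mbart[detected]  # -> "hi_IN"
If a detected language is missing from the mapping, that language is simply not covered by the checkpoint. |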
H: Multiplying a dataframe by a larger one
I have two dataframes df1 and df2 with the same columns but not the same row number.
I want to multiply them element-wise such that the smallest one (df1) fits into the first corresponding rows of the largest one (df2), and gives 0 for the remaining cells. I tried df1.mul(df2) but this gave me a DataFrame full of NaNs.
Could anyone help please?
AI: If you just multiply the two dataframes, the missing rows will be filled with NaN (missing) values. You can then simply replace all these with 0.0, or any other value.
Here is an example:
In [1]: import pandas as pd
In [2]: df1 = pd.DataFrame(range(6), columns=["A"])
In [3]: df2 = pd.DataFrame(range(8), columns=["A"]) # different length
In [4]: df3 = df1 * df2
In [5]: df3 # Look at the Not-a-Number values
Out[5]:
A
0 0.0
1 1.0
2 4.0
3 9.0
4 16.0
5 25.0
6 NaN
7 NaN
In [6]: df3.fillna(0.0) # fill those NaN values with zero
Out[6]:
A
0 0.0
1 1.0
2 4.0
3 9.0
4 16.0
5 25.0
6 0.0
7 0.0 |
H: Keras/Tensorflow: model.predict() returns a list. How do I match the output with my class names?
I have a CNN built in Keras. I have saved it and am now using the model.predict() function to make predictions from it. Whenever I run the following code,
def prediction(path):
    import keras
    from keras.preprocessing.image import load_img, img_to_array
    from keras.models import load_model
    import PIL
    import numpy as np
    img = load_img(path)
    img = img.resize((224, 224))
    img = img_to_array(img)
    img = img.reshape(-1, 224, 224, 3)
    model = load_model('model1.h5')
    pred = model.predict(img)
    return pred
print(prediction('/path/to/image/'))
I get an output like this:
[[7.578206e-37 1.000000e+00 0.000000e+00 0.000000e+00]]
I am doing transfer learning using resnet50 with imagenet weights and here is the model.summary().
I have 4 classes. How do I find out where each prediction belongs?
I have looked here as well but it doesn't seem to help me.
Thanks
AI: Model prediction output is a bunch of probabilities. In order to get the category name you need to use the following snippet. It calculates the argmax of the predictions and uses it to index into your list of class names (in the same order as used during training):
print(CLASSES[np.argmax(predictions)]) |
H: IterativeImputer Evaluation
I am having a hard time evaluating my model of imputation.
I used an iterative imputer model to fill in the missing values in all four columns.
For the model on the iterative imputer, I am using a Random forest model, here is my code for imputing:
imp_mean = IterativeImputer(estimator=RandomForestRegressor(), random_state=0)
imp_mean.fit(my_data)
my_data_filled= pd.DataFrame(imp_mean.transform(my_data))
my_data_filled.head()
My problem is how can I evaluate my model. How can I know if the filled values are right?
I used the describe function before and after filling in the missing values; it gives me nearly the same mean and std. Also, the correlation between variables stayed nearly the same, with slight changes.
AI: When imputing data, one is looking not to modify the actual distribution of the data. So a way to test how good your imputation was is to contrast the distribution of every imputed feature against the true distribution of that feature prior to imputing (via a KS test, for example); if you can state, with some level of confidence, that your imputation preserved the distribution, that would be a way.
Another way, in case you have a supervised task, is to compare the performance of your model under each imputation technique, like in the image below from the scikit-learn documentation:
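For illustration, a minimal sketch of the distribution check (it reuses my_data and my_data_filled from your code and assumes col is the position of a column that had missing values):
from scipy.stats import ks_2samp
col = 0
before = my_data.iloc[:, col].dropna()
after = my_data_filled.iloc[:, col]
stat, p_value = ks_2samp(before, after)
print(stat, p_value)  # a large p-value gives no evidence that the distribution changed
The same comparison can be repeated for each imputed column. |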
H: Why does my manual derivative of Layer Normalization imply no gradient flow?
I recently tried computing the derivative of the layer norm function (https://arxiv.org/abs/1607.06450), an essential component of transformers, but the result suggests that no gradient flows through the operation, which can't be true.
Here's my calculations:
$\textrm{Given a vector of real numbers $X$ of length $N$, indexed as $x_i$,}\\
\textrm{we define the following operations:}\\
\mu =\frac{\sum_{k=1}^{N}{x_k}}{N}\\
\sigma = \sqrt{\frac{\sum_{k=1}^{N}{(x_k-\mu)^2}}{N}}\\
y_i=\frac{(x_i-\mu)}{\sigma}\\
\textrm{We seek to calculate the derivative of $y_i$ w.r.t $X$. That is,}\\
\frac{dy_i}{dX} = \sum^{N}_{k=1}\frac{dy_i}{dx_k}\\
\textrm{By the quotient rule:}\\
\frac{dy_i}{dx_j}=\frac{(x_i-\mu)'\sigma-(x_i-\mu)\sigma'}{\sigma^2}\\
(x_i-\mu)'=\delta_{ij}-\mu'\\
\mu'=\frac{1}{N}\\
\implies(x_i-\mu)' = \delta_{ij}-\frac{1}{N}\\
\sigma'=\frac{1}{2}(\frac{\sum_{k=1}^{N}{(x_k-\mu)^2}}{N})^{-\frac{1}{2}}*[\frac{\sum_{k=1}^{N}{(x_k-\mu)^2}}{N}]'\\
[\frac{\sum_{k=1}^{N}{(x_k-\mu)^2}}{N}]'=\frac{1}{N}\sum_{k=1}^{N}2*(x_k-\mu)(\delta_{kj}-\frac{1}{N})\\
\qquad =\frac{2}{N}\sum_{k=1}^{N}(x_k-\mu)\delta_{ij}-(x_k-\mu)\frac{1}{N}\\
\textrm{Note that $\delta_{kj}$ is only 1 when when $k=j$ and 0 otherwise, so we can further reduce:}\\
\qquad =\frac{2}{N}((x_j-\mu)-\sum_{k=1}^{N}(x_k-\mu)\frac{1}{N})\\
\qquad =\frac{2}{N}((x_j-\mu)-\frac{1}{N}\sum_{k=1}^{N}(x_k)+\frac{1}{N}\sum_{k=1}^{N}\mu)\\
\qquad =\frac{2}{N}((x_j-\mu)-\mu-\frac{1}{N}N\mu)\\
\qquad =\frac{2}{N}(x_j-\mu)\\
\textrm{Thus plugging that back into $\sigma'$ we get:}\\
\sigma'=\frac{1}{2}(\frac{\sum_{k=1}^{N}{(x_k-\mu)^2}}{N})^{-\frac{1}{2}}*\frac{2}{N}(x_j-\mu)\\
\quad=\frac{1}{N}(\frac{1}{\sigma})*(x_j-\mu)\\
\quad=\frac{(x_j-\mu)}{N\sigma}\\
\textrm{Now that we have all the components we can return to the derivative $\frac{dy_i}{dx_j}$:}\\
\frac{dy_i}{dx_j}=\frac{(x_i-\mu)'\sigma-(x_i-\mu)\sigma'}{\sigma^2}\\
\qquad=\frac{(x_i-\mu)'\sigma}{\sigma^2}-\frac{(x_i-\mu)\sigma'}{\sigma^2}\\
\qquad=\frac{\delta_{ij}-\frac{1}{N}}{\sigma}-\frac{(x_i-\mu)\frac{(x_j-\mu)}{N\sigma}}{\sigma^2}\\
\qquad=\frac{\delta_{ij}-\frac{1}{N}}{\sigma}-\frac{(x_i-\mu)(x_j-\mu)}{N\sigma^3}\\
\qquad=\frac{1}{N\sigma}(N\delta_{ij}-1-\frac{(x_i-\mu)(x_j-\mu)}{\sigma^2})\\
\qquad=\frac{1}{N\sigma}(N\delta_{ij}-1-\frac{(x_i-\mu)}{\sigma}\frac{(x_j-\mu)}{\sigma})\\
\qquad=\frac{1}{N\sigma}(N\delta_{ij}-1-y_iy_j)\\
\textrm{Finally, returning to $\frac{dy_i}{dX}$:}\\
\frac{dy_i}{dX}=\sum^{N}_{j=1}\frac{1}{N\sigma}(N\delta_{ij}-1-y_iy_j)\\
\textrm{Note that we are adding $N$ once (when $i=j$) and $(-1)$ $N$ times, so we can simplify to:}\\
\frac{dy_i}{dX}=\frac{1}{N\sigma}(N+(-1)N-\sum^{N}_{j=1}y_iy_j)\\
\quad=\frac{1}{N\sigma}(-\sum^{N}_{j=1}y_iy_j)\\
\quad=\frac{1}{N\sigma}(-y_i\sum^{N}_{j=1}y_j)\\
\quad=\frac{-y_i}{\sigma}\frac{(\sum^{N}_{j=1}y_j)}{N}\\
\quad=\frac{-y_i}{\sigma}\mu_y\\
\textrm{BUT by properties of data following a standard normal distribution $\mu_y=0$, so}\\
\frac{dy_i}{dX}=\frac{-y_i}{\sigma}0\\
\quad=0\\
\textrm{Which means no gradient flows through a layer normalization}\\\\$
I'm almost certain I've simply made a mistake somewhere, so if someone could point it out I'd greatly appreciate it. Thanks!
AI: If you're computing the deriviative of the layer norm for the purpose of using it in backprop, then you need to compute the derivative with respect to the parameters of the layer, not it's inputs.
i.e., fix $x$ and compute $\frac{\partial y }{\partial \mu }$ and $\frac{\partial y }{\partial \sigma }$
Regarding your derivation specifically
I did not look at it closely, partly because I suspect that it's not actually the derivative you care about, as I said above. But it looks like you're over-complicating things in the sense that you should treat $\mu$ and $\sigma$ as fixed variables and not as functions of $x$.
If they're functions of $x$, it may be that you're running into something like this:
You define $y = \frac {x - f(x)}{g(x)}$ and you compute $\frac{\partial y }{\partial x }$
... quotient rule ... computation... re-arrange...whatever
then you plug $f(x) = x$ in to the result and correctly observe that $\frac{\partial y }{\partial x } = 0$. Which is kind of obvious if you plug in at the start, but has been obscured because of notation. Now, where you started is not as simple as what I've written, but this same thing could happen. |
H: Is it good practice to transform some variables and not others?
I have a dataset with categorical variables encoded into numeric values, other variables that are continuous and have many outliers, and other continuous variable with a fairly normal distribution.
I was planning to use the sklearn preprocessing method PowerTransformer in order to transform all of them, but maybe it might make more sense to just use it for those columns that do not have a normal distribution at all and have many outliers?
It's for a classification problem (the Titanic machine learning one).
AI: About the question whether to scale only a subset of features, I would tell you to do it over all the features (at least the continuous numeric ones) since the goal of data-scaling is to put these data on the same "reference scale" to be fairly compared.
Nevertheless, having mixed data types (continuous numerical, categorical...) for your classification problem looks more appropriate for scale-invariant algorithms like the ones based on decision trees. More precisely, you can have a look at XGBoost, where the author explains in this link that you do not actually have to re-scale your data.
Actually, in a recent real use case at my company, we tried re-scaling the data vs. not re-scaling it when applying XGBoost, and we had better results with the second option.
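If you do decide to scale only a subset of columns, a minimal sketch with a ColumnTransformer follows (the column lists are assumptions standing in for your actual Titanic features):
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import PowerTransformer, StandardScaler
from sklearn.ensemble import RandomForestClassifier
skewed_cols = ["Fare"]   # not normal, many outliers
normal_cols = ["Age"]    # roughly normal
preproc = ColumnTransformer([
    ("power", PowerTransformer(), skewed_cols),
    ("scale", StandardScaler(), normal_cols),
], remainder="passthrough")  # encoded categorical columns pass through untouched
clf = Pipeline([("prep", preproc), ("model", RandomForestClassifier())])
The remainder="passthrough" argument is what lets you transform some variables and not others. |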
H: XLNET how to deal with text with more than 512 tokens?
From what I searched online, the XLNet model is pre-trained with 512 tokens, and from https://github.com/zihangdai/xlnet/issues/80 I didn't find much useful information on that either.
How does XLNet outperform BERT on long text when the max_sequence_length hyperparameter is less than 1024 tokens?
AI: BERT also has the same limit of 512 tokens.
Normally, for longer sequences, you just truncate to 512 tokens.
The limit is derived from the positional embeddings in the Transformer architecture, for which a maximum length needs to be imposed. The magnitude of such a size is related to the amount of memory needed to handle texts: attention layers scale quadratically with the sequence length, which poses a problem with long texts.
There are new architectures that specifically deal with longer sequences, like Longformer, Reformer, Big Bird and Linformer. |
H: What is different between R2 and mean of R2 in multiclassification probelm? Which one is correct?
I have a question. I have a big dataset (unfortunately confidential).
What I did?
I have trained my model with Naive-Bayes.
BRNBReg=BernoulliNB(alpha=0.01, binarize=0.0, fit_prior=True, class_prior=None)
BRNBReg.fit(x_train,y_train)
#CrossValidation
cv_BRNBReg_score=cross_val_score(BRNBReg,x_train,y_train,cv=9)
cv_BRNBReg_pred=cross_val_predict(BRNBReg,x_train,y_train,cv=9)
print("R2-Socre-Mean: ",cv_BRNBReg_score.mean())
print("Score: ",r2_score(y_train,cv_BRNBReg_pred))
## Prediction
BRNBReg_pred=BRNBReg.predict(x_test)
print("Score: ",r2_score(y_test,BRNBReg_pred))
My Score with CV is:
R2-Socre-Mean: 0.908198797087686
Score: 0.8673184920974637
My problem:
My Score with test-data is:
Score: 0.6244917483855538
The difference is huge! But I don't know why.
Is my model overfitting/underfitting?
Shall I tune my hyperparameters?
AI: First of all, if you are dealing with a classification problem as you said, the R2 score is not a good metric; it should be used for regression problems. For classification problems you have to use something like accuracy, precision, recall and F1 score.
First question
Anyway, I will answer your first question about R2score:
R2 mean: you are calculating the R2 for each split you have in your CV (training the model with the training data and evaluating it on the test data of that split). Then you calculate the mean over the splits. In your case, you will have 9 values of the R2 score since you have 9 CV splits, and the result is the mean of these values.
r2_score: you are evaluating y_true against y_pred. You already have y_true. To obtain y_pred you used cross_val_predict: each predicted value comes from the CV split in which that sample was in the test fold.
I would recommend you use the R2 mean; it's a more robust score in my opinion.
I recommend read this post to understand it in more detail.
Second questions
First of all, take care of what I mention: r2 is not a good score for classification problem. But let's suppose that you use another score and you have similar score outputs.
Is my model overfit\underfit?
Yes, it is. Basically, if your training accuracy is much bigger than your test accuracy, you are overfitting. Your scores should be quite similar. What you have now is a model that is "memorizing" and not learning. Be careful.
Shall I tune my hyperparams?
Yes, you have to try tuning your hyperparameters in order to achieve a similar score between the training data and the test data. You should check how the model behaves when changing hyperparameters and decide which values help you avoid overfitting. You can do it manually or using GridSearchCV, for example. |
H: Neural network type question
This web link is to a site that talks about forecasting building electricity, like a time series regression concept.
In the article they talk about the NN architecture as:
the architecture of this neural network can be written as 120:7:24
Is this an MLP-type NN? What I also don't understand is whether they account for time series methods to forecast/predict. For example, I thought that for time series applications a sliding window concept needs to be used, vs a typical regression problem that does not have any element of time. Any tips greatly appreciated!
AI: MLP typically refers to a type of neural network called a 'Multi-Layer Perceptron'. As you can read on the Wikipedia page, these neural networks consist of neurons organized in layers. Each neuron has a (typically fixed) function that transforms its weighted input to produce the output for that particular neuron known as its activation function.
A 'network architecture' refers to the way the neurons are laid out in layers in the network, sometimes including the activation function. In the given example, there are 120 neurons in the first input layer. These are connected to 7 neurons in the next layer, which in turn are connected to 24 neurons in the last output layer.
The 120 input neurons in the linked article appear to include both history (i.e. a sliding window) as well as some known future information (temperature, from weather forecasts it seems).
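For illustration, a minimal Keras sketch of a 120:7:24 dense (MLP) architecture (the activations and loss are assumptions, since only the layer sizes are given in the article):
import tensorflow as tf
model = tf.keras.Sequential([
    tf.keras.layers.Dense(7, activation="relu", input_dim=120),  # 120 inputs -> 7 hidden neurons
    tf.keras.layers.Dense(24),  # 24 outputs, e.g. one forecast per hour of the next day
])
model.compile(optimizer="adam", loss="mse")
Such a network is a plain feed-forward regressor; any time dependence is handled entirely by how the 120 input features are constructed. |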
H: How to use GridSearch for LinearSVC / Random Forest with time series data
I have a question related on how to use the GridSearch to find the best models for my problem with time series data.
Every 3 rows correspond to 1 row in the original dataset. To make my time series problem a supervised one, I parsed it like the table below. This was resolved in one of my previous questions.
| id | Age | gender | m1 | m2 | m3 | Label |
|----|-----|--------|------|-----|----|-------|
| 1 | 20 | M | 12.4 | 34 | 12 | 0 |
| 2 | 20 | M | 13 | 324 | 34 | 0 |
| 3 | 20 | M | 34 | 232 | 12 | 0 |
| 4 | 45 | F | 1.3 | 32 | 19 | 1 |
| 5 | 45 | F | 14 | 132 | 19 | 1 |
| 6 | 45 | f | 94 | 232 | 19 | 1 |
My question is: How can I use GridSearch for example to find my best machine learning model configuration using time series data? As far as I understand, using cross validation wouldn't work in this case because of the time series nature of the dataset.
I'm not sure how to proceed with this.
AI: So it's a classification problem with a grid search, without cross-validation. Yes, don't use CV on time series data. There is an option in which you can use CV, where you slowly start with less data and add more and more data during the process, but it's complex.
For the grid search there are 2 options. Either use GridSearchCV with a single predefined split covering all the data, or use ParameterGrid().
For my interest I used this method:
https://stackoverflow.com/questions/44636370/scikit-learn-gridsearchcv-without-cross-validation-unsupervised-learning/44682305#44682305
in which GridSearchCV is effectively run without cross-validation.
import pandas as pd
test = pd.DataFrame({"id":[1,2,3,4,5,6,7,8,9], "age":[20,30,32,40,55,32,20,41,38], "gender":[0,1,0,1,0,0,1,1,0],
"m1":[12.4, 30,9.4,14,19,20,34,31,16], 'm2':[34,36,22,16,22,27,42,65,13], 'label':[0,0,1,1,0,1,1,1,0]})
test.head()
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.metrics import accuracy_score
X = test.drop('label', axis=1)
y = test.label
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
rf_params = {'n_estimators': [100, 200],
'max_features': ['auto', 'sqrt'],
'max_depth': [10, 50],
'min_samples_split': [2, 20]}
cv=[(slice(None), slice(None))]
rf_clf = GridSearchCV(RandomForestClassifier(random_state=42),rf_params, n_jobs=-1, verbose=2, cv=cv)
rf_clf.fit(X_train, y_train)
#best parameters of model
print(rf_clf.best_params_)
#make predictions
rf_pred = rf_clf.predict(X_test)
print('Accuracy', accuracy_score(rf_pred, y_test))
The GridSearchCV part shows me:
GridSearchCV(cv=[(slice(None, None, None), slice(None, None, None))],
estimator=RandomForestClassifier(random_state=42), n_jobs=-1,
param_grid={'max_depth': [10, 50],
'max_features': ['auto', 'sqrt'],
'min_samples_split': [2, 20],
'n_estimators': [100, 200]},
verbose=2)
So this method works.
Here I used random forest, because in my own experience, random forest is in most cases very good. In big datasets, the SVC takes too much time.
PS: Before I forget, I changed the gender into numbers. You can use one-hot encoding for that, or CatBoost, which can do this automatically. But with CatBoost you get different results in comparison with RF or other algorithms, so I prefer to convert gender into numbers. |
H: Conv1D layer input and output
Consider the following code for Conv1D layer
# The inputs are 128-length vectors with 10 timesteps, and the batch size
# is 4.
input_shape = (4, 10, 128)
x = tf.random.normal(input_shape)
y = tf.keras.layers.Conv1D(32, 3, activation='relu',input_shape=input_shape[1:])(x)
print(y.shape)
(4, 8, 32)
It has been given that there are 10 vectors, each of length 128. Then how is the output of shape (8, 32)?
If we apply a filter of size 3, we will then get a vector of length 126, if stride is 1. But, I cannot see 126 anywhere in the output.
How to understand the shapes of input and output?
AI: As described on the linked page, 128 is the dimensionality of each vector (i.e. the number of input channels), and 10 is the number of timesteps.
8 is the resulting number of timesteps after applying the filter of size 3 along the time axis of the initial 10 timesteps (with stride 1 and no padding: 10 - 3 + 1 = 8). The convolution slides over the timesteps, not over the 128 channels, which is why 126 does not appear anywhere; 32 is simply the number of filters.
In order to make it clearer, let's visualize the 1D convolution (source):
The missing parts in the picture are the batch size (4) and the output time length (8). |
H: How can I preprocess text to feed into a SVM?
I am using an IMDB dataset which contains reviews of the movies in the column text and the rating 0 or 1 in the column label. I am preprocessing the text using Tfidf using sklearn.
The code for the above statement
from sklearn.feature_extraction.text import TfidfVectorizer
vectorizer=TfidfVectorizer()
X = vectorizer.fit_transform(df_train['text'])
Y = vectorizer.transform(df_test['text'])
When I am trying to use the data for an SVM, using a linear kernel and then fitting it into the model using
classifier_linear = svm.SVC(kernel='linear')
classifier_linear.fit(X, df_test['label'])
I am getting the error
ValueError: Found input variables with inconsistent numbers of samples: [40000, 5000]
df_train is of the shape (40000,2)
df_test is of the shape (5000,2)
I am able to overcome this problem by using only 5000 values of df_train using
df_train.loc[:4999,'text']
but this defeats the purpose of having a bigger training dataset.
My question is how can I use the training dataset that will retain it's number of values?
AI: X needs to hold the features for your model and Y needs to be the target variable.
As you mentioned, you are using an IMDB dataset, so all the features which you want your model to use (the vectorized text) will be stored in the X variable, whereas the 'label' column will be stored in the Y variable.
Instead, try this code:
X = vectorizer.fit_transform(df_train['text'])
Y = df_train['label']
classifier_linear = svm.SVC(kernel='linear')
classifier_linear.fit(X, Y)
Another thing you did wrong here was that you passed the target variable of your testing data while fitting on the training features.
You train your model on your training data, and only when you are confident in your model do you evaluate it on the testing data: transform it with the same vectorizer (vectorizer.transform(df_test['text'])) and compare the predictions against df_test['label'].
Concepts which can help you: Cross-Validation, train_test_split. |
H: Statsmodel logit with sample weights
Using sklearn I can consider sample weights in my model, like this:
from sklearn.linear_model import LogisticRegression
logreg = LogisticRegression(solver='liblinear')
logreg.fit(X_train, y_train, sample_weight=w_train)
Is there some clever way to consider sample weights also in the Logit method of statsmodels.api?
import statsmodels.api as sm
logit = sm.Logit(y, X)
AI: It seems that there is a way of using sample weights, but it requires a little more work than just passing a single argument; see this stackoverflow answer.
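For illustration, a minimal sketch of one such route: statsmodels' Logit does not expose a sample-weight argument directly, but a logistic regression can be fit as a binomial GLM, which does accept weights (whether freq_weights or var_weights matches sklearn's sample_weight semantics depends on what your weights represent):
import statsmodels.api as sm
X_const = sm.add_constant(X_train)
model = sm.GLM(y_train, X_const, family=sm.families.Binomial(), freq_weights=w_train)
result = model.fit()
print(result.summary())
The fitted coefficients are on the same log-odds scale as those of sm.Logit. |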
H: cost-complexity-pruning-path with pipeline
I'm using Kaggle's Titanic set. I'm using pipelines and I'm trying to prune my decision tree, and for that I want the cost_complexity_pruning_path. The last line of code produces the error:
ValueError: could not convert string to float: 'male'
Do you know what I'm doing wrong? I have looked at Sklearn: applying cost complexity pruning along with pipeline but that doesn't seem to be helping in my case
cat_vars = ['Sex','Embarked']
num_vars = ['Age']
num_pipe = Pipeline([('imputer', SimpleImputer(strategy='mean')),('std_scaler', StandardScaler())])
cat_pipe = Pipeline([('imputer', SimpleImputer(strategy='most_frequent')),('ohe', OneHotEncoder())])
col_trans = ColumnTransformer([('numerical', num_pipe, num_vars),('categorical', cat_pipe, cat_vars)] ,remainder='passthrough')
final_pipe = Pipeline([('column_trans', col_trans), ('tree', DecisionTreeClassifier(random_state=42))])
final_pipe.fit(X_train, y_train)
path = final_pipe.steps[1][1].cost_complexity_pruning_path(X_train, y_train)
AI: Because cost_complexity_pruning_path refits the tree model on the data you provide before doing the pruning (source), you need to preprocess the data first. So this should do it:
X_preproc = final_pipe[:-1].transform(X_train)
path = final_pipe.steps[-1][1].cost_complexity_pruning_path(X_preproc, y_train) |
H: How match output (pred value) to input value
I'm working with data (with 4 columns, among them p (product) and M (name of the store)). I want to predict the demand for each store, and for that I used SVR on the data with this formulation:
dfn = pd.get_dummies(df)
x = dfn.drop(["demand"],axis=1)
y = dfn.demand
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler(feature_range=(0,1))
dfn = scaler.fit_transform(dfn)
.
.
.
from sklearn.metrics import r2_score
pred = regressor.predict(testX)
SVM_R2 = print('r2= ' +str(r2_score(testY,pred)))
print(pred)
# array example is between 0 and 1
array = np.array(pred)
#scaled from 200 to 800
minimo = 200
maximo = 800
output=array * minimo + (maximo - minimo)
print(output)
df2=pd.DataFrame(output)
df2.to_excel(r'/content/Book1.xlsx', index = False)
and now I get the output of this prediction. My question is, how can I match these outputs to the inputs, or how can I find which demands are related to each market?
AI: I used the following code to get the output (with the help of Oxbowerce).
df2=pd.DataFrame(testX,columns=['p','M','Date'])
df3=pd.DataFrame(pred,columns=['pred'])
df4=pd.concat([df2,df3],axis=1)
df4.to_excel(r'/content/Book1.xlsx', index = False)
I saved the output in an Excel file. You can see it in the picture below.
H: Normalized 2D tensor values are not in range 0-1
The function below takes in a 2D tensor and normalizes it using broadcasting. The issue is that I expect all values to be in the range 0-1, but the result has values outside this range. How can I get all values in the 2D tensor into the range 0-1?
def torch_normalize(tensor_list):
means = tensor_list.mean(dim=1, keepdim=True)
stds = tensor_list.std(dim=1, keepdim=True)
normalized_data = (tensor_list - means) / stds
return normalized_data
INPUT
tensor_list=tensor([[-5.6839, -7.5829, -7.2277, -6.5066, -8.4702, -7.9844, -5.6841, 1.8570,
1.6170, -3.7592, -4.4140, -0.4981, 0.2501, 5.8463, 1.8897, -1.3968,
-5.5402, -2.4561, -5.6819]])
Normalized result
tensor([[-0.5981, -1.0615, -0.9748, -0.7988, -1.2780, -1.1594, -0.5981, 1.2420,
1.1835, -0.1284, -0.2882, 0.6673, 0.8499, 2.2155, 1.2500, 0.4480,
-0.5630, 0.1896, -0.5976]])
AI: The normalisation you do does not re-scale to $[0,1]$ range! It normalises to have mean $0$ and std $1$ instead.
To scale the tensor to be in $[0,1]$ range you should subtract min value and divide by absolute max-min value. |
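A minimal sketch that keeps the same row-wise broadcasting pattern as your function:
import torch

def torch_min_max_scale(tensor_list):
    # Rescale each row to [0, 1]: subtract the row minimum and divide by the row range
    mins = tensor_list.min(dim=1, keepdim=True).values
    maxs = tensor_list.max(dim=1, keepdim=True).values
    return (tensor_list - mins) / (maxs - mins)   # note: breaks if a row is constant (max == min)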
H: Classification Threshold Tuning with GridSearchCV
In Scikit-learn, GridSearchCV can be used to validate a model against a grid of parameters. A short example for grid-search cv against some of DecisionTreeClassifier parameters is given as follows:
model = DecisionTreeClassifier()
params = [{'criterion':["gini","entropy"],"max_depth":[1,2,3,4,5,6,7,8,9,10],"class_weight":["balanced"]}]
GSCV = GridSearchCV(model,params,scoring="f1_micro")
GSCV.fit(X_train,y_train)
GSCV.best_params_
Now, I am only concerned with binary classification. It is the case for many algorithms that they compute a probability score, and set the decision threshold at 0.5. My question is the following: If I want to consider the decision threshold as another parameter of the grid search (along with the existing parameters), is there a standard way to do this with GridSearchCV? For instance, something like the last parameter "decision_threshold" in the following tunegrid would be ideal:
params = [{'criterion':["gini","entropy"],"max_depth":[1,2,3,4,5,6,7,8,9,10],"class_weight":["balanced"], "decision_threshold": [0.1,0.2,...,0.9]}]
Needless to say, I am not interested in a particular solution working only for DecisionTreeClassifier. Instead, a general solution for any classifier that uses a probability decision threshold that can be tuned. Preferably, I would like to keep my current GridSearchCV if possible.
AI: As far as I know, you cannot add the model's threshold as a hyperparameter but to find the optimal threshold you can do as follows:
Run the standard GridSearchCV, but use roc_auc as the scoring metric:
model = DecisionTreeClassifier()
params = [{'criterion':["gini","entropy"],"max_depth":[1,2,3,4,5,6,7,8,9,10],"class_weight":["balanced"]}]
GSCV = GridSearchCV(model,params,scoring="roc_auc")
GSCV.fit(X_train,y_train)
GSCV.best_params_
best_model = GSCV.best_estimator_
Once you have the best hyperparameter set, you can obtain the threshold that maximizes Youden's J statistic (tpr - fpr) on the ROC curve as follows:
import numpy as np
from sklearn.metrics import roc_curve
preds = best_model.predict_proba(X_train)[:,1]
fpr, tpr, thresholds = roc_curve(y_train, preds)
optimal_idx = np.argmax(tpr - fpr)
optimal_threshold = thresholds[optimal_idx]
This threshold maximizes the difference between the true positive rate and the false positive rate, i.e. it gives the best trade-off between the two.
H: Intuitive explanation of Adversarial machine learning
How would you explain Adversarial machine learning in simple layman terms for a non-STEM person? What are the main ideas behind Adversarial machine learning?
AI: Consider a game being played between two people, for simplicity, we'll assume this game is distinguishing a true picture of a panda vs. a fake picture of a panda. The first player will take the painting and show it to the second player, if the second player guesses whether it is a fake correctly, they receive a reward, if not they do not. Both players are playing the game with the goal of maximizing their reward.
To go a little deeper but make it slightly more relevant to the context of Adversarial ML. We can further assume that both players start from 0 knowledge of pandas. You might imagine that player 1 just throws random colors at a canvas and tries to convince player 2, and player 2 just randomly guesses, slowly building their intuition for what a Panda is.
After several hours/days/years, we might find that both players are extremely skilled at drawing pandas and identifying fake pandas, respectively.
This is really what adversarial ML is about in a nutshell. The goal is to have two agents with competing rewards, where their optimal solution is at some form of a mixed strategy Nash equilibrium. |
H: Metric for label imbalance
I'm looking for a metric that can be used to quantify how imbalanced the labels are in a dataset.
I'm not looking for a strategy to solve the imbalance problem, I just want to present how imbalanced my dataset is. I've computed the ratio of the most frequent and least frequent labels which is probably an ok way of doing it but I'm sure there's a more robust way?
AI: You are looking for Entropy. The lower the entropy of the label distribution, the more imbalanced it is; a perfectly balanced dataset has maximum entropy. You can use this function for calculating it.
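For instance, a minimal sketch of such a computation with scipy (the normalization to [0,1] is an extra convenience I'm adding, not part of the linked function):
import numpy as np
from scipy.stats import entropy

def label_balance(labels):
    # Normalized entropy of the label distribution: 1 = perfectly balanced,
    # values close to 0 = extremely imbalanced (needs at least 2 classes)
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return entropy(probs) / np.log(len(probs))

print(label_balance([0] * 95 + [1] * 5))   # ~0.29, quite imbalanced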
H: LinearRegression with fixed slope parameter
I have some data $(x_{1},y_{1}), (x_{2},y_{2}), ..., (x_{n},y_{n})$, where both $x$ and $y$ represent real numbers (float). I want use Scikit-learns LinearRegression model to fit a model of the form:
$y_{i} = b_{0} + b_{1}x_i + e_{i} $
Typically, I know that OLS is used to compute the parameters $b_{0}, b_{1}$. However, in my case, I happen to know that $b_{1}=c$, so I only want to fit $b_{0}$. Is there a way to force scikit-learn to use $b_{1}=c$ as the slope and only estimate the intercept $b_0$, or is a custom class necessary?
AI: You can just compute: $\hat{b}_0 = \operatorname{mean}(y-cx)$ |
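A minimal numpy sketch with made-up example data:
import numpy as np

c = 2.0                                # the known, fixed slope
x = np.array([0.0, 1.0, 2.0, 3.0])     # made-up data
y = np.array([1.1, 2.9, 5.2, 6.8])

b0 = np.mean(y - c * x)                # least-squares intercept with the slope fixed
print(b0)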
H: Interpretation of Autocorrelation plot
I am trying to understand better how to read the autocorrelation plot here for a timeseries data.
I ran the following code and got the output as a chart show below.
from pandas.plotting import autocorrelation_plot
autocorrelation_plot(df["y"])
Here y is the dependent variable
Should I derive the following conclusions
There are no significant autocorrelations.
The data is random.
Most of the correlations (except for 2 lags) fall within 95% confidence limits
This timeseries is not worth forecasting
Please help me check whether my understanding is right.
AI: To address your points:
There are no significant autocorrelations
The correlation is low (~0.25), but there are significant autocorrelations.
The data is random & most of the correlations (except for 2 lags) fall within 95% confidence limits
The confidence intervals are used to show which autocorrelations are significant. As you rightly observed, a couple peaks jump out of this region and this tells us that these few correlations are statistically significant, the rest is random. This post may be helpful here.
This timeseries is not worth forecasting
As per the previous point, there are a couple of statistically significant weak correlations in this dataset. But they are not strong, so a periodicity based forecasting model probably wouldn't be very accurate. |
H: Multiple Regression, Classification and Boundary Points
I have two gangs which are committing crimes, and I want to classify them.
Let's say I'm looking for a regression function:
M(x1, x2) = w1x1 + w2x2 + w3
Now I have found all three parameters w1, w2, w3.
Now I want to do classification. I get some boundary points which look like a line and separate the two classes from each other. Should I do another regression over those boundary points so that I have an exact line for my separation?
Because, let's say I pick the point (5,3): I want to know whether it is more likely that the crime was committed by Gang A or B. But I just have some boundary points to separate them. Should I use them for a regression?
AI: I think you want a clustering algorithm rather than regression. You will have a decision boundary between clusters of data points which will determine whether a particular point, e.g. (5,3), belongs to group (cluster) A or group (cluster) B.
Fit your clustering model in the x1 vs x2 feature space. Take the image below: we have x1 and x2 as the x and y axes, and the black lines are the decision boundaries, essentially the model.
You can of course just cluster between 2 groups also, as in your case.
You can checkout different clustering algorithms here |
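For instance, a minimal scikit-learn sketch (the crime locations below are made up):
import numpy as np
from sklearn.cluster import KMeans

# Made-up (x1, x2) crime locations
points = np.array([[1.0, 1.2], [1.1, 0.9], [5.2, 3.1], [4.8, 2.7], [5.0, 3.3]])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print(kmeans.predict([[5, 3]]))   # which cluster (gang) the point (5, 3) falls into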
H: What is the opposite of baseline?
I have created a prediction model and on the one hand I have to compare it with other baseline models, and on the other hand, I have to compare it with the ideal approach (supported by additional data), so I would like to know how I can call it (antonym of baseline) in the research paper.
AI: The term is Oracle.
Some references:
SO question describing the term
Scientific articles related to machine learning using the term |
H: What is the advantage of a tensorflow.data.Dataset over a tensorflow.Tensor?
I have my own input data class. It has x and y as well as test and train values (1 Tensor for each combination). I noticed there is a Dataset class built in to TensorFlow. What is the advantage of this class over a regular Tensor? Is it mainly around handling large datasets / laziness? It doesn't appear to have features tailored for x vs y data, or test vs train. All my data fits into memory so I am not clear it would be beneficial to use the built in class over my current one. Of course, the first assumption is it would be foolish not to use the built in class.
AI: The main advantage is in domains where you can't fit all of your data into memory.
However, I've seen improvements in performance even in cases where I have all my data into memory. I think two reasons contribute to this:
One is caching, where some operations (e.g. a mapping op) will be cached and performed only in the first epoch. This, obviously, is applicable if you have such a function.
Another one is prefetching. While the model is being trained on a batch on the GPU, the CPU loads and prepares the next batch. This can help save a lot of time.
Some other capabilities are allowing for the vectorization of user defined functions (e.g. for data augmentation) and their parallelization.
You can take a look at some benchmarks here. They are a bit unrelated, as they refer to cases where the dataset isn't all loaded into memory, but they might interest you nevertheless. |
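For illustration, a minimal sketch of such a pipeline with dummy in-memory data (the exact chaining order is just one reasonable choice):
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 10).astype("float32")   # dummy features
y = np.random.randint(0, 2, size=(1000,))        # dummy labels

dataset = (
    tf.data.Dataset.from_tensor_slices((x, y))
    .cache()                           # cache any mapped preprocessing after the first epoch
    .shuffle(buffer_size=1000)
    .batch(32)
    .prefetch(tf.data.AUTOTUNE)        # prepare the next batch while the GPU trains
)                                      # (tf.data.experimental.AUTOTUNE on older TF versions)
# model.fit(dataset, epochs=10)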
H: Validation loss diverging away from the training loss
I used XLNet for a sentiment classifier to determine whether a comment is positive or negative. I was able to get good results.
But when I plotted the validation and training losses I saw this
I think this means that the model is overfitting? But I am not exactly sure. If there are any suggestions I would really appreciate it.
AI: This kind of overfitting is typical when finetuning large LMs.
The usual approaches to "avoid" it are:
Early stopping: select the checkpoint with the best validation loss (see the sketch below).
Random restarts: train multiple times from scratch, and select the model with the best validation performance. |
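For instance, a minimal sketch of the early-stopping option, assuming a Keras-style training setup (the same idea applies to a manual PyTorch/Transformers loop by tracking the best validation loss yourself):
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=3, restore_best_weights=True  # keep the best checkpoint
)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           epochs=50, callbacks=[early_stop])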
H: How to specify output_shape parameter in Lambda layer in Keras
I don't understand how to specify the output_shape parameter in the Lambda layer in Keras/Tensorflow. The documentation says:
output_shape: Expected output shape from function. This argument can
be
inferred if not explicitly provided. Can be a tuple or function. If a
tuple, it only specifies the first dimension onward;
sample dimension is assumed either the same as the input: output_shape = (input_shape[0], ) + output_shape or, the input is None and
the sample dimension is also None: output_shape = (None, ) + output_shape
If we use a tuple how should I interpret these two expressions?
output_shape = (input_shape[0], ) + output_shape
and
output_shape = (None, ) + output_shape
AI: Let's say you pass in output_shape as a tuple (50, 50, 10), where we can call the values (height, width, channels), to the lambda layer:
your_layer = tf.keras.layers.Lambda(lambda x: x, output_shape=(50, 50, 10))
The part of the documentation:
If a tuple, it only specifies the first dimension onward;
means that the batch dimensions itself is simple carried forward, unchanged.
If you have e.g. batch_size=3 during training, the incoming tensor to your_layer might be (3, n, p, q), where n, p and q could be anything, but the layer is expected to produce a shape (3, 50, 50, 10). So the 0 dimension remains unchanged, and we have concatenated it with your output_shape:
(3,) + (50, 50, 10) -> (3, 50, 50, 10)
This corresponds to the expression: output_shape = (input_shape[0], ) + output_shape, so we see that input_shape is the true shape of the incoming batch tensor during training, as we only took the batch dimension to produce the layer's outgoing batch tensor.
For the second expression it is really just the same thing, but if you haven't provided a batch shape, Tensorflow & Keras represent that as something that could be anything, and store it as None. So in that case you get:
(None,) + (50, 50, 10) -> (None, 50, 50, 10) |
H: is this problem a multiclass case?
I'm trying to classify my textile design patterns
(let's just think of it as medieval painting)
what I understand of "multilabel classification" is like this:
it outputs multiple possible results out of all those classes (let's say the classes are some artists, styles and techniques)
so one example could be
possible classes: leonardo, artist1, artist2, baroque, renaissance, whatever, oil, dessin, watercolor
prediction of img1: leonardo da vinci, artist1, renaissance, oil painting, watercolor
but what I want to do is more like:
- possible classes:
  - artist: leonardo, artist1, artist2
  - style: baroque, renaissance, whatever
  - technique: oil, dessin, watercolor
prediction: {artist: ['leonardo', 'artist1'], (maybe they drew it together)
style: ['renaissance'],
technique: ['oil', 'watercolor']}
classes are more strictly categorized, but there could also be multiple results from one category of class.
I'm not even sure what it should be called, and I'm having a hard time finding articles about it.
Can someone please suggest something?
AI: It is not a multiclass problem.
It is a multilabel problem, since you have the groups of classes you want to get. One option is to just let the network predict multiple classes over all labels and segregate them into the groups afterwards. In this case, you will have a single classification head.
The other way to do it is to separately predict the classes for artist, style and technique. In this case, you will have three classification heads (see the sketch below).
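As an illustration of the three-head option, a minimal Keras sketch (the input shape, backbone and class counts are made up; sigmoid heads allow several labels per group, e.g. two artists):
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = layers.Input(shape=(128, 128, 3))            # made-up image size
x = layers.Conv2D(32, 3, activation="relu")(inputs)   # stand-in backbone
x = layers.GlobalAveragePooling2D()(x)

artist_out = layers.Dense(3, activation="sigmoid", name="artist")(x)
style_out = layers.Dense(3, activation="sigmoid", name="style")(x)
technique_out = layers.Dense(3, activation="sigmoid", name="technique")(x)

model = Model(inputs, [artist_out, style_out, technique_out])
model.compile(optimizer="adam", loss="binary_crossentropy")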
H: How does the equation "dW = - (2 * (X^T ).dot(Y - Y_hat)) / m" comes in Linear Regression (using Matrix + Gradient Descent)?
I was trying to code linear regression in Python using the matrix multiplication method with gradient descent, and followed some code where there was no mention of what the loss is, just the following code per iteration:
y_hat = X.dot(W) + b
dW = - (2 * (X^T ).dot(Y - Y_hat)) / m # how does the minus and matrix multiplications are used instead of Summation?
db = - (2 * np.sum(Y - Y_hat)) / m # np is numpy
W = W - lr * dW # update weights
b = b - lr * db
What I know from the code is that dW is the derivative of the weight matrix per iteration, X^T is the transpose of the X features, Y contains the original values and Y_hat the predicted values from the formula X.dot(W)+b.
What I want to know is where dW = - (2 * (X^T ).dot(Y - Y_hat)) / m comes from. Even with the MSE loss, as given in this link in equation 1.4, it should be something else.
Can someone please elaborate how the values of dW and db are calculated here?
The whole Python code for the linear regression is given here.
AI: dW and db are simply the derivatives of the loss function with respect to the weights and the bias. Given the loss function
$J = \frac{1}{m} \Sigma_{i=1}^{m}(y_i - h(x_i))^2$
the derivatives of the loss to the weights (dW) and bias are equal to
$\frac{\partial}{\partial W} J = -\frac{2}{m} \Sigma_{i=1}^{m}(y_i - h(x_i)) * x_i$
$\frac{\partial}{\partial b} J = -\frac{2}{m} \Sigma_{i=1}^{m}(y_i - h(x_i))$
As you can see, these equations align with the code you provided. Equation 1.4 from the link you provided is for $\theta_0$, i.e. the bias, which is the same as the code for db (with the only difference being the minus sign, causing by the fact that $y_i$ and $h(x_i)$ are swapped in the loss function between the first and second article). |
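To connect this with the matrix form in the code: stacking all $m$ samples into the matrix $X$ turns the summation over $i$ into a single matrix product,
$\frac{\partial}{\partial W} J = -\frac{2}{m} X^{T}(Y - \hat{Y})$
which is exactly dW = - (2 * X.T.dot(Y - Y_hat)) / m in vectorized NumPy. db keeps an explicit np.sum because the bias is a scalar, so its gradient is just the (scaled) sum of the residuals.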
H: How to interpret training and testing accuracy which are almost the same?
Note - I have read this post but still don't understand
I have a Naive Bayes classifier. When I input my training data to test the accuracy, I get 63.05%. When I input my test data, the accuracy is 65.00%.
Why are the training and test accuracy almost identical? For information, my data is split in 70/30. Does this mean that there is no overfitting?
AI: Why are the training and test accuracy almost identical?
Nearly identical performance on the training set and test set is a good outcome, it means the model is doing what it's supposed to do. To give an intuitive comparison:
The performance on the training set is equivalent to how well a student can redo the exercises which have been solved by the teacher during class. The student might just have memorized the answers by heart, so it's not a proof that they understand.
The performance on the test is equivalent to how well the student can solve some similar exercises that they haven't seen before in a test. This is a much better indication that the student truly understands the topic.
Does this mean that there is no overfitting?
Yes, it proves that there's no overfitting. To keep with my comparison, overfitting is equivalent to memorizing the answers.
However there can be other problems which bias the result:
The performance on the test set is 2 points higher than the performance on the training set. This probably means that the test set is very small, because with a large enough sample the test performance wouldn't be higher. If the test set is too small, the performance estimate is less reliable (any statistic obtained on a small sample is less reliable).
Accuracy can be a misleading evaluation measure. It only counts the proportion of correct predictions, so if a large proportion of instances belong to the same class then the classifier can just predict any instance as this class and obtain high accuracy. For example here if the majority class is around 63-65%, then it's possible that the classifier didn't learn anything at all. Looking at precision/recall/F1-score gives a more accurate picture of what happens.
[edit] Important note: as Nikos explained in a comment below, my answer assumes that you have a proper test set, i.e. that the train and test sets are sufficiently distinct from each other (otherwise there could be data leakage and the test set performance would be meaningless). |
H: Heat map and correlation among variables
I would have a question on heat map and correlation among variables.
I created this heat map, looking at possible correlation among variables and target. I got very small values.
I wanted to set a small threshold, e.g., 0.05, for selecting features.
Do you think it makes sense, or should I exclude all of them?
AI: From the info you provide, it seems you are carrying feature selection based on the correlation between your predictor variables and the target.
This is correct as a type of feature selection (see here) in the family of univariate filter selection, although not the only one. It is fast and intuitive, although you can have a look at other methods. You might also be interested in:
variance threshold selection (also per input feature, a univariate filter method): it assumes that higher variance in a feature's values could mean more predictive power (see the sketch at the end of this answer)
sequential backward selection (look here): it comes with a higher performance cost, but features are judged in subsets (not independently as above), and it is fine if you don't have many features (as seems to be the case)
There are many other strategies for feature selection (you might want to check this source)
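For example, a minimal scikit-learn sketch of the variance-threshold option (the data and threshold here are made up):
import numpy as np
from sklearn.feature_selection import VarianceThreshold

X = np.random.rand(100, 5)
X[:, 0] = 1.0                                # a constant feature that should be dropped
selector = VarianceThreshold(threshold=0.01)
X_reduced = selector.fit_transform(X)
print(selector.get_support())                # mask of the features that were kept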
H: Looking for binary class datasets with high class imbalance, that also have intra-class imbalance in the minority class
Newbie question alert...
For a college project I want to compare a few variants of SMOTE in terms of how much they improve classification of the minority class, over using random oversampling.
I have a specific interest in the idea that the minority class may contain small disjuncts that may themselves exhibit imbalance within the class.
I am already looking at the credit card fraud dataset on Kaggle (https://www.kaggle.com/mlg-ulb/creditcardfraud)
Can anyone please point me towards other datasets that have the following kinds of properties:
a reasonably large number of examples (ideally at least a few thousand)
have only two class labels
are highly imbalanced, i.e. the minority class is severely under-represented
ideally the minority examples would have some intra-class imbalance too
Or even better, is there any kind of good search tool out there for finding datasets based on these kinds of characteristics?
AI: The imblearn.datasets package (documentation is here) has a function called fetch_datasets() which is described as:
fetch_datasets allows to fetch 27 datasets which are imbalanced and binarized
I do not know them in sufficient depth to know whether they meet all four of the criteria you've listed, but these may be a good first place to start with. |
H: mean and variance of a dataset
I have a simple question. Please see the below screenshot :
It is from a midterm exam from a university : https://cedar.buffalo.edu/~srihari/CSE555/exams/midterm-solution-2006.pdf
My question is: how can the means be positive? I am asking because the class samples are all negative, so I would expect the mean to also be negative.
AI: It is a typo in the solution.
The author corrects it by correctly computing the Bayes discriminant in the next step. |
H: Quick question on basic Basic concept of experience replay
Due to my admittedly newbie understanding of the field, I'm about to ask a dummy question.
When sampling batches, for example from an experience replay buffer which contains a number of samples, after getting n (the batch size) loss values through forward propagation, which way will the weight updates follow:
Update all weights based on all losses separately, like iterating through that n losses to update
Update all weights based on the mean (or maybe standard deviation) of all losses
AI: The 2nd one is the correct one. We update the weights according to the mean squared error between the real value (in the case of DQN it is a bootstrapped value) and the current value prediction, obtained for every sample ($s,a,R_t$) in the minibatch. In other words, your network is going to get "corrected", for every prediction it has made, by the difference between the estimated and ground-truth values.
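As a minimal sketch (the tensors below are random placeholders; in practice q_pred would come from your Q-network and q_target from the bootstrapped targets):
import torch
import torch.nn.functional as F

q_pred = torch.randn(32, requires_grad=True)   # stand-in for the network's predictions on a batch of 32
q_target = torch.randn(32)                     # stand-in for the bootstrapped target values

loss = F.mse_loss(q_pred, q_target)            # one scalar: the mean over the whole minibatch
loss.backward()                                # a single backward pass / weight update per batch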
H: Difference between zero-padding and character-padding in Recurrent Neural Networks
For RNN's to work efficiently we vectorize the problem which results in an input matrix of shape
(m, max_seq_len)
where m is the number of examples, e.g. sentences, and max_seq_len is the maximum length that a sentence can have. Some examples have a smaller length than this max_seq_len. A solution is to pad these sentences.
One method to pad the sentences is called "zero-padding". This means that each sequence is padded by zeros. For example, given a vocabulary where each word is related to some index number, we can represent a sentence with length 4,
"I am very confused"
by
[23, 455, 234, 90]
Padding it to achieve a max_seq_len=7, we obtain a sentence represented by:
[23, 455, 234, 90, 0, 0, 0]
The index 0 is not part of the vocabulary.
Another method to pad is to add a padding character, e.g. "<<pad>>", in our sentence:
"I am very confused <<pad>>> <<pad>> <<pad>>"
to achieve the max_seq_len=7. We also add "<<pad>>" in our vocabulary. Let's say it's index is 1000. Then the sentence is represented by
[23, 455, 234, 90, 1000, 1000, 1000]
I have seen both methods used, but why is one used over the other? Are there any advantages or disadvantages comparing zero-padding with character-padding?
AI: If implemented properly, there should be no difference. The very first thing that happens with the indices is corresponding embeddings are loaded. From this perspective, there is no difference between having the pad embedding at the 0th or at the 1000th position.
When you use padding, you should always do masking on the output and other places where it is relevant (i.e., when computing the attention distribution in attention), which ensures no gradient gets propagated from the "non-existing" padding positions. |
H: How can i increase the memory of Jupyter?
What I have:
I have a data set (35989 rows × 16109 columns) and it is unfortunately confidential.
But I receive this error message:
Unable to allocate 4.32 GiB for an array with shape (16109, 35994) and data type float64
How can i solve this problem?
AI: Assuming you cannot add more memory to your computer (or free up some of the memory), you could try 2 general approaches:
Read only some of the data into memory e.g. a subset of the rows or columns.
reduce the precision of the data from float64 to float32.
From your error, it looks like you are loading data into a numpy array, so somewhere in your code, you would need to add this argument to the array creation step e.g. np.array(your_data, dtype=np.float32).
EDIT:
I don't think this is a rate limiting problem, or a max_buffer_size issue from Tornado (the library behind Jupyter).
You can try to see if your machine is actually running out of memory by using a tool called htop - just execute htop in a terminal (or first sudo apt install htop if it isn't already installed). That shows the total amount of memory (RAM) available on your machine, it looks something like this:
This example shows the machine has 16 cores and 62.5 Gb memory - 6.14Gb of that is occupied.
Watch that view while you run your code. If the memory bar becomes full before the crash, you know you ran out of RAM. |
H: Changing order of LabelEncoder() result
Assume I have a multi-class classification task. The labels are:
Class 1
Class 2
Class 3
After LabelEncoder(), the labels are transformed into 0-1-2.
My questions are:
Do the labels have to start from 0?
Do the labels have to be sequential?
What happens if I replace all label 0s with 3 so that my labels are 1-2-3 instead of 0-1-2 (This is done before training)
If the labels were numeric such as 10-100-1000, will I still have to use LabelEncoder() to encode them into 0-1-2?
AI: Do the labels have to start from 0?
No it doesn't matter where they start as long as they have distinct values.
Do the labels have to be sequential?
Well, it depends on the feature. For example, if you have features that express an order of magnitude, like small<big<vast, then yes, the order matters and they are called ordinal features; but if the feature's values represent, for example, countries, then there is no such thing as order, so one should probably use OneHotEncoder, so that the categories are equally distanced in space. (see here)
What happens if I replace all label 0s with 3 so that my labels are 1-2-3 instead of 0-1-2 (This is done before training)
Apart from the previous bullet, one should consider the type of model that will be used. For example, tree-based models like RandomForest work very well with categorical data, and the numerical value of a category could be arbitrary. But this is not the case for linear models.
Closing: if you want to convert a categorical feature to numerical values, you should consider two things: the feature's values (are they ordinal?) and the type of model.
P.S. To improve the performance of the model there are many ways to convert categorical to numerical features, like target encoding techniques, which have been shown to also improve tree-based classifiers, but perhaps this is a conversation for another time :)
H: RF regressor for probabilites
I am using sklearn multioutput RF regressor to learn statistics in my data. So my target contains several probabilities for the different features, and the sum of all these probabilities is one as they are fractions of how often the feature occurs.
The RF actually learns this property even though I have not enforced anywhere that the outputs should sum to one. I have also added a constant to my targets and the RF then learn that the outputs should sum to one plus that constant, so it is not some normalization.
I'm pretty sure I know how an RF regressor works, but I can't explain how it can learn such meta-features of my data. I would have expected the sum of my outputs to be somewhere around 1, not always exactly one.
Any ideas?
AI: This is indeed expected behavior, because of the way tree models handle multioutput problems. The nodes contain some number of samples, and the score for each output is the average of those samples' corresponding output. Since averaging commutes with sums, the property of summing to 1 is preserved. I'm not sure if this will help, but in symbols:
$$ \sum_{\text{output }i} p_i = \sum_{\text{output }i} \left(\operatorname*{avg}_{\text{sample }j}(p_i^j)\right) = \operatorname*{avg}_j \left(\sum_{\text{output }i} p_i^j\right) = \operatorname*{avg}_j 1 = 1.$$
Then for the entire forest, you're just applying another averaging, and so the property is again maintained. |
H: What is the input of LSTM network?
Hello I am trying to understand LSTMs but have a few problems:
What is the input? Since LSTM is seq2seq I would think it is a sequence of words, but in a Codecademy lesson it is mentioned that each sentence is represented as a matrix with a bunch of vectors containing 1 or 0 for the timestep -> sentence "I like Bobo": like = [0, 1, 0]. So what is the input now? The matrix or the sequence of words?
What is passed to the next LSTM cell after a previous prediction was false? Since the false prediction is noted in the hidden state, how does the network know whether previous predictions were false? Or does it even know when predicting the next step?
I am excited for the answers,
love Phiona.
AI: The input of an LSTM is a sequence of vectors. In your case, each of these vectors represents a word encoded as a one-hot vector. One-hot encoding is a way to express a discrete element (e.g. a word) numerically. Each one-hot vector is a vector of length $d$, where $d$ is the total number of words we can represent, and where all positions in the vector are 0 except the position associated with the represented word, which contains a 1.
The hidden state passed to the next LSTM cell is not the final binary prediction, but the dense numerical vectors we obtain before computing the binary prediction. |
H: How to select the best parameters for GridSearchCV?
I've created a couple of models during some assignments and hackathons using algorithms such as Random Forest and XGBoost and used GridSearchCV to find the best combination of parameters. But what I'm not able to understand is how to select those parameters for GridSearchCV. I randomly put the parameters such as
params = {"max_depth" : [5, 7, 10, 15, 20, 25, 30, 40, 50,100],
"min_samples_leaf" : [5, 10, 15, 20, 40, 50, 100, 200, 500, 1000,10000],
"criterion": ["gini","entropy"],
"n_estimators" : [10, 15, 20, 40, 50, 75, 100,1000],
"max_features" : ["auto", "sqrt","log2"]}
But how do I decide if I could select better parameters which might be computationally better as well? I can't use the same above parameters for a Random Forest Classifier every single time surely?
AI: That is indeed a drawback of the grid search strategy, since you must know in advance each of the possible combinations to try out, and that might not be optimal, neither for getting the best evaluation metric value nor in computational performance.
There are other interesting strategies that do not perform an exhaustive hyperparameter search, for instance random search or Bayesian tuning, which make the search more efficient (the second one being the "smarter" search strategy).
You can have a look at HyperOpt library with several optimization algorightms (see also this link for a practical use case), and more recently Keras released a nice keras tuner (which I love by the way).
You can also have a look at this answer for a worked out example on a XGB model using Hyperopt, and this one for using keras tuner.
You can also check the keras tuner wrapper for sklearn models: https://keras-team.github.io/keras-tuner/documentation/tuners/#sklearn-class |
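For instance, a minimal sketch of random search with scikit-learn's RandomizedSearchCV (the ranges below are illustrative, not recommendations):
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

param_dist = {
    "max_depth": randint(3, 50),           # sample integers from a range instead of a fixed list
    "min_samples_leaf": randint(1, 200),
    "n_estimators": randint(10, 500),
    "criterion": ["gini", "entropy"],
}
search = RandomizedSearchCV(RandomForestClassifier(), param_dist, n_iter=50, cv=5, n_jobs=-1)
# search.fit(X_train, y_train); search.best_params_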
H: How to run a saved tensorflow model in the browser?
After doing my hello world models, I would like to let them available at Github pages, which means that I need to serve the model only with static files. Is it possible?
All the tutorials I found requires nodejs or some backend
AI: You can find exactly that at this site, which offers a handwritten digit recognizer as a static site served from github pages.
Here you can find the article describing how the author did it. You can also have a look at the github repo or directly the html file to understand the how it works. The author also released the colab notebook used to train the model. |
H: LDA topic model has 0-weight topics, is that normal?
While experimenting with different number of topics for the Gensim implementation of LDA, I found that for a high number of topics, the output often consists of topics with all weights equal to zero. Is this an indication of an implementation mistake or is this normal and just an indication that I should use fewer topics?
AI: It's normal: LDA tries to maximize the likelihood of the data according to the parameters by finding the right probabilities for the parameters. Usually at the beginning increasing the number of topics allows the model to separate topics more precisely and therefore obtain a higher likelihood. But at some point (depending on the data), increasing the number of topics cannot help the model anymore because the topics are already separated to the maximum and using all the topics would actually decrease the likelihood.
So it's a sign that you don't need that many topics. Note that it doesn't mean that the number of "used" topics is optimal for the application, it's often a balance to find. |
H: Function growing faster for negative inputs than for positives
I am working on a regression problem where I want to model the loss function in a way that it "punishes" big errors much more than small errors (so I am in the realm of exponential functions), but also in a way that it punishes a negative error much more than a positive error.
So for example:
Prediction off by +4.0: is a problem, but still ok
Prediction off by +0.5: not a big deal
Prediction off by -0.5: is a problem, but still ok
Prediction off by -4.0: is a major problem
My problem is that I can't find a good function to describe this. x squared and the like do not have the higher values for negative inputs that I am looking for.
My best workaround for now is to just move the whole function to the right, (x-2)^2, but there must be something better?
AI: This should be possible using a piecewise exponential loss, something like this:
$
f(x) =
\begin{cases}
x^2 & x < 0\\
\lambda x^2 & x \ge 0
\end{cases}
$
with $0 < \lambda < 1$. A $\lambda$ of around 0.02 should roughly give you the scale you want. |
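A minimal NumPy sketch of this piecewise loss (the sign convention for the error and the λ value are assumptions to adapt to your setup):
import numpy as np

def asymmetric_squared_error(y_true, y_pred, lam=0.02):
    err = y_pred - y_true                     # adjust the sign convention to your definition of "negative error"
    return np.where(err < 0, err**2, lam * err**2).mean()

print(asymmetric_squared_error(np.array([0.0]), np.array([-4.0])))  # 16.0 -> heavily punished
print(asymmetric_squared_error(np.array([0.0]), np.array([4.0])))   # 0.32 -> mildly punished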
H: Replace part column value with value from another column of same dataframe
I have a dataframe with two columns:
Name DATE
Name1 20200126
Name2 20200127
Name#DATE# 20200210
I need to replace all the #DATE# with the data from the DATE column, and get something like this:
Name
Name1
Name2
Name20200210
How can I achieve this? I've tried things like this, without any good result..:
df_merged_tables["Name"].str.replace("#DATE#",merged_tables["DATE"])
Thanks!
AI: Your solution is close; maybe you just needed to add an apply.
Try:
df = pd.DataFrame({"Name":["Name1", "Name2", "Name#DATE#"], "Date":[20200126, 20200127, 20200210]})
df["NewColumn"] = df.apply(lambda row: row["Name"].replace("#DATE#", str(row["Date"])), axis = 1)
Outputs: |
H: Extracting Names using NER | Spacy
I'm new to NER and I've been trying to extract names using Spacy. Here's my code:
import spacy
spacy_nlp = spacy.load('en_core_web_sm')
doc = spacy_nlp(text.strip())
# create sets to hold words
named_entities = set()
money_entities = set()
organization_entities = set()
location_entities = set()
time_indicator_entities = set()
for i in doc.ents:
entry = str(i.lemma_).lower()
text = text.replace(str(i).lower(), "")
# Time indicator entities detection
if i.label_ in ["TIM", "DATE"]:
time_indicator_entities.add(entry)
# money value entities detection
elif i.label_ in ["MONEY"]:
money_entities.add(entry)
# organization entities detection
elif i.label_ in ["ORG"]:
organization_entities.add(entry)
# Geographical and Geographical entities detection
elif i.label_ in ["GPE", "GEO"]:
location_entities.add(entry)
# extract artifacts, events and natural phenomenon from text
elif i.label_ in ["ART", "EVE", "NAT", "PERSON"]:
named_entities.add(entry.title())
The model seems to have a decent accuracy with certain kinds of names. However it is unaware of how people’s names can differ around the world (not adapted to suit cultural differences). Is there a possible workaround to avoid this bias?
AI: The NER model performance on a particular text depends on which data it was trained with originally, and naturally the standard models (like en_core_web_sm) are trained with English data which doesn't contain a lot of names from non-US/UK origin (same for other kinds of entities like organizations or locations).
Better performance can be achieved by training your own model with your own labelled data, but that requires you (or somebody) to annotate a reasonably large sample of data manually. |
H: Tricky stacking models in keras
I'm trying to write a model with keras, that is built as shown below:
| +-----+
+->+------+ | |
+--->| NN |------>| |
| +------+ | |
| | | |
| +->+------+ | |
+--->| NN |------>| L |
| +------+ | S |------+-----> Output value
| | T | |
| ... | M | |
| | | | |
| +->+------+ | | |
+--->| NN |------>| | |
| +------+ | | |
| +-----+ |
| +-------+ |
+----| Delay |-------------------+
+-------+
I have several simple sequential models (marked as NN), that receive two numerical value as input, they calculate some other numeric values (one per each network). These values are passed to LSTM network, which produces a single value as an output and this value additionaly is passed to initial networks (possibly with some delay) as one of two inputs. I work with time series, so calculated final value is passed to network alongside with the next time series value.
I use LSTM to store some "state". It is not quite difficult to build separate "sub-models", but I don't realize, how to join them together as I need, i.e. how to make a final output to be passed to initial networks and how to stack them in the described way (not as a chain).
What I've found: I found keras.layers.Concatenate, but it seems not to be what I'm looking for... But maybe (I hope) I'm mistaken.
AI: You can use code of the kind below.
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, LSTM, Concatenate
from tensorflow.keras.models import Model
# input of first NN
input_l1 = Input(shape=(2,))
out_l1 = Dense(1)(input_l1)
# input is 2nd NN
input_l2 = Input(shape=(2,))
out_l2 = Dense(1)(input_l2)
# concat layer output shape will be (None, 2) because we concatenated 2 dense layer outputs
concat_vec = Concatenate()([out_l1, out_l2])
# we need 3d input to LSTM i.e. ( Batch_Size, no of time steps, feature space)
# We have 2 inputs so expanded dim to (None, 2, 1)
expanded_concat = tf.expand_dims(concat_vec, axis=2)
# LSTM
lstm_out = LSTM(15)(expanded_concat)
model = Model(inputs=[input_l1, input_l2], outputs=lstm_out) |
H: Comparing ML models to baselines
When comparing ML models with baseline or "dummy" models, are there best practices for building and comparing baselines?
I'm doing a binary classification task where 40% of the samples are class_0 (untreated class), and the other 60% are class_1 (treated/positive class).
I have two baselines: baseline_0 predicts randomly, and baseline_1 predicts class_1 every time.
Because the metrics are calculated relative to class_1, when baseline_1 predicts class_1 on every sample, it ends up with perfect recall (1.0), which inflates the f1 score. Does this mean this baseline model is not good for comparing to my experimental models and I should use baseline_0 instead, or that baseline_1 is good but that the f1 score is not good for making these comparisons?
AI: This depends on what you want to show.
When working with metrics you shouldn't just take the value as is, but see what each metric is telling you. baseline_1 isn't better/worse than baseline_0 because it has a higher/lower value in metric X. Both baselines give an interesting perspective on a given dataset, and if unsure I'd suggest keeping both.
A couple of notes:
when saying baseline, I will refer to the two baseline strategies that you mentioned in your post
I will use the accuracy metric for examples but what I'm saying is true for any metric.
Why use baselines?
People usually tend to see accuracy (or other measures) as absolute values. E.g. accuracy=0.9? "very good", accuracy=0.3? "very bad". This isn't true however, as metrics are influenced by the number of classes and the proportion of samples between them.
For example, an accuracy of 0.3 in a classification task with 1000 classes is arguably much harder to achieve than an accuracy of 0.9 on a binary classification task (assuming class balance in both cases).
Here is were baselines come in. They can show how much better a model is than a dump classification strategy.
How baselines help?
Baselines help by putting a lower bound on your metrics. For example an accuracy of 0.55 on a binary classification task is slightly better than random, but the same accuracy on a 10-class setting is much better. Baselines help quantify that and tell you how much better you are than predicting random or the most common values.
What effect do baselines have?
Now on to why keep both baselines:
The first baseline (i.e. random) helps show how metrics can be influenced by the number of classes in the dataset.
The second baseline (i.e. most common) helps show how metrics can be influenced by class imbalance.
How baselines actually help?
Let's say you have two models, one with an accuracy of 0.92 and another with an accuracy of 0.93. How much better is the second model than the first? This depends on the value of your baseline. If you have a baseline accuracy of 0.5 then both models are relatively strong and the difference is not that significant. If you have a baseline of 0.9 then the models aren't as strong and an improvement of that magnitude is more significant.
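As a side note, scikit-learn's DummyClassifier implements both of these baseline strategies; a minimal sketch with made-up labels matching your 40/60 split:
import numpy as np
from sklearn.dummy import DummyClassifier

y = np.array([0] * 40 + [1] * 60)        # 40/60 split as in the question
X = np.zeros((100, 1))                   # features are ignored by dummy models

random_baseline = DummyClassifier(strategy="uniform", random_state=0).fit(X, y)
majority_baseline = DummyClassifier(strategy="most_frequent").fit(X, y)
print(majority_baseline.predict(X[:5]))  # always predicts class 1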
H: Approach for training multilingual NER
I am working on multilingual (English, Arabic, Chinese) NER and I have run into a problem: how should I tokenize the data?
My training data provides a sentence and a list of spans for each named entity.
e.g.
[('The', 'DT'),
('company', 'NN'),
('said', 'VBD'),
('it', 'PRP'),
('believes', 'VBZ'),
('the', 'DT'),
('suit', 'NN'),
('is', 'VBZ'),
('without', 'IN'),
('merit', 'NN'),
('.', '.')]
[('你', 'PN'),
('有', '.'),
('没', 'AD'),
('有', '.'),
('用', 'VV'),
('过', '.'),
('其它', 'DT'),
('药品', 'NN'),
('?', 'PU')]
What is the best way to tokenize the input data? These are the main alternatives I am considering: word level, wordpiece level, BPE.
BPE does not work with Chinese and Arabic because of the unicode issue. I have doubts about the word level because I am not sure what counts as a word in Chinese.
What can you recommend?
AI: First, some clarification: BPE does work with Chinese and Arabic.
The only problem with Chinese is that there are no blanks between words, and therefore there is no explicit word boundary. In order to address that problem, normally you would segment words before applying BPE. For that, the typical approach is to use Jieba or any of its multiple ports to other programming languages. Other languages without blanks, like Japanese, may have their own tools to perform word segmentation.
Now, the answer:
Subword vocabularies are the norm nowadays. The norm also consists of finetuning one of the many pre-trained neural models available (BERT, XLNet, RoBERTa, etc), and the tokenization is imposed by the model you choose.
BPE is, in general, a popular choice for tokenization, no matter the language. Lately, the unigram tokenization is becoming popular also.
I suggest you take a look at the recent tner library, which builds on top of Huggingface Transformers to make NER very easy. |
H: What if My Word is not in Bert model vocabulary?
I am doing NER using a BERT model. I have encountered some words in my datasets which are not part of the BERT vocabulary, and I am getting an error while converting words to ids. Can someone help me with this?
Below is the code i am using for bert.
df = pd.read_csv("drive/My Drive/PA_AG_123records.csv",sep=",",encoding="latin1").fillna(method='ffill')
!wget --quiet https://raw.githubusercontent.com/tensorflow/models/master/official/nlp/bert/tokenization.py
import tensorflow_hub as hub
import tokenization
module_url = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/2'
bert_layer = hub.KerasLayer(module_url, trainable=True)
vocab_file = bert_layer.resolved_object.vocab_file.asset_path.numpy()
do_lower_case = bert_layer.resolved_object.do_lower_case.numpy()
tokenizer = tokenization.FullTokenizer(vocab_file, do_lower_case)
tokens_list=['hrct',
'heall',
'government',
'of',
'hem',
'snehal',
'sarjerao',
'nawale',
'12',
'12',
'9999',
'female',
'mobile',
'no',
'1155812345',
'3333',
'3333',
'3333',
'41st',
'3iteir',
'fillow']
max_len =25
text = tokens_list[:max_len-2]
input_sequence = ["[CLS]"] + text + ["[SEP]"]
print("After adding flasges -[CLS] and [SEP]: ")
print(input_sequence)
tokens = tokenizer.convert_tokens_to_ids(input_sequence )
print("tokens to id ")
print(tokens)
AI: The problem is that you are not using BERT's tokenizer properly.
Instead of using BERT's tokenizer to actually tokenize the input text, you are splitting the text in tokens yourself, in your token_list and then requesting the tokenizer to give you the IDs of those tokens. However, if you provide tokens that are not part of the BERT subword vocabulary, it will not be able to handle them.
You must not do this.
Instead, you should let the tokenizer tokenize the text and then ask for the token IDs, e.g.:
tokens_list = tokenizer.tokenize('Where are you going?')
Remember, nevertheless, that BERT uses subword tokenization, so it will split the input text so that it can be represented with the subwords in its vocabulary. |
H: Could you generate search queries to poison data analysis by a search engine?
A simple problem with search engines is that you have to trust that they will not build a profile of search queries you submit. (Without Tor or e.g. homomorphic encryption, that is.)
Suppose we put together a search engine server with a use policy that permits constant queries being sent by paid customers.
The search engine's client transmits, at some frequency, generated search queries (e.g. markov, ML-generated, random dictionary words, sourced from news, whatever; up to you) in order to intentionally obscure the real search queries performed by customers. In other words it pretends to be a thousand contradictory personalities, nationalities, genders, races, hobbies, etc.
How difficult would it be to generate enough queries to hide yourself in the data?
AI: I am not sure how many queries you'd need to perform to drown out your actual search queries, but there is already is an actual browser addon which does this. This addon is called TrackMeNot and is available to install for both Google Chrome and Firefox. More in-depth information on how this addon works can be found on their website and the whitepaper (section 3), but in short it create a dynamic list of queries based on popular search terms. |
H: Decoder Transformer feedforward
I have a question about the decoder transformer feed forward during training.
Let's pick an example: the input data is "i love the sun" and the translation I want to predict (the Italian translation) is "io amo il sole".
Now i feed the encoder with the input "i love the sun" and i get the hidden states.
Now i have to do multiple feed forwards on the decoder with the input "BOS io amo il"
where BOS is a token that stands for beginning of sentence.
So I assume I have these feedforward passes:
[BOS, IO, AMO, IL] -> decoder -> IO
[BOS, IO, AMO, IL] -> decoder -> AMO
[BOS, IO, AMO, IL] -> decoder -> IL
[BOS, IO, AMO, IL] -> decoder -> SOLE
I think this is the correct way, and what should be applied to differentiate the training steps is, I think, the masked attention mechanism, maybe(?)
is it right to assume that the masking will be
[1 0 0 0,
0 0 0 0 ,
0 0 0 0,
0 0 0 0] for the first feed forward
[1 0 0 0,
1 1 0 0 ,
0 0 0 0,
0 0 0 0] for the second feed forward
[1 0 0 0,
1 1 0 0 ,
1 1 1 0,
0 0 0 0] for the third feed forward
[1 0 0 0,
1 1 0 0 ,
1 1 1 0,
1 1 1 1] for the fourth feed forward
Is this the correct way, or what should be different?
If you can also provide a Python implementation, that would be useful. Thanks in advance.
AI: There are some problems with your description:
During training, the decoder receives all the shifted target tokens, prepending the BOS token. You removed sole. The actual input would be: [<bos>, io, amo, il, sole]. Note that the output at the position of sole would be the end-of-sequence token <eos>.
During training, there is a single forward pass (not one per token), and all the output tokens are predicted at once. Therefore, only the last one of your attention masks is used.
During inference, we don't have the target tokens (because that is what we are trying to predict). In this case, we have one pass per generated token, starting with <bos>. This way, the decoder input in the first step would just be the sequence [<bos>], and we would predict the first token: io. Then, we would prepare the input for the next timestep as [<bos>, io], and then we would obtain the prediction for the second token. And so on. Note that, at each timestep, we are repeating the computations for the past positions; in real implementations, these states are cached instead of re-computed each timestep.
About some piece of Python code illustrating how the Transformer works, I suggest The annotated Transformer, which is a nice guide through a real implementation. You may be most interested in the function run_epoch for the training and in the function greedy_decode for the inference. |
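If it helps, here is a minimal sketch of the inference loop described above, assuming a hypothetical model object exposing encode()/decode() methods and integer ids for the special tokens (the real interface in The Annotated Transformer differs in its details):
import torch

def greedy_decode(model, src, bos_id, eos_id, max_len=50):
    memory = model.encode(src)                        # encoder hidden states, computed once
    ys = torch.tensor([[bos_id]])                     # start the decoder input with <bos> only
    for _ in range(max_len):
        logits = model.decode(ys, memory)             # one decoder pass per generated token (no caching here)
        next_token = logits[:, -1].argmax(dim=-1, keepdim=True)
        ys = torch.cat([ys, next_token], dim=1)       # append the prediction and repeat
        if next_token.item() == eos_id:
            break
    return ys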
H: Vertical concatenation in a df based on column value_python
Is there any way to concatenate vertically a specific column from my df (concat1), considering/filtering values from another column (col_value)?
My df looks like this:
col_value concat1
data1 x;y;z
data1 d;f;h
data1 p;c;j
data2 s;k;a
data3 a;w;q
data2 o;i;s
data3 e;q;j
data4 d;f;n
data4 q;f;k
Expected output:
col_value vertical_concat
data1 x;y;z;d;f;h;p;c;j
data1 x;y;z;d;f;h;p;c;j
data1 x;y;z;d;f;h;p;c;j
data2 s;k;a;o;i;s
data3 a;w;q;e;q;j
data2 s;k;a;o;i;s
data3 a;w;q;e;q;j
data4 d;f;n;q;f;k
data4 d;f;n;q;f;k
Many thanks in advance
AI: Try:
a = ["data1"
,"data1"
,"data1"
,"data2"
,"data3"
,"data2"
,"data3"
,"data4"
,"data4"]
b = ["x;y;z"
,"d;f;h"
,"p;c;j"
,"s;k;a"
,"a;w;q"
,"o;i;s"
,"e;q;j"
,"d;f;n"
,"q;f;k"]
e = pd.DataFrame({"col_value":a,"concat1":b})
e["vertical_concat"] = e.groupby("col_value").transform(lambda x: ";".join(x.unique()))
Output: |
H: Calculating correlation for categorical variables
I am struggling to find out a suitable way to calculate correlation coefficient for categorical variables. Pearson's coefficient is not supported for categorical features. I want to find out features with most highest influence on the target variable. My objectives are:
Correlation between categorical and categorical variables. e.g. For a binary target (like Titanic dataset), I want to find out what is the influence of a category on the target (like, influence of gender on survival (0/1))
Capture some non-linear dependencies. e.g. For supermarket sales data, the sales are usually higher during weekends as people might visit such stores more during holidays. So we expect to see spikes at an interval of roughly 7 days. Is there any way to capture this non-linearity/seasonality with a correlation coefficient?
AI: According to The Search for Categorical Correlation post on TowardsDataScience, one can use a variation of correlation called Cramer's association.
Going categorical
What we need is something that will look like correlation, but will
work with categorical values — or more formally, we’re looking for a
measure of association between two categorical features. Introducing:
Cramér’s V. It is based on a nominal variation of Pearson’s Chi-Square
Test, and comes built-in with some great benefits:
Similarly to correlation, the output is in the range of [0,1], where 0 means no association and 1 is full association. (Unlike
correlation, there are no negative values, as there’s no such thing as
a negative association. Either there is, or there isn’t)
Like correlation, Cramer’s V is symmetrical — it is insensitive to swapping x and y
import numpy as np
import pandas as pd
import scipy.stats as ss

def cramers_v(x, y):
    confusion_matrix = pd.crosstab(x, y)
    chi2 = ss.chi2_contingency(confusion_matrix)[0]
    n = confusion_matrix.sum().sum()
    phi2 = chi2/n
    r, k = confusion_matrix.shape
    phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
    rcorr = r - ((r-1)**2)/(n-1)
    kcorr = k - ((k-1)**2)/(n-1)
    return np.sqrt(phi2corr / min((kcorr-1), (rcorr-1)))
H: Problem with binning
I am trying to change continuous data points to categorical by using binning. I know two techniques, i) equal width bins ii) bins with equal number of elements.
My questions are:
Which type of binning is appropriate for which kind of problem?
I use pandas for my data analysis task and it has pd.cut method for arbitrary binning which I use for equal wdith bins and pd.qcut method for bins with equal number of elements. The second function always produces very complicated bin boundaries (like, [(-28.004,795.8976],(795.8976,900.342]]). Is there any way to "control" the bin boundaries so that they look more meaningful to non-technical persons?
Thanks in advance.
AI: The two methods you're citing belong to what is called unsupervised binning, including as you said equal width and equal frequency binning. On the other hand, supervised binning broadly tries to make sure bins are made in majority of instances sharing the same class label.
For both types of unsupervised binning, i.e. equal frequency and equal width, the best way is still to give it a try and select based on the observation of the resulting histogram you get. If your data is not properly divided by bins of equal frequency, maybe equal width bins would help, and vice versa.
For what concerns Pandas execution, you can pass a precision argument to qcut, this should return more "comprehensible" bin limits, as shown below
>>> array = np.random.randn(10)
>>> pd.qcut(array, q=4, precision=3)
Categories (4, interval[float64]): [(-1.8889999999999998, -0.732] < (-0.732,
-0.136] < (-0.136, 0.973] < (0.973, 1.543]]
>>> pd.qcut(array, q=4, precision=0)
Categories (4, interval[float64]): [(-3.0, -1.0] < (-1.0, -0.0] < (-0.0, 1.0]
< (1.0, 2.0]]
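If you need full control over the boundaries, pd.cut also accepts an explicit list of bin edges, so you can round them to values that are easy to communicate (values falling outside the given edges become NaN):
>>> pd.cut(array, bins=[-3, -1, 0, 1, 3])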
H: Read csv file and save images from the output
I have created some code that reads my CSV file and converts the dataset to a grayscale image. I want to know if there is any possible way to read through each row in the dataset and save each of the images created from the rows?
So far, I have got this code that reads the CSV files and creates an image using .imshow
import pandas as pd
import numpy as np
from sklearn.datasets import load_digits
from keras.preprocessing.image import array_to_img
import matplotlib.pyplot as plt
data_path = "dataset_malwares.csv"
data = pd.read_csv(data_path);
label = data.Malware.values
data = data.drop("Malware", axis=1)
data = data.drop("Name", axis=1)
data = data.values
data = data.reshape(data.shape[0], data.shape[1], 1)
data = np.tile(data, (1, data.shape[1]))
plt.imshow(data[1], cmap="gray")
plt.title("label: {0:}".format(label[1]))
plt.show()
print(data[0].shape)
I want to go through the dataset and save each image but not too sure where to start. Any suggestions would be great. Thanks :)
Rows/format of the data - I've provided a shared version of the dataset:
https://1drv.ms/x/s!AqFNg8FC48SSgtZSObDmmGHs3utWog
AI: Are you looking to simply save the file produced by plt.imshow? If yes, then you should be able to use plt.savefig as follows:
plt.imshow(data[1], cmap="gray")
plt.title("label: {0:}".format(label[1]))
plt.savefig("output_image.png")
plt.show()
You can either remove or keep the plt.show() call depending on whether you want the image to still be shown or just save the image.
EDIT: If you want to do this for all rows you can just loop through the numpy array as follows:
for i in range(data.shape[0]):
    plt.imshow(data[i], cmap="gray")
    plt.title("label: {0:}".format(label[i]))
    plt.savefig(f"output_image_{i}.png")
    plt.close()
H: Very bad results for input-output mapping using an Artificial Neural Network
I'd like to hear the opinion of an expert on artificial neural networks on a problem that I am trying to solve. I just started to use artificial neural networks and want to train an ANN with 3 inputs and 3 outputs by using 3375 data points. The goal is to map the 3 inputs to the 3 outputs. For that purpose I use a multilayer perceptron implemented in tensorflow and keras.
I thought that normally an ANN is especially good at doing this kind of input-output mapping. However, the results are extremely bad. I varied everything many times with huge differences in the values (batch size, epochs, number of hidden layers, number of neurons, error functions), yet the results remain extremely bad (e.g. val_mean_absolute_percentage_error: 2360328448.0000). The mapping is so extremely wrong that it is not at all useful. What surprises me is that even using inputs from the training dataset leads to disastrous outputs.
This is why I would like to hear your opinion on that. Am I doing something completely wrong or is my assumption that ANNs are especially good for such input-output mapping in this case just not true? Or maybe there is an issue with the training data? I'd highly appreciate any comments and advice from you because I do not know what else to do.
Here you can see the code:
# For data manipulation
import numpy as np
import pandas as pd
#For plotting
from matplotlib import pyplot as plt
# For building model and loading dataset
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow import keras
#Load the data
dataframe = pd.read_csv("C:/Users/User1/Desktop/ANN_inputs_outputs.csv", sep =";")
dataset = dataframe.values
# Assign the columns of the dataframe to the inputs for arrays for the ANN
X_input_dataset = dataset[:, 1:4]
Y_output_dataset = dataset[:, 4:7]
#Create the model
#Input shape defines the number of input neurons
input_shape = (3,)
#Sequential model is just one for a vanilla MLP
model = Sequential()
#Add the different layers
model.add(keras.layers.Flatten(input_shape=(3,))),
model.add(Dense(20, activation='relu'))
model.add(Dense(40, activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(3, activation='linear'))
# Configure the model and start training
model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mean_absolute_percentage_error'])
history = model.fit(X_input_dataset, Y_output_dataset, epochs=100, batch_size=10, verbose=1, validation_split=0.2)
#Plot training results
history_dict = history.history
print(history_dict.keys())
plt.plot(history.history['mean_absolute_percentage_error'])
plt.plot(history.history['val_mean_absolute_percentage_error'])
plt.title('Mean absolute percentage errror')
plt.ylabel('Mean absolute percentage errror')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('Loss function')
plt.ylabel('mean absolute error')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
#predict values
x_new = [(100000,100000,100000), (100000,1000000,500000),
(100000,100000,100000), (100000,500000,100000),
(100000,100000,100000), (500000,100000,100000)]
y_new = model.predict(x_new)
print(y_new)
Unfortunately the data is too big such that I can't share it directly via StackExchange (I tried this). This is why I uploaded the csv file to File Dropper CSV_File. If you do not want to download the data from there, please tell me another source/way how I could share the data with you.
I do not know if this helps but here you can at least see the first 100 datapoints out of the 3375 (in the full data I varied every input and created all combinations of inputs):
Input_1 Input_2 Input_3 Output_1 Output_2 Output_3
100000 100000 100000 81.63842992 336.0202553 142.6094997
100000 100000 200000 83.91274058 353.0797849 123.2756595
100000 100000 300000 86.49717207 366.4358367 107.3351762
100000 100000 400000 87.94279678 376.396602 95.92878625
100000 100000 500000 89.57430815 384.9555939 85.73828291
100000 100000 600000 92.65738103 396.8354166 70.77538736
100000 100000 700000 96.0171678 408.3277988 55.92321845
100000 100000 800000 100.5642366 420.7969577 38.90699073
100000 100000 900000 109.0237 438.4473815 12.79710349
100000 100000 1000000 114.2438266 446.0243584 0
100000 100000 1100000 114.2438266 446.0243584 0
100000 100000 1200000 114.2438266 446.0243584 0
100000 100000 1300000 114.2438266 446.0243584 0
100000 100000 1400000 114.2438266 446.0243584 0
100000 100000 1500000 114.2438266 446.0243584 0
100000 200000 100000 92.17726716 320.8186761 147.2722417
100000 200000 200000 93.98736653 336.6494039 129.6314145
100000 200000 300000 96.92805106 349.6806425 113.6594914
100000 200000 400000 98.58276913 360.6424603 101.0429556
100000 200000 500000 100.31333 368.9105132 91.04434172
100000 200000 600000 102.6334311 377.1300392 80.50471475
100000 200000 700000 105.7178567 388.244019 66.30630933
100000 200000 800000 108.9149247 398.1848881 53.16837219
100000 200000 900000 115.571269 411.5986127 33.09830325
100000 200000 1000000 127.0748864 430.1972029 2.996095751
100000 200000 1100000 128.2092221 432.0589629 0
100000 200000 1200000 128.2092221 432.0589629 0
100000 200000 1300000 128.2092221 432.0589629 0
100000 200000 1400000 128.2092221 432.0589629 0
100000 200000 1500000 128.2092221 432.0589629 0
100000 300000 100000 100.0917771 307.9756287 152.2007792
100000 300000 200000 102.9726253 323.9279352 133.3676245
100000 300000 300000 105.6062056 335.7776535 118.884326
100000 300000 400000 107.3121984 346.883184 106.0728025
100000 300000 500000 109.4540231 354.663097 96.15106489
100000 300000 600000 111.8786604 361.5557255 86.83379908
100000 300000 700000 114.7944686 371.5938132 73.87990318
100000 300000 800000 118.1373355 380.0257011 62.10514836
100000 300000 900000 122.8548691 390.9478707 46.46544517
100000 300000 1000000 133.347506 406.5063351 20.41434392
100000 300000 1100000 141.6791937 418.5889913 0
100000 300000 1200000 141.6791937 418.5889913 0
100000 300000 1300000 141.6791937 418.5889913 0
100000 300000 1400000 141.6791937 418.5889913 0
100000 300000 1500000 141.6791937 418.5889913 0
100000 400000 100000 109.503933 294.4172255 156.3470265
100000 400000 200000 112.000167 311.1933026 137.0747154
100000 400000 300000 114.2526188 322.9057599 123.1098063
100000 400000 400000 116.4791304 333.664824 110.1242305
100000 400000 500000 118.2910122 342.0030905 99.97408228
100000 400000 600000 120.2127847 349.3045772 90.75082313
100000 400000 700000 122.7641259 356.8196711 80.68438801
100000 400000 800000 126.3291166 365.4701912 68.46887722
100000 400000 900000 130.0423749 374.3468141 55.87899601
100000 400000 1000000 137.5204755 386.0880788 36.65963063
100000 400000 1100000 148.9375577 401.141397 10.18923033
100000 400000 1200000 152.8379613 407.4302237 0
100000 400000 1300000 152.8379613 407.4302237 0
100000 400000 1400000 152.8379613 407.4302237 0
100000 400000 1500000 152.8379613 407.4302237 0
100000 500000 100000 117.4879678 283.3733734 159.4068438
100000 500000 200000 121.0579184 298.9825928 140.2276737
100000 500000 300000 123.3707729 310.3330953 126.5643168
100000 500000 400000 125.8724146 320.3948833 114.0008871
100000 500000 500000 127.9615773 328.059964 104.2466436
100000 500000 600000 129.5606613 335.4906683 95.21685541
100000 500000 700000 131.4170772 343.7065728 85.14453506
100000 500000 800000 135.3015477 351.1570032 73.80963419
100000 500000 900000 137.8813788 359.0228767 63.36392947
100000 500000 1000000 144.8898942 370.7656611 44.61262969
100000 500000 1100000 154.7571144 383.9513348 21.55973576
100000 500000 1200000 164.1907262 396.0774588 0
100000 500000 1300000 164.1907262 396.0774588 0
100000 500000 1400000 164.1907262 396.0774588 0
100000 500000 1500000 164.1907262 396.0774588 0
100000 600000 100000 124.7561636 274.0110713 161.50095
100000 600000 200000 128.42286 288.8038063 143.0415186
100000 600000 300000 131.2377241 299.8006811 129.2297798
100000 600000 400000 133.8838584 309.2976404 117.0866862
100000 600000 500000 135.5491074 317.3956571 107.3234204
100000 600000 600000 137.8437737 324.061017 98.36339426
100000 600000 700000 139.5148534 331.0105966 89.74273491
100000 600000 800000 143.1729967 338.4279821 78.66720613
100000 600000 900000 146.6596817 344.9709227 68.63758054
100000 600000 1000000 150.9572297 353.3162164 55.9947389
100000 600000 1100000 159.4602916 366.5904292 34.21746416
100000 600000 1200000 171.026723 381.1619306 8.079531382
100000 600000 1300000 175.4286096 384.8395754 0
100000 600000 1400000 175.4286096 384.8395754 0
100000 600000 1500000 175.4286096 384.8395754 0
100000 700000 100000 132.1183934 264.1955984 163.9541932
100000 700000 200000 135.9907245 278.9421043 145.3353562
100000 700000 300000 138.9508032 289.1447258 132.1726561
100000 700000 400000 141.3695688 299.1572684 119.7413478
100000 700000 500000 143.2089855 306.5047858 110.5544137
100000 700000 600000 145.4980373 313.8396234 100.9305243
100000 700000 700000 147.7033751 319.6546207 92.91018914
100000 700000 800000 150.8276735 327.3557851 82.08472648
100000 700000 900000 153.528077 333.3811995 73.35890853
100000 700000 1000000 156.9484214 339.9871429 63.3326207
100000 700000 1100000 164.6661346 352.2010019 43.40104855
AI: The issue seems to be with the Keras mean_absolute_percentage_error
Check this SO Answer - Link
In your case,
You have one output=0 in the 3rd col of Y
If you run the model for just one Y column, it will work fine for the first two columns
any(dataset[:, 6:7]==0)
Output - True
I just added a one (1.0) to that column to remove the 0s. It is working fine.
X_input_dataset = dataset[:, 1:4]
X_input_dataset = (X_input_dataset - X_input_dataset.mean())/X_input_dataset.std()
Y_output_dataset = dataset[:, 4:7]
Y_output_dataset[:,-1] = Y_output_dataset[:,-1]+1.0
You can do,
- Handle the 0
- Use mse as metric and calculate MAPE separately
- Write your own custom Metric (a sketch follows below)
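For the last option, a minimal sketch of such a custom metric (assuming TensorFlow 2.x Keras; the epsilon value is an arbitrary choice) could look like this:
import tensorflow as tf

def safe_mape(y_true, y_pred):
    # guard the denominator so rows with y_true == 0 cannot blow up the metric
    denom = tf.maximum(tf.abs(y_true), 1e-7)
    return 100.0 * tf.reduce_mean(tf.abs((y_true - y_pred) / denom))

model.compile(loss='mean_squared_error', optimizer='adam', metrics=[safe_mape])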
H: How to interpret the Mean squared error value in a regression model?
I'm working on a simple linear regression model to predict 'Label' based on 'feature'. The two variables seem to be highly correlated (corr=0.99). After splitting the data sample into training and testing sets, I make predictions and evaluate the model.
metrics.mean_squared_error(Label_test,Label_Predicted) = 99.17777494521019
metrics.r2_score(Label_test,Label_Predicted) = 0.9909449021176512
Based on the r2_score my model is performing perfectly. 1 being the highest possible value. But when it comes to the mean squared error, I don't know if it shows that my model is performing well or not.
How can I interpret MSE here ?
If I had multiple algorithms and the same data sets, after computing MSE or RMSE for all models, how can I tell which one is better in describing the data ?
R2
score is 0.99, is this suspicious ? Or expected since the label and
feature are highly correlated?
Feature Label
0 56171.757812 56180.234375
1 56352.500000 56363.476562
2 56312.539062 56310.859375
3 56432.539062 56437.460938
4 56190.859375 56199.882812
... ... ...
24897 56476.484375 56470.742188
24898 56432.148438 56432.968750
24899 56410.312500 56428.437500
24900 56541.093750 56541.015625
24901 56491.289062 56499.843750
AI: Whether your model is performing well or not depends on your business case: you might have a tiny RMSE or a great-looking score on whatever metric you are using, but if that is just not enough to solve the business problem, then the model is not performing well.
MSE is just that: the Mean Squared Error.
Both MSE and RMSE measure by how much the predicted result deviates from actual, because of the squared term more weight is given to larger errors, and because of square root in RMSE, it is in the same units as dependent variable. MAE, Mean Absolute Error is another useful metric to look at when you are evaluating a regression model; it is also easier to interpret.
Given your data, R-squared seems fine to me.
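To compare several models trained on the same data, a simple approach is to compute the same metric for each on the same test set and prefer the model with the lowest error; for instance, with scikit-learn:
import numpy as np
from sklearn.metrics import mean_squared_error, mean_absolute_error

rmse = np.sqrt(mean_squared_error(Label_test, Label_Predicted))
mae = mean_absolute_error(Label_test, Label_Predicted)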
H: Using user defined function in groupby
I am trying to use the groupby functionality in order to do the following given this example dataframe:
dates = ['2020-03-01','2020-03-01','2020-03-01','2020-03-01','2020-03-01',
'2020-03-10','2020-03-10','2020-03-10','2020-03-10','2020-03-10']
values = [1,2,3,4,5,10,20,30,40,50]
d = {'date': dates, 'values': values}
df = pd.DataFrame(data=d)
I want to take the largest n values grouped by date and take the sum of these values. This is how I understand I should do this: I should use groupby date, then define my own function that takes the grouped dataframes and spits out the value I need:
def myfunc(df):
a = df.nlargest(3, 'values')['values'].sum()
return a
data_agg = df.groupby('date').agg({'relevant_sentiment':myfunc})
However, I am getting various errors, like the fact that the value keep is not set, or that it's not clearly set when I do specify it in myfunc.
I would hope to get a dataframe with the two dates 03-01 and 03-10 with respectively the values 12 and 120.
Any help/insights/remarks will be appreciated.
AI: You could do it simple and it should work like this:
def myfunc(df):
return df.nlargest(3, 'values')[['values']].sum()
and then:
data_agg = df.groupby('date', as_index=False).apply(myfunc)
You decide if "data_agg" is the proper name then.
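As a side note, an equivalent one-liner that works directly on the column is:
df.groupby('date')['values'].apply(lambda s: s.nlargest(3).sum())
which returns 12 for 2020-03-01 and 120 for 2020-03-10, as expected.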
Good luck!
H: Implementing the Trapezoid rule without the formula for the curve
I know that if I have some function f(x) that describes a curve, I can approximate the area under the curve using the trapezoid rule as follows:
def auc(f, a, b, n):
subinterval = (b - a) / n
s = f(a) + f(b)
i = 1
while i < n:
s += 2 * f(a + i * subinterval)
i += 1
return (subinterval / 2) * s
However, I am trying to implement the trapezoid rule to approximate the area under the ROC curve. I don't have the function f(x), but rather true positive rates and false positive rates at thresholds from 0 to 1 spaced by .01. I tried to implement the rule following this guide https://byjus.com/maths/trapezoidal-rule/ as so:
def roc_auc(tprs, fprs):
y_sum = max(tprs) + min(tprs)
for i in range(1, len(fprs)-1):
y_sum += 2*tprs[i]
interval = (max(fprs) - min(fprs)) / len(fprs)
return ((interval / 2) * y_sum) / 100
However, when I test it against auc functions implemented in numpy, scikit, etc. I get different values than the one I calculate so I know I'm doing something wrong. Can anyone tell me where I'm going wrong?
AI: You're assuming that the points are equally spaced along the fpr axis, which is generally not true. See e.g. the "Uniform grid" vs "Nonuniform grid" sections of the wikipedia article. You need something like
import numpy as np

# assumes fpr and tpr are numpy arrays sorted by increasing fpr
delta_xs = np.diff(fpr)
left_endpoints_y = tpr[:-1]
right_endpoints_y = tpr[1:]
trap_areas = 0.5 * (left_endpoints_y + right_endpoints_y) * delta_xs
area = trap_areas.sum()
(That's more verbose than it needs to be, probably not the most efficient, and I don't know what order your fpr/tpr lists are, so it'll need some finagling.)
Using thresholds at 0.01 spacing is a little unusual too: the ROC curve is represented by using every predicted probability as a threshold (together with $\pm\infty$, or a convention that $(0,0)$ and $(1,1)$ are on the curve).
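Also note that, assuming your arrays are already sorted by increasing fpr, you don't have to hand-roll the rule at all:
import numpy as np
from sklearn.metrics import auc

area = np.trapz(tprs, fprs)   # trapezoid rule with non-uniform spacing
area = auc(fprs, tprs)        # scikit-learn's equivalent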
H: Are there readily available models that can handle conditional correlation?
I've been working my way through the features of the Kaggle House Prices dataset (Note: this is a non-ranking entry, so this is just for exercises), and I've found a couple situations where there is a positive correlation between the feature and the house sale price, but only if the data exists. In one case (shown below) over 10% of the dataset had null (means "does not apply", not "missing", so I can't fill it in with imputation), but of the non-null values, a scatter plot showed a positive correlation. This looks like a conditionally useful feature and I'd like to keep it, but the nulls are tripping me up.
I experimented and replaced the null values with 0, and when I looked at the scatter plot again, I found that the values that the new 0s spanned the a good chunk of the price range and altered the trend.
Before replacing null values (blue trend line estimated by hand):
After replacing null values with 0 (~13% of the dataset):
Is there a readily available model in sklearn or some other python library that will perform a regression fit only if the data is non-null? If not, would it be best to just drop the column?
Note: This is a small dataset (<1500 entries). This is too small for neural network techniques.
AI: It is totally ok to drop null values here (in your case all the null values of LotFrontage), because this data isn't real. This is a standard part of data preprocessing called data cleaning. If the reason for the null values is known you could do additional steps like imputing or filling, but without this information just dropping them would be fine.
I would be careful changing null to zero however, as this is now creating data (i.e. apartments with zero size front lots).
Finally, you can see from your scatter plot that the relationship is not completely linear, so the linear model may have a large degree of error and/or low R squared values. So it may be more suitable to pick another algorithm for this dataset (e.g. 2D gaussian model).
H: What's the best way to generate similar words?
Hi all I'm fairly up to date with all the NLP tasks out there (nlpprogress.com, paperswithcode.com) and great tools like (nltk, flair, huggingface etc). I want to take a single word, and predict a similar word, a little like the old "Google Sets" feature except extrapolating from a single example. I'm thinking GPT-3 might be the best bet with some seed text like
here is a list of similar things: banana,
and ask it to predict the next word.
transformer.huggingface.co is promising enough (though hilariously inadequate in itself) that I'm thinking GPT-3 indeed may well be the answer.
But the alternative is to navigate a treebank, through "type of" relationships… much, much faster and cheaper.
I've tagged this "semantic similarity" but really I don't want the relationship to be "similar", rather "is part of same set of".
thoughts most appreciated from actual practitioners in this space rather than hobbyists like me :)
AI: But the alternative is to navigate a treebank, through "type of" relationships… much, much faster and cheaper.
WordNet provides exactly this: it is a lexical database in which words are grouped by synonyms, with several types of relations between groups in particular hypernyms/hyponyms (more general/more specific).
The database can be downloaded and there is a library to use it through nltk.
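As a rough sketch of how that "member of the same set" lookup can be done through the nltk interface (assuming the WordNet corpus has been downloaded with nltk.download("wordnet")):
from nltk.corpus import wordnet as wn

# walk up to each hypernym ("type of" parent) of "banana",
# then back down to that parent's other hyponyms, i.e. members of the same set
for sense in wn.synsets("banana"):
    for parent in sense.hypernyms():
        siblings = [lemma for h in parent.hyponyms() for lemma in h.lemma_names()]
        print(parent.name(), "->", siblings[:10])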
H: How does the Transformer predict n steps into the future?
I have barely been able to find an implementation of the Transformer (that is not bloated nor confusing), and the one that I've used as reference was the PyTorch implementation. However, the Pytorch implementation requires you to pass the input (src) and the target (tgt) tensors for every step, rather than encoding the input once and keep on iterating for n steps to generate the full output. Am I missing something here?
My first guesses were that the Transformer isn't technically a seq2seq model, that I have not understood how I'm supposed to implement it, or that I've just been implementing seq2seq models incorrectly for the last few years :)
AI: The Transformer is a seq2seq model.
At training time, you pass to the Transformer model both the source and target tokens, just like what you do with LSTMs or GRUs with teacher forcing, which is the default way of training them. Note that, in the Transformer decoder, we need to apply masking to avoid the predictions depending on the current and future tokens.
At inference time, we don't have the target tokens (because that is what we are trying to predict). In this case, the decoder input in the first step would just be the sequence [], and we would predict the first token. Then, we would prepare the input for the next timestep appending the prediction to the previous timestep input (i.e. []), and then we would obtain the prediction for the second token. And so on. Note that, at each timestep, we are repeating the computations for the past positions; in real implementations, these states are cached instead of re-computed each timestep.
About some piece of Python code illustrating how the Transformer works, I suggest The annotated Transformer, which is a nice guide through a real implementation. You may be most interested in the function run_epoch for the training and in the function greedy_decode for the inference.
def greedy_decode(model, src, src_mask, max_len, start_symbol):
memory = model.encode(src, src_mask)
ys = torch.ones(1, 1).fill_(start_symbol).type_as(src.data)
for i in range(max_len-1):
out = model.decode(memory, src_mask,
Variable(ys),
Variable(subsequent_mask(ys.size(1))
.type_as(src.data)))
prob = model.generator(out[:, -1])
_, next_word = torch.max(prob, dim = 1)
next_word = next_word.data[0]
ys = torch.cat([ys,
torch.ones(1, 1).type_as(src.data).fill_(next_word)], dim=1)
return ys
In greedy_decode you can see how the predictions of the current timestep are concatenated to the input to create the input for the following timestep.
H: Do we need to train a deep learning model multiple times and average the ROC?
Do we need to train a deep learning model multiple times and average the ROC (AUC)?
Since we might get a different ROC (AUC) every round we train and test (in Keras),
is it necessary to average the ROC (AUC) over multiple rounds of training and testing?
(or just choose the best round?)
AI: No, you don't need to do that. Do probability calibration (temperature scaling) on your trained network. Here is a link on how to do that.
H: What is a "shot" in machine learning?
I keep on hearing this term "shot" used in machine learning.
Is a "shot" well-defined?
From what I can tell, "shot" is a synonym for "example". Most machine learning systems seem to be "multi-shot" meaning you have a huge dataset that has many different examples of different categories. However, for a system to have "one-shot" capabilities means that it is able to predict the category of something given exactly one example. Similarly, "few-shot" applications seem to only need a few examples in order to perform some function with the input. And "zero-shot" learning seems to be making predictions without any examples during training.
Is a shot just an example?
Given the evidence above, it seems like this is the case, but it also seems like it's a little bit more nuanced, something like a shot is a post-training example when 0 examples were given during training. But I'm not sure if this is right, thus the question.
AI: A shot is a single example available for machine learning. So "one-shot" means you're given just the one example for each class.
H: Neural network: does bias equal to zero, is the same as, a layer without bias?
Question as in the title. Does bias equal to zero, is the same as, removing bias from the layer? Here's a pytorch implementation to showcase what I mean.
class MLP_without_bias(torch.nn.Module):
def __init__(self):
super().__init__()
# Bias set as False
self.linear = torch.nn.Linear(5, 3, bias = False)
# Xavier initialization
torch.nn.init.xavier_uniform_(self.linear.weight)
def forward(self, x):
return self.linear(x)
class MLP_with_bias_zero(torch.nn.Module):
def __init__(self):
super().__init__()
# Default bias set as True
self.linear = torch.nn.Linear(5, 3)
# Xavier initialization
torch.nn.init.xavier_uniform_(self.linear.weight)
# Bias initialized as zero
torch.nn.init.zeros_(self.linear.bias)
def forward(self, x):
return self.linear(x)
AI: No, they are not the same:
In MLP_without_bias the bias will be zero after training, because of bias=False.
In MLP_with_bias_zero the bias is zero at initialization, but this will not prevent it from being updated during training.
H: Is there potentially data leakage during imputation for time-varying sensor data?
I have a time-varying dataset that contains some missing data. I have sensors that continuously monitor some properties at evenly-spaced intervals and I would like to impute the missing values using basic interpolation for both the training and test set. This is a time-series binary classification problem (e.g., based on the entire time-series present, classify as either 1 or 0). I am concerned that taking data from the future to interpolate the missing value is a form of data leakage.
My reason for believing it is not is primarily based on the fact that I am not doing forecasting. I am not trying to predict future values of these sensors, just impute the missing dynamic variables with its most likely values (in fact, based on my domain knowledge and some experiments simple interpolation is very accurate at predicting the true values). If I were trying to predict future sensor values (e.g., forecasting) this would certainly be data leakage, correct?
AI: Using information from the future to impute missing data would be data leakage, as you would not have this extra information when the model is in production and trying to predict future values. To prevent data leakage, make sure to only use values that are available at the date/time you want to predict. If you were to impute the missing data based on historical data you would not be leaking data, since you have this data available at the moment of prediction (as the term "historical" implies).
H: Understanding features vs labels in a dataset
I am in the process of splitting a dataset into a train and test dataset. Before I start, this is all relatively new to me. So, from my understanding, a label is the output, and a feature is an input. My model will detect malware, and so my dataset is filled with malware executables and non-malware executables (which I think is known as benign?).
I have started some code that splits the dataset, although I want to clarify the difference between labels and features. So my dataset is pretty large and contains many rows and many columns. I am dropping the 'Malware' column from my dataset. I have done this by using the code below:
y = data.Malware
X = data.drop('Malware', axis=1)
which I believe is the label in my code as that is what I what my model to predict (malware or not malware). My features are all the other columns within the dataset. Would this be correct?
The link to the dataset is below for reference in case anyone needs it to help understand my question:
https://1drv.ms/x/s!AqFNg8FC48SSgtZSObDmmGHs3utWog
AI: The features are the input you want to use to make a prediction; the label is the data you want to predict. The Malware column in your dataset seems to be a binary column indicating whether the observation belongs to something that is or isn't Malware, so if this is what you want to predict your approach is correct.
H: How much imbalance in a training set is a problem?
In a simple binary classification problem, at what point does the ratio of majority class to minority class become significant? Intuitively, I would expect a 3:1 ratio to not be an issue, maybe not even a 10:1 ratio. But a 100:1 ratio certainly does require some action. What might this cutoff be?
As a follow-up, what might be some potential solutions beyond undersampling and oversampling?
AI: There is no strict threshold at which a dataset is considered imbalanced. Accordingly, in Foundations of Imbalanced Learning Gary M. Weiss writes:
There is no agreement, or standard, concerning the exact degree of
class imbalance required for a data set to be considered truly "imbalanced."
But most practitioners would certainly agree that a data set where the most
common class is less than twice as common as the rarest class would only be
marginally unbalanced, that data sets with the imbalance ratio about 10:1
would be modestly imbalanced, and data sets with imbalance ratios above
1000:1 would be extremely unbalanced. But ultimately what we care about
is how the imbalance impacts learning, and, in particular, the ability to learn
the rare classes.
A pragmatic approach could be to fit your models on the imbalanced dataset and check if the imbalance leads to large differences in performance between the classes. But keep in mind that the ultimate goal is to minimize misclassification cost. So performance difference between classes are not necessarily a problem if classes have equal misclassification cost (usually for imbalanced datasets the underlying assumption is that misclassification cost are not equally distributed, i.e. misclassification cost for minority classes are assumed to be higher).
To handle imbalanced datasets sampling-based approaches are most common but they are not limited to under- and over-sampling. There are also hybrid (e.g. SMOTE+Tomek) and Ensemble-based methods (e.g. BalancedRandomForest).
Moreover, there are algorithmic approaches which include cost-sensitive learning (e.g. Weighted Random Forest) and skew-insensitive learning (e.g. Naïve Bayes).
Finally, you can adapt the performance metric to be used. AUROC and ROC Curves are commonly used for imbalanced datasets. However, they can be biased too. Which is why some authors suggest to use Precision-Recall-Curves and AUC-PR instead. Moreover, Precision, Recall and F1 score are common as well.
Note, however, that simply using a different performance metric for model selection and model evaluation will not make your models optimize for it internally when being trained, which is why the other approaches, and especially sampling, are so frequently used.
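Several of the approaches above are available out of the box in scikit-learn and imbalanced-learn; a sketch, assuming X and y hold your features and labels:
from sklearn.ensemble import RandomForestClassifier
from imblearn.ensemble import BalancedRandomForestClassifier
from imblearn.combine import SMOTETomek

clf = RandomForestClassifier(class_weight="balanced")   # cost-sensitive learning
brf = BalancedRandomForestClassifier()                  # ensemble-based resampling
X_res, y_res = SMOTETomek().fit_resample(X, y)          # hybrid over-/under-sampling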
H: Extract key phrases for binary outcome
I have a set of phrases that lead to a binary outcome (accept/reject) and I was wondering what techniques are most helpful for extracting key phrases that are most likely to determine the outcome, given that I have a training set of data that has the English-language phrase and the observed outcome.
To illustrate the idea let me give a simple example:
Accepted
Sounds great
That would be great
That's fine
key words: great, fine
Rejected
I'm not sure
I don't think so
No way
key words: not, don't, no
AI: There are a variety of techniques that you could use, depending on what you would like to do.
If your goal is to gain insight into the phrases that are being used in each group, then I'd recommend looking for the most frequent N-grams of different lengths that appear in each class. Here is a related stackoverflow question that shows how you can use nltk and sklearn to extract these.
If your goal is to predict the outcome (accept/reject) given a phrase, then I'd recommend setting this up as a binary classification problem. Since those phrases are quite short, you could start with a Bag of Words approach - the scikit-learn documentation for working with text data is a good example that guides you through the steps.
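For the first option (frequent n-grams per class), a small sketch with scikit-learn, assuming a recent version that provides get_feature_names_out:
from sklearn.feature_extraction.text import CountVectorizer

def top_ngrams(phrases, n=2, k=10):
    # count uni- up to n-grams over all phrases of one class
    vec = CountVectorizer(ngram_range=(1, n)).fit(phrases)
    counts = vec.transform(phrases).sum(axis=0).A1
    return sorted(zip(vec.get_feature_names_out(), counts), key=lambda t: -t[1])[:k]

top_ngrams(["Sounds great", "That would be great", "That's fine"])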
H: Effect of removing duplicates on Random Forest Regression
I have a dataset with several million samples that have 5 features and 1 target, which I am using for a regression model. With very large sample counts some models (like Random Forests) become very large (several GB when pickled).
These data often have duplicates or near duplicates - these are real observations - but the measured values are just coincidentally identical (a consequence of the limited input range and precision of the instruments).
What is the effect (in theory) or removing duplicates on model accuracy?
AI: Your model will become less accurate.
For example, let's say you have features A and B, and you have 51 observations. For 50 of those A=10 and B=20 correspond to dependent value of 5, and you have 1 observation for which A=10 and B=20 correspond to dependent value of 100.
Without removing duplicates when making a prediction for a new observation with A=10 and B=20, Random Forest will give roughly the average of 51 values mentioned above, which is close to 6.86. If you remove duplicates you will get an average of 5 and 100 or 52.5.
And assuming your test data has the same distribution as the original data, the model will be far off on many observations. Therefore, unless you have a good reason to believe that the test data will have a different distribution, don't remove the duplicate values.
H: Imputing missing value based on filtering result of another column
C1  C2
A   x
A   y
A   z
A   x
A   NaN
A   x
A   x
A   x
B   y
B   y
B   z
B   y
B   NaN
B   y
B   x
B   x
I have to impute the missing values in C2. The imputation should be such that if the missing value's corresponding C1 value is A, then filter the dataset by A, find the mode of C2 and replace the missing value with it; similarly for B.
In the above example it should be x for the first missing value and y for the second missing value:
C1  C2
A   x
A   y
A   z
A   x
A   x
A   x
A   x
A   x
B   y
B   y
B   z
B   y
B   y
B   y
B   x
B   x
Tried this, not sure if this is the best way
for i in df['c1'].unique():
df[(df['c1']==i) & (df['c2'].isnull())]['c2'] = df[df['c1']==i]['c2'].mode()[0]
AI: You can use pandas groupby and transform methods. This maps the groupwise mode to the index of the original dataframe.
df['C2']=df.groupby('C1').transform(lambda x: x.fillna(x.value_counts().index[0]))['C2']
df
The output matches the expected table above: each NaN in C2 is replaced by the most frequent C2 value within its C1 group (x for A, y for B).
H: What issue is there when training this network with gradient descent?
Suppose we have the following fully connected network made of perceptrons with a sign function as the activation unit. What issue arises when trying to train this network with gradient descent?
AI: What issue arises when trying to train this network with gradient descent?
The activation function is the sign (signum) function (slightly modified).
So its derivative is 0 everywhere (and undefined at 0).
Hence, gradient descent won't be able to make progress in updating the weights and backpropagation will fail.
H: Why do my target labels need to begin at 0 for sparse categorical cross entropy to work?
I'm following a guide here to implement image segmentation in Keras.
One thing I'm confused about are these lines:
# Ground truth labels are 1, 2, 3. Subtract one to make them 0, 1, 2:
y[j] -= 1
The ground truth targets are .png files with either 1,2 or 3 in a particular pixel position to indicate the following:
Pixel Annotations: 1: Foreground 2:Background 3: Not classified
When I remove this -1, my sparse_categorical_crossentropy values come out as nan during training.
Epoch 1/15
25/196 [==>...........................] - ETA: 27s - loss: nan - accuracy: 0.0348 - sparse_categorical_crossentropy: nan
Why is this the case? If the possible integer values are 1, 2, 3, why would I need to alter them to begin at 0 to be correctly used?
If I include the -1, the training looks correct:
Epoch 1/15
196/196 [==============================] - 331s 2s/step - loss: 2.0959 - accuracy: 0.6280 - sparse_categorical_crossentropy: 2.0959 - val_loss: 1.9682 - val_accuracy: 0.5749 - val_sparse_categorical_crossentropy: 1.9682
AI: Have a look at this stackoverflow answer; it seems to be caused by the fact that your labels need to be zero-indexed, as the argmax function also returns the index based on a zero-indexed array.
H: How do you search for content, not words?
Given a string that describes a situation or method, are there algorithms that create fingerprints out of it, compare it with a corpus to then point to pages where a similiar concept is being discussed?
The simple form is word search.
You search for word X and X appears in a text whose excerpt is shown to you.
This is the next step.
You describe X without writing "X" and you get text excerpts as a result where X is also described using different phrases (or even words) with or without an explicit mention of "X".
How do you search for content, not words?
Hint: I am looking for technical terms for this problem to find research papers.
AI: The general principle about finding text based on meaning similarity is called distributional semantics. The main idea is that words with a similar meaning statistically tend to co-occur with the same context words. This is the basis for many standard tasks like topic modeling, word sense disambiguation/induction, and generally for everything related to semantics.
The problem you're describing is essentially information retrieval: given a query expressed as one or several words, find the "documents" in a corpus which are semantically similar to this query.
[edit] Depending on what the data looks like, you might be interested in the more specific task of semantic textual similarity.
H: Is my CNN model overfitting?
I'm training a standard CNN. Attached my training curve. My model:
Model: "functional_35"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_18 (InputLayer) [(None, 120, 120, 3)] 0
_________________________________________________________________
conv2d_28 (Conv2D) (None, 118, 118, 32) 896
_________________________________________________________________
max_pooling2d_28 (MaxPooling (None, 59, 59, 32) 0
_________________________________________________________________
dropout_23 (Dropout) (None, 59, 59, 32) 0
_________________________________________________________________
flatten_17 (Flatten) (None, 111392) 0
_________________________________________________________________
dense_17 (Dense) (None, 256) 28516608
_________________________________________________________________
visualized_layer (Dense) (None, 1) 257
=================================================================
Total params: 28,517,761
Trainable params: 28,517,761
Non-trainable params: 0
_________________________________________________________________
My data dimension:
Is my model overfitting? If so, what's my best strategy now?
AI: Your train/validation loss curves are a classic example of overfitting.
It looks like you have 1425 data samples to train a model with > 28 million parameters.
I would suggest trying any/all of the follow:
use a smaller model -> less parameters means less complexity and less ability to overfit
data augmentation -> more data samples means more variation in the dataset for the model to capture
early-stopping -> use something like the Keras callback, which will stop the model training once the validation loss doesn't decrease for a number of epochs
If you happen to be using image data, you might take a look at the Keras ImageDataGenerator, which can do things like flip/rotate your images.
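A short sketch of the last two suggestions (assuming you have already split out x_train/y_train and x_val/y_val):
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.preprocessing.image import ImageDataGenerator

early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)
augmenter = ImageDataGenerator(rotation_range=15, horizontal_flip=True,
                               width_shift_range=0.1, height_shift_range=0.1)
model.fit(augmenter.flow(x_train, y_train, batch_size=32),
          validation_data=(x_val, y_val),
          epochs=100, callbacks=[early_stop])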
H: Adjusting imbalance in classification problem reduce precision, accuracy but increase recall
I've learned that adjusting imbalanced data when training a CNN affects model performance which got me thinking "what about in ML?" so I've done some testing on my own, you can check it out here -> https://haneulkim.medium.com/handling-imbalanced-class-in-machine-learning-classifiers-1b5c528f427f
I've ended up concluding that as we balance our data using random oversampling our precision and accuracy decrease. I want to know why this is the case...
If it requires me to do more research, link to resource would be greatly appreciated. Thanks!
AI: Intuitively the problem of imbalanced data can be understood like this: if a classifier is not really sure how to classify an instance but it knows that most instances belong to class X, then whenever there's a doubt predicting class X is always the best decision. As a consequence, the classifier unavoidably assigns class X too often since all the "unsure" cases end up being labeled as the majority class X.
So first it's important to understand that resampling is not automatically the "cure" for imbalanced data:
Resampling doesn't provide the classifier with more information, it just presents it in a different way in order to force the classifier to pay more attention to the minority class.
In case the data is easy to separate, the classifier can do a perfectly good job without resampling. This means that whenever possible it's better to improve the features rather than using resampling, because that's what can actually help the classifier doing its job.
This being said, resampling is a useful technique in some cases. Assuming the standard choice of the minority class as the positive class, resampling will only increase recall at the expense of precision: as said above, the difference happens when the classifier cannot easily predict the class for an instance. In such a case it has two choices:
Assigning the negative majority class: more likely to be correct (True Negative), a small risk of False Negative error.
Assigning the positive minority class: more likely to be incorrect (False Positive), a small chance to be correct (True Positive)
Without resampling the classifier favors the first option, so it has few FP errors but quite a lot of FN errors. Therefore it can have quite high precision but low recall.
With resampling the classes are equal, so the classifier stops favoring the first option. Therefore it makes fewer FN errors but more FP errors, which means that it increases recall at the expense of precision.
H: is it possible to decide model without any data?
Today I just faced a very unique demand from my superior. He asked me whether I can make a model first before we gather the data for training because we don't have any data yet.
I was utterly confused about what to do with this. Does anyone have any suggestion on how I should approach modelling without any data at all? Thanks
AI: This is not a very strange situation in real-world companies that nowadays want to build data science applications and other data-related products but do not yet have enough historical data (or none at all).
In this case, defining what a model is might help you, so:
are you/your superior considering only machine learning models? In that case, you need the data to train with
are you also considering a less sophisticated approach like rules-based models first? In that case, you can generate (i.e. program) such rules based on business knowledge before going directly into the machine learning pipeline, for which you need data
Another option, which I used once to check some ideas in advance (before having data), is to simulate some data based on the known distributions of the data which you know you might have in the near future; for instance, you might want to simulate clients' ages or clients' account amounts based on data from other banks stored in some open-data platform.
In this case, you can model your data with, for instance, a kernel density estimator, and afterwards generate some synthetic samples from it. Below you can find what I once did in a similar situation, where the orange bars are the open data retrieved with variables similar to what I would eventually have in my company (in this case I needed ages for each marital status, and I found them for a bank in a similar country) and were used to fit the kernel density data generator (blue line).
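A minimal sketch of that last idea with scipy, where ages stands for the 1-D open-data variable you want to mimic (an assumption about your setup):
from scipy.stats import gaussian_kde

kde = gaussian_kde(ages)                           # fit the kernel density estimator
synthetic_ages = kde.resample(size=1000).ravel()   # draw 1000 synthetic samples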
H: Implementing U-Net segmentation model without padding
I'm trying to implement the U-Net CNN as per the published paper here.
I've followed the paper architecture as closely as possible but I'm hitting an error when trying to carry out the first concatenation:
From the diagram, it appears the 8th Conv2D should be merged with result of the 1st UpSampling2D operation, however the Concatenate() operation throws an exception that the shapes don't match:
def model(image_size = (572, 572) + (1,)):
# Input / Output layers
input_layer = Input(shape=image_size, batch_size=32)
""" Begin Downsampling """
# Block 1
conv_1 = Conv2D(64, 3, activation = 'relu')(input_layer)
conv_2 = Conv2D(64, 3, activation = 'relu')(conv_1)
max_pool_1 = MaxPool2D(strides=2)(conv_2)
# Block 2
conv_3 = Conv2D(128, 3, activation = 'relu')(max_pool_1)
conv_4 = Conv2D(128, 3, activation = 'relu')(conv_3)
max_pool_2 = MaxPool2D(strides=2)(conv_4)
# Block 3
conv_5 = Conv2D(256, 3, activation = 'relu')(max_pool_2)
conv_6 = Conv2D(256, 3, activation = 'relu')(conv_5)
max_pool_3 = MaxPool2D(strides=2)(conv_6)
# Block 4
conv_7 = Conv2D(512, 3, activation = 'relu')(max_pool_3)
conv_8 = Conv2D(512, 3, activation = 'relu')(conv_7)
max_pool_4 = MaxPool2D(strides=2)(conv_8)
""" Begin Upsampling """
# Block 5
conv_9 = Conv2D(1024, 3, activation = 'relu')(max_pool_4)
conv_10 = Conv2D(1024, 3, activation = 'relu')(conv_9)
upsample_1 = UpSampling2D()(conv_10)
# Connect layers
merge_1 = Concatenate()([conv_8, upsample_1])
Error:
Exception has occurred: ValueError
A `Concatenate` layer requires inputs with matching shapes except for the concat axis. Got inputs shapes: [(32, 64, 64, 512), (32, 56, 56, 1024)]
Note that the values 64 and 56 correctly line up with the architecture.
I don't understand how to implement the model as it is in the paper. If I change my code to accept an image of shape (256, 256) and add padding='same' to the Conv2D layers, the code works as the sizes are aligned.
This seems to go against what the authors specifically state in their implementation:
Could somebody point me in the right direction on the correct implementation of this model?
AI: $\hspace{3cm}$
If we follow the definition of each arrow.
Gray => Copy and Crop
Every step in the expansive path consists of an upsampling of the
feature map followed by a 2x2 convolution (“up-convolution”) that halves the
number of feature channels, a concatenation with the correspondingly cropped
feature map from the contracting path, and two 3x3 convolutions, each followed by a ReLU. The cropping is necessary due to the loss of border pixels in
every convolution. Paper
So, I believe (I have added 3 coloured circles to the figure):
Blue - 28x28 is upsampled and become 56x56, 1024 is halved to 512
Red - 64x64 is cropped to 56x56. Then Concatenated along FM axis.
Black - 3x3 convolutions, followed by a ReLU
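In code, the skip connection could be wired up roughly like this (a sketch with Keras layers: the 2x2 "up-convolution" after upsampling halves the channels as in the paper, and Cropping2D trims 4 pixels from each border to go from 64x64 to 56x56):
from tensorflow.keras.layers import Conv2D, Cropping2D, UpSampling2D, Concatenate

up = UpSampling2D()(conv_10)                                 # 28x28x1024 -> 56x56x1024
up = Conv2D(512, 2, padding="same", activation="relu")(up)   # halve the channels to 512
cropped = Cropping2D(cropping=((4, 4), (4, 4)))(conv_8)      # 64x64x512 -> 56x56x512
merge_1 = Concatenate()([cropped, up])                       # 56x56x1024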
H: Has anyone heard of a model similar to a random forest which fits a linear regression model in its leaf nodes?
That is, each leaf node in each decision tree learns a linear model.
Anyone heard of this kind of model? Even better, anyone know of implementations?
AI: M5P Model Trees are the closest thing that I'm aware of. In these trees, the leaf nodes are linear models. The difference is that the leaf nodes learn a multi-dimensional linear model instead of single-dimensional.
(If you had a single-dimensional model at the leaf, how would you know which feature to use in the regression? What is the advantage over multi-dimensional?)
Of course a model tree is just a single tree, not a forest. But you could easily build a random forest composed of model trees.
H: How to use the formula model in t.test
I am trying to better understand the formula model for two sample t-tests in R. When I calculate the test in the formula model I get a wrong result.
set.seed(41)
df = data.frame(x1=c(rep(1, 10), rep(0, 10))+ rnorm(20, mean = 0, sd = 0.1),
x2=c(rep(0, 10), rep(1, 10)))
t.test(x1 ~ x2, data=df)
Output
Welch Two Sample t-test
data: x1 by x2
t = 22.365, df = 17.85, p-value = 1.668e-14
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
0.9247087 1.1165780
sample estimates:
mean in group 0 mean in group 1
1.0530115 0.0323681
If I use the variable model, I get the expected result.
t.test(x = df$x1, y = df$x2)
Output
Welch Two Sample t-test
data: df$x1 and df$x2
t = 0.2581, df = 37.945, p-value = 0.7977
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.2921655 0.3775450
sample estimates:
mean of x mean of y
0.5426898 0.5000000
AI: Which result is right or wrong here depends on your objective. You have created two variables (vectors) $x_1, x_2$.
Assuming $x_1, x_2$ are two samples of i.i.d random variables $X_1$ and $X_2$, respectively. Now, with some more assumptions, you want to test the null hypothesis: $\mathbb E(X_1) = \mathbb E(X_2)$.
For this, your second output is the correct one. However, based on the data that you have generated, this is not applicable because each of your samples, $x_1, x_2$, is not coming from a single distribution, as the mean of your first ten values is different from that of the last ten.
Ignoring your data, this analysis can be done using the formula approach as well. Join the two vectors $x_1,x_2$ to $x$ and add another column (say, y) which identifies which data point is coming from which sample. Call this new data frame df1. Then an equivalent way of doing the above mentioned test is t.test(x~y, data = df1)
The second approach is helpful when your data is organized in such a format. For example, say, you have data frame with two columns: height ($x$) and gender ($y$). Then running t.test(x~y, data = df1) will test whether the mean height is different between genders.
Your first approach can be considered right only when your $x_2$ is a factor variable which identifies the group or sample of the data point in vector $x_1$. |
H: Resampling : My dataset is categorical or numerical?
I have a dataset with 203 variables. Like age>40 (0 -yes, 1-no), gender(0 or 1), used or not 200 types of drugs (one hot encoded into 200 variables), and one target variable (0 or 1). This is an imbalanced dataset where Counter({0: 5607, 1: 1717}).
May I know what kind of resampling strategy I should adopt for this kind of dataset?
Is this dataset considered as numerical or categorical datset?
I tried random under sampling and over sampling, but not satisfied with the ROC curve obtained after modeling.
Can I apply SMOTE considering this as numerical dataset?
I read in this that, in case the dataset only contains categorical variables, the Hamming distance is applied for resampling purposes, and if the dataset only contains numerical variables, it is possible to apply traditional distances such as Euclidean, Manhattan or Minkowski.
In case of my dataset, is it okay to apply Euclidean distance for resampling? Could you please direct me to some sources showing how this is done for a datset with only binary values?
AI: For purely categorical data like yours, Chawla et al. proposed SMOTE-N in the original paper. SMOTE-N is implemented in imbalanced-learn and the user guide describes the differences compared to vanilla SMOTE and SMOTE-NC as follows:
If data are made of only categorical data, one can use the SMOTEN variant [CBHK02]. The algorithm changes in two ways:
the nearest neighbors search does not rely on the Euclidean distance. Indeed, the value difference metric (VDM) also implemented in the class ValueDifferenceMetric is used.
a new sample is generated where each feature value corresponds to the most common category seen in the neighbors samples belonging to the same class.
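A minimal usage sketch (assuming imbalanced-learn >= 0.8 and that X contains only your binary/categorical columns):
from imblearn.over_sampling import SMOTEN

X_resampled, y_resampled = SMOTEN(random_state=0).fit_resample(X, y)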
Also note that your dataset is not extremely imbalanced so depending on the differences in misclassification cost of your classes you may or may not benefit from sampling techniques.
As a side note: ROC Curves can have some caveats when used on imbalanced datasets; see, for example, "The Relationship Between Precision-Recall and ROC Curves" and "The Precision-Recall Plot Is More Informative than the ROC Plot When Evaluating Binary Classifiers on Imbalanced Datasets".
H: Is feature importance from classification a good way to select features for clustering?
I have a large data set with many features (70). By doing preprocessing (removing features with too many missing values and those that are not correlated with the binary target variable) I have arrived at 15 features. I am now using a decision tree to perform classification with respect to these 15 features and the binary target variable so I can obtain feature importance. Then, I would choose features with high importance to use as an input for my clustering algorithm. Does using feature importance in this context make any sense?
AI: It might make sense, but it depends what you're trying to do:
If the goal is to predict the binary target for any instance, a classifier will perform much better.
If the goal is to group instances by their similarity, loosely taking the binary target into account indirectly, then clustering in this way makes sense. This would correspond to a more exploratory task where the goal is to discover patterns in the data, focusing on the features which are good indicators of the target (it depends how good they actually are).
H: sklearn package with AttributeError: 'MissingValues' object has no attribute 'to_list'
I am currently trying to reproduce this tutorial on building a CNN based time series classifier for human activity recognition.
My setup is:
Windows 10, Pycharm IDE with a new project for this tutorial, Python3.6, freshly installed the needed packages.
For reproducing, you need to download the activity data here and place it in the project directory under ./Data
The code executes the graphs well until this position:
df[LABEL] = le.fit_transform(df["activity"].values.ravel())
and throws following error:
Traceback (most recent call last):
File "C:/Users/bobin/PycharmProjects/Mussel/cnn_musseltest.py", line 226, in <module>
df[LABEL] = le.fit_transform(df["activity"].values.ravel())
File "C:\Users\bobin\PycharmProjects\Mussel\venv\lib\site-packages\sklearn\preprocessing\_label.py", line 117, in fit_transform
self.classes_, y = _unique(y, return_inverse=True)
File "C:\Users\bobin\PycharmProjects\Mussel\venv\lib\site-packages\sklearn\utils\_encode.py", line 31, in _unique
return _unique_python(values, return_inverse=return_inverse)
File "C:\Users\bobin\PycharmProjects\Mussel\venv\lib\site-packages\sklearn\utils\_encode.py", line 133, in _unique_python
uniques.extend(missing_values.to_list())
AttributeError: 'MissingValues' object has no attribute 'to_list'
Related threads that have not helped me so far:
Link1
Link2
AI: Not sure what version of the scikit-learn package you are using, but the following works without issues using version 0.24.1:
import pandas as pd
import numpy as np
import sklearn
from sklearn.preprocessing import LabelEncoder
print(sklearn.__version__)
# '0.24.1'
def read_data(file_path):
column_names = ['user-id', 'activity', 'timestamp', 'x-axis', 'y-axis', 'z-axis']
df = pd.read_csv(file_path, header=None, names=column_names)
df['z-axis'].replace(regex=True, inplace=True, to_replace=r';', value=r'')
df['z-axis'] = df['z-axis'].apply(convert_to_float)
df.dropna(axis=0, how='any', inplace=True)
return df
def convert_to_float(x):
try:
return np.float(x)
except:
return np.nan
df = read_data("WISDM_ar_v1.1//WISDM_ar_v1.1_raw.txt")
LABEL = "ActivityEncoded"
le = LabelEncoder()
df[LABEL] = le.fit_transform(df["activity"].values.ravel())
print(df.head())
# user-id activity timestamp x-axis y-axis z-axis ActivityEncoded
# 33 Jogging 49105962326000 -0.694638 12.680544 0.503953 1
# 33 Jogging 49106062271000 5.012288 11.264028 0.953424 1
# 33 Jogging 49106112167000 4.903325 10.882658 -0.081722 1
# 33 Jogging 49106222305000 -0.612916 18.496431 3.023717 1
# 33 Jogging 49106332290000 -1.184970 12.108489 7.205164 1