Id | PostTypeId | AcceptedAnswerId | ParentId | Score | ViewCount | Body | Title | ContentLicense | FavoriteCount | CreationDate | LastActivityDate | LastEditDate | LastEditorUserId | OwnerUserId | Tags | Answer | SimilarQuestion | SimilarQuestionAnswer
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
112642
|
1
|
112645
| null |
2
|
261
|
I am using XGBoost for classification on a simple, tiny dataset: when x = 0 then y = 1, and when x = 1 then y = 0. I fit `xgb.XGBClassifier()`, but the resulting predicted probability is just 0.5 for every input, and I wonder why this happens. [](https://i.stack.imgur.com/6Qk73.png)
|
Why XGBoost does not work on a small dataset
|
CC BY-SA 4.0
| null |
2022-07-14T12:54:51.800
|
2022-07-14T13:28:15.577
| null | null |
130230
|
[
"scikit-learn",
"xgboost"
] |
There are too few distinct samples, so XGBoost is unable to split the trees properly (you can check the actual trees using `clf.get_booster().get_dump()`). Reducing the `min_child_weight` hyperparameter (e.g. `clf = xgb.XGBClassifier(min_child_weight=0.5)`) should get you some traction.
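A minimal sketch of that suggestion on a hypothetical toy dataset (y = 1 - x), showing the default behaviour versus a lowered `min_child_weight`:
```
# Toy data is illustrative; with defaults the booster cannot split, so probabilities stay at 0.5.
import numpy as np
import xgboost as xgb

X = np.array([[0], [1], [0], [1]])
y = np.array([1, 0, 1, 0])

clf_default = xgb.XGBClassifier().fit(X, y)
print(clf_default.predict_proba(X))                  # roughly [0.5, 0.5] everywhere

# Lowering min_child_weight lets the tree create the split on x.
clf = xgb.XGBClassifier(min_child_weight=0.5).fit(X, y)
print(clf.predict_proba(X))
print(clf.get_booster().get_dump())                  # inspect the actual trees
```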
|
Can I use xgboost on a dataset with 1000 rows for classification problem?
|
Yes. XGBoost is well known for attaining very good results on small datasets, often with fewer than 1000 instances.
---
Of course, when choosing a machine learning model to fit your data, the number of instances is important and is related to the number of model parameters you will need to fit. The greater the number of parameters in the model, the more data you will need to keep the variance of your final model under control. If you do get good results using a complex model on very few instances, then there is a high probability that you are overfitting. For example, 1000 instances is hardly enough to fit a deep neural network.
That being said, the distribution of your classes and the noise in the data is always going to be a limiting factor to how well any model you select will fit your data.
|
112665
|
1
|
112682
| null |
2
|
267
|
I'm trying to perform binary classification on a very small dataset, consisting of 3 negative samples and 36 positive samples. I've been testing different models from scikit-learn (logistic regression, random forest, svc, mlp). Depending on random_state when using train_test_split, the train or test set might not have a negative sample in it and classification performance is poor because of this. I've read into oversampling techniques using ROSE or various flavors of SMOTE, but have also read that oversampling will lead to overfitting or does not increase performance. I had experimented with oversampling the training set and depending upon how the data is split into train/test the different models are each able to correctly classify unseen data (except for log reg). However, because of the possibility of overfitting due to oversampling I am unsure of the model's actual ability to perform on unseen data.
When not oversampling and just performing feature selection, tuning hyperparameters (e.g., class weights), and using LOOCV the models (not log reg) are able to correctly classify each sample as negative or positive. However, I have read that LOOCV tends to have high variance and I am unsure of how the classifiers would perform on new unseen data.
Unfortunately collecting more data is not possible; I have to work with what I currently have. My question is: how do I approach the problem to achieve the best performance I can without overfitting the classification models? Having a sample falsely classified as negative is preferable to having one falsely classified as positive. If the models are able to correctly classify everything when performing LOOCV, is that the last step in the process before model deployment, or are there other things I should look into as well?
|
Binary Classification with Very Small Dataset (<40 samples)
|
CC BY-SA 4.0
| null |
2022-07-14T22:00:07.823
|
2022-07-15T17:25:47.363
| null | null |
138112
|
[
"machine-learning",
"scikit-learn",
"binary-classification"
] |
I'm not sure this will be a comprehensive answer, but here is an opinion to push the reasoning along. There are only 3 negative cases, so I would create a custom cross-validation scheme: put one negative case into the test set and the remaining 2 into the train set, then iterate through the negative cases so that each one gets a chance to be in the test set. I would enrich each test set with positive cases while keeping the positive/negative ratio fixed: 36/3 * 1 = 12 positive observations in each test set.
I'm not sure this technique will work in every respect, but at least it can be a solution to the CV-scheme problem.
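A rough sketch of how such a scheme could be plugged into scikit-learn (the helper name, synthetic data and the recall metric are illustrative, not part of the original suggestion):
```
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(39, 5))
y = np.array([0] * 3 + [1] * 36)          # 3 negative, 36 positive samples

def negative_holdout_splits(y, n_pos_per_test=12):
    """Each split holds out one negative case plus 12 random positives for testing."""
    neg_idx = np.flatnonzero(y == 0)
    pos_idx = np.flatnonzero(y == 1)
    for held_out_neg in neg_idx:
        test_pos = rng.choice(pos_idx, size=n_pos_per_test, replace=False)
        test = np.concatenate(([held_out_neg], test_pos))
        train = np.setdiff1d(np.arange(len(y)), test)
        yield train, test

# scikit-learn accepts any iterable of (train_idx, test_idx) pairs as `cv`.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=list(negative_holdout_splits(y)), scoring="recall")
print(scores)
```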
I would definitely be prepared for the possibility that the problem has no adequate solution with such scarce data. I stress this in order to set reasonable expectations on time, budget, and the risks of the project.
I'm not sure it's reasonable to reuse the data so intensively. You may approximately count how many times you used any specific negative observation in order to get a feeling for the degree to which your model is contaminated by overfitting. This is not strict or correct terminology, I just want to share my intuition: each time you use a negative observation for training or assessment, you increase the chance that a seemingly good model will eventually fool you. The risks are high with so few examples.
You may also treat the problem as an anomaly detection problem. Split the observations into train and test sets, for example 10 observations in the test set, 3 of them negative. Train a clustering model, then look at how it behaves on the test set: does it group the negative cases into one separate cluster or not? [https://scikit-learn.org/stable/modules/clustering.html](https://scikit-learn.org/stable/modules/clustering.html)
Another approach is to add your own knowledge of the world to the problem, if this is applicable to your case. For example, imagine we have the Titanic data but only 39 observations with 3 survivors and 2 columns, Name and Survived. I could suggest that gender is important and create a new column based on that world knowledge. This looks like reinventing feature engineering, but it may be useful to you.
The last point: when you have so little data, use data visualization intensively. Build your own, maybe even hard-coded if-else, model based on plots where you color the data points by target (color=target). This could be more reliable and less prone to overfitting compared with CV and complex models.
|
How should classification be done for a very small data set?
|
I don't think you need a classification algorithm; you can use your basic understanding of the data / business knowledge to do the classification. As the number of data points is so low, a model cannot give you good, generalised results.
Even if you try applying a complex algorithm like an SVM or a neural network, it is of little use because there is too little data.
If you still want to apply a machine learning algorithm, you can try Naive Bayes or a decision tree; these basic algorithms can do the job.
|
112684
|
1
|
112691
| null |
0
|
116
|
Will a spaCy v3 model be affected by imbalanced entities?
I have a dataset annotated in spaCy format, and if I look into my custom entities the ratios are different for different entities. For example, one entity, say 'flex', has more than 2500 occurrences, but I also have an entity, say 'door', with just 21. I trained my spaCy model and evaluated it using `spacy.evaluate(examples)`.
I'm getting an F1-score of 0.64, a precision of 1.0 and a recall of 0.47.
I want to know whether this entity imbalance is affecting model performance. If yes, is there a way to solve this issue?
Any help on this will be greatly appreciated.
|
Named Entity Recognition using Spacy V3 with imbalanced entities
|
CC BY-SA 4.0
| null |
2022-07-15T13:01:05.527
|
2022-07-15T17:19:23.997
| null | null |
131224
|
[
"python",
"deep-learning",
"nlp",
"named-entity-recognition",
"spacy"
] |
Some imbalance between entities is unavoidable: some entities are naturally more frequent than others. Trying to oversample real text in order to increase the number of occurrences would likely introduce various other biases.
The imbalance does affect performance, of course: it's easier to correctly recognize a frequent entity like "New York" than, for instance, "Cork". But this is the statistical game; there are always going to be some errors somewhere.
Finally, it's important to keep in mind that NER is not primarily meant to recognize only a finite set of predefined entities seen in the training data. On the contrary, the goal is to use clues in the text in order to capture any entity present in the text, whether it has been seen in the training set or not. So in theory the training data should provide a representative sample of the different contexts in which NEs can appear, and this should be sufficient to recognize any entity independently of its frequency. But in reality it's practically impossible to have a perfectly representative sample of the contexts, of course.
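A minimal sketch (the model path, texts and annotations are placeholders) of how to get a per-entity-type breakdown in spaCy v3, which tells you whether the rare 'door' label is the one dragging recall down:
```
import spacy
from spacy.training import Example

nlp = spacy.load("path/to/your/trained_model")        # hypothetical path

dev_data = [
    ("The flex broke near the door.",                 # placeholder annotated example
     {"entities": [(4, 8, "flex"), (24, 28, "door")]}),
]
examples = [Example.from_dict(nlp.make_doc(text), annots) for text, annots in dev_data]

scores = nlp.evaluate(examples)
print(scores["ents_p"], scores["ents_r"], scores["ents_f"])
for label, metrics in scores["ents_per_type"].items():   # per-entity precision/recall/F1
    print(label, metrics)
```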
|
Named entity recognition (NER) features
|
The features for a token in a NER algorithm are usually binary, i.e. the feature is either present or not. For example, a token (say the word 'hello') is all lower case; that is a feature of the word.
You could name the feature 'IS_ALL_LOWERCASE'.
Now, for POS tags, let's take the word 'make'. It is a verb, hence "IS_VERB" is a feature of that word.
A gazetteer can be used to generate features. The presence (or absence) of a word in the gazetteer is a valid feature. Example: the word 'John' is present in a gazetteer of person names, so "IS_PERSON_NAME" can be a feature.
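A small illustrative sketch of such binary token features (the names and gazetteer contents are made up):
```
PERSON_GAZETTEER = {"john", "mary", "ahmed"}

def token_features(token, pos_tag):
    return {
        "IS_ALL_LOWERCASE": token.islower(),
        "IS_VERB": pos_tag == "VERB",
        "IS_PERSON_NAME": token.lower() in PERSON_GAZETTEER,
    }

print(token_features("hello", "INTJ"))   # all lower case, not a verb, not a person name
print(token_features("John", "PROPN"))   # IS_PERSON_NAME is True
```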
|
112701
|
1
|
112706
| null |
0
|
44
|
I have extremely abstract, numeric data with an equally abstract objective.
I have around 3000 rows of training data (`df_train`) with a binary target variable `target` (0 or 1), 50 numerical (float) features `num1, num2, ..., num50` ranging from 0 to 1, and 50 integer features `int1, int2, ..., int50` each being -1, 0, or 1. This adds up to my data having 3000 rows and 101 columns.
My test data (`df_test`) has the same format as the train data, excluding the `target` variable, and has 500 rows. My objective is to classify the `target` variable based on the other features in the test data.
Given that there are a lot of features, my instinct was to do a dimensionality reduction, and since the goal is to classify rather than cluster, I thought PCA would be more appropriate compared to other manifold methods such as t-SNE.
I have a couple questions regarding the designing of the solution:
- Naturally, as the number of PCA components get closer to the number of features, it explains more variability. What is the good threshold to explain the data yet still reduce the dimension?
- After fitting the PCA by scikit, how can the result play into actually classifying the target variable in my test data?
- Is there a more appropriate dimensionality reduction technique, or furthermore is it actually necessary to do a dimensionality reduction?
Any insights are greatly appreciated.
|
Classification problem with no context in numerical features
|
CC BY-SA 4.0
| null |
2022-07-16T02:44:47.530
|
2022-07-16T11:02:01.860
| null | null |
133760
|
[
"machine-learning",
"python",
"classification",
"pca",
"dimensionality-reduction"
] |
- There is no way to decide on a good threshold without more information about the data. For example, if the independent variables are highly correlated, you probably want to reduce the dimension. However, they might all be rich in information and only weakly correlated. I would not listen to people telling you that the number of predictors should be the square root of your number of observations. A good rule of thumb is: whatever gives the best out-of-sample predictions is the best model.
- I am not familiar with PCA in scikit-learn specifically. That being said, the basic idea is that you apply the same normalization steps to the test data using the scaling parameters fitted on the training data (remember that PCA assumes the variables have mean 0 and variance 1) and then define the principal components as linear combinations of your original (scaled) variables. Once you've transformed both your train and your test data set, you can use any model you like on these new data sets (a short sketch follows this list).
- Using PCA for dimensionality reduction can improve prediction results at the cost of interpretability. In the example of a linear model, column num1 having a coefficient of 5.7 is interpreted easily. However, what does it mean if principal component 1 has a coefficient of 0.32? Again, in the example of a linear model, there are the Akaike and Bayesian information criteria (AIC and BIC) for dimensionality reduction. They keep the original variables as they are, only removing some of them. See https://en.wikipedia.org/wiki/Akaike_information_criterion.
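A minimal sketch of that train/test workflow with a scikit-learn pipeline, on synthetic stand-ins for `df_train`/`df_test` (the 95% explained-variance threshold is an illustrative choice, not a rule):
```
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X_train = rng.normal(size=(3000, 100))            # stand-in for the 100 features
y_train = rng.integers(0, 2, size=3000)
X_test = rng.normal(size=(500, 100))

model = make_pipeline(
    StandardScaler(),             # scaling parameters are fitted on the training data only
    PCA(n_components=0.95),       # keep enough components to explain 95% of the variance
    LogisticRegression(max_iter=1000),
)
print(cross_val_score(model, X_train, y_train, cv=5).mean())

model.fit(X_train, y_train)
test_pred = model.predict(X_test)  # the scaler/PCA fitted on train are applied to test
```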
|
Context classification problem
|
Spam detection can be done with many different methods, the same goes for your task. They do share the similar idea of processing a given text and classifying it to be one of 2 classes (science/not-science or spam/not-spam).
What you first need to do is to turn the articles into a vector of constant size
(for example with Word2vec which takes as its input a text and produces a vector space).
Once you have a vector representing each article, you can start training your classifier and feature extractor (these days they are trained together).
As for determining which machine learning approach to take, you can first try an SVM; it will probably be good enough.
You can follow one of the following tutorials (there are many more), just replace their dataset with yours:
[Email Spam Filtering: An Implementation with Python and Scikit-learn](https://www.kdnuggets.com/2017/03/email-spam-filtering-an-implementation-with-python-and-scikit-learn.html)
[Spam Classifier in Python from scratch](https://towardsdatascience.com/spam-classifier-in-python-from-scratch-27a98ddd8e73)
|
112736
|
1
|
121080
| null |
1
|
142
|
I am building a custom Gym environment and so far everything has worked well following the guides spread all over the internet. However, I am now in a phase where frequent changes to the environment class (inheriting from gym.Env) are happening and need to be tested. After the latest coding changes to the `reset()` and `step()` methods, it turns out that `gym.make()` now returns an environment object which executes the old code that was valid before.
So my question is: how do I overcome that issue?
I am not fond of increasing the version id to e.g. v1, as the original v0 is still under development.
The `__init__.py` file on package level contains that:
```
from importlib.metadata import version
from gym.envs.registration import register
__version__ = version("wksim")
register(id="WkEnv-v0", entry_point="wksim.wkenv:WkEnv")
```
Note: class WkEnv is located in the file wkenv.py
In the following executable file I am testing the following way
```
import gym
from wksim.wkenv import WkEnv

if __name__ == "__main__":
    env_config = {
        "cli_mode": True,
        "cache_trace": True,
        "instance": "minimal",
        "UNIFORM_TIME_SLOT_LENGTH": 0.05,
    }
    env = gym.make("WkEnv-v0", config=env_config)
    initial_state = env.reset()
```
Note: I learnt that the custom environment class must be imported, otherwise gym.make() will not find the env id.
|
OpenAI Gym: gym.make() does not refer to updated Env code
|
CC BY-SA 4.0
| null |
2022-07-17T13:40:42.693
|
2023-04-21T17:25:53.027
| null | null |
100453
|
[
"python",
"reinforcement-learning",
"openai-gym"
] |
This issue was solved both with `gym` 0.21.0 and, later, after the upgrade to `gymnasium`. The root cause was the way the overall package was installed: installing it in editable mode with `pip install -e` fixed it, so that `gym.make()` picks up the latest code.
|
How to create custom action space in openai.gym
|
In the case of a 1D observation space, it could be something like:
```
self.observation_shape = (24, 1, 3)
self.observation_space = spaces.Box(low = np.zeros(self.observation_shape), high = np.ones(self.observation_shape),dtype = np.float16)
self.action_space = spaces.Discrete(3,)
```
See also: [https://blog.paperspace.com/creating-custom-environments-openai-gym/](https://blog.paperspace.com/creating-custom-environments-openai-gym/)
|
112758
|
1
|
112783
| null |
1
|
84
|
First of all, I'm asking this because of this [tutorial](https://www.tensorflow.org/tutorials/images/data_augmentation).
When I heard about data augmentation, the definition I learned was something like: "It's a technique where we create more data for our dataset by transforming some samples of our current dataset (transformations like rotations, flips, brightness changes, etc.)."
But in that tutorial they're just overwriting the current dataset with transformed samples, not adding new data... or am I wrong?
Wouldn't the correct way be to take N random samples, transform them, and add them to the dataset?
|
How to do Data Augmentation efficiently in Tensorflow 2?
|
CC BY-SA 4.0
| null |
2022-07-18T11:23:11.200
|
2022-07-19T07:15:46.210
| null | null |
137914
|
[
"tensorflow",
"data-augmentation"
] |
For most frameworks, random augmentation includes no augmentation (a random flip may either flip the image or not, a random rotation angle can be 0 or nearly 0). The augmentation is also re-randomized every epoch (or whenever your dataset entry is otherwise sampled). Thus the model should eventually see the original image and lots of its possible augmentations.
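A minimal sketch of such a per-epoch random augmentation pipeline with `tf.data` (the tensors and parameter values are illustrative stand-ins):
```
import tensorflow as tf

augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
])

images = tf.random.uniform((8, 224, 224, 3))     # stand-in for real images
labels = tf.zeros((8,), dtype=tf.int32)

ds = (
    tf.data.Dataset.from_tensor_slices((images, labels))
    .shuffle(8)
    .batch(4)
    # training=True keeps the random ops active: a new transform is drawn every
    # time a batch is produced, so each epoch yields different variants.
    .map(lambda x, y: (augment(x, training=True), y),
         num_parallel_calls=tf.data.AUTOTUNE)
    .prefetch(tf.data.AUTOTUNE)
)

for x, y in ds.take(1):
    print(x.shape, y.shape)
```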
|
Data augmentation in deep learning
|
- 1 and 2: If you rescale your images, you should do it on all partitions: training, validation and test. If you only rescale your images on the training set, then your network will see very different values (0~255 vs 0.0~1.0) on the validation/test set and therefore give poor accuracy. That's your case 2.
- I don't see any obvious problem.
|
112786
|
1
|
112787
| null |
0
|
51
|
I am comparing the classification accuracy of Naive Bayes (NBC), SVM and a neural network. I am using a dataset of ~18K samples and 26 labels.
In the current state the neural network always gets an accuracy of >80%, but the NBC and SVM fluctuate between 15% and 80%. They mostly end up near one of the two extrema.
The only difference between runs is the splitting of the data into training/testing with the model_selection.train_test_split() function of sklearn.
For the implementation of the classifiers I am also using the classes and functions of sklearn.
[](https://i.stack.imgur.com/2qywY.jpg)
[](https://i.stack.imgur.com/xGq8W.jpg)
I highly suspect the problem is in my data, but I am already doing basic preprocessing with stop-word removal, lowercasing, etc.
|
Fluctuating accuracy for Naive Bayes Classifier and SVM
|
CC BY-SA 4.0
| null |
2022-07-19T08:56:55.837
|
2022-07-19T09:04:21.713
| null | null |
138281
|
[
"machine-learning",
"classification",
"svm",
"accuracy",
"naive-bayes-classifier"
] |
I recommend using the `stratify` argument of the `train_test_split` function in order to have a good distribution of the classes (this will avoid the case where a class, e.g. class number 25, has a support of 0).
Finally, if you see a big variance depending on the split, cross-validation is worth using.
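A minimal sketch of both suggestions (a stratified split plus cross-validation), with a synthetic stand-in for the vectorized text and 26 labels:
```
import numpy as np
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.naive_bayes import MultinomialNB

rng = np.random.default_rng(0)
X = rng.random((18000, 300))            # stand-in for vectorized text features
y = rng.integers(0, 26, size=18000)     # 26 classes

# stratify=y keeps the class proportions identical in train and test
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# cross-validation shows how much the accuracy really fluctuates across splits
scores = cross_val_score(MultinomialNB(), X, y, cv=5)
print(scores.mean(), scores.std())
```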
|
Suspiciously low False Positive rate with Naive Bayes Classifier?
|
- I find the easiest way for people to understand this is to think of the confusion matrix. Accuracy score is just one measure of a confusion matrix, namely all the correct classifications over all the prediction data at large:
$$\frac{\text{True Positives} + \text{True Negatives}}{\text{True Positives} + \text{True Negatives} + \text{False Positives} + \text{False Negatives}}$$
Your False Negative Rate is calculated by:
$$\frac{\text{False Negatives}}{\text{False Negatives} + \text{True Positives}}$$
One model may turn out to have a worse accuracy, but a better False Negative Rate. For example, your model with worse accuracy may in fact have many False Positives but few False Negatives, leading to a lower False Negative Rate. You need to choose the model which produces the most value for your specific use case.
- Why do some classifiers perform poorly? While an experienced practitioner might surmise what could be a good modeling approach for a dataset, the truth is that for all datasets, there is no free lunch... also known as "The Lack of A Priori Distinctions Between Learning Algorithms". You don't know ahead of time if the best approach will be deep learning, gradient boosting, linear approaches, or any other number of models you could build.
|
112791
|
1
|
112794
| null |
0
|
91
|
I am currently trying to build a text classifier and I am experimenting with different settings. Specifically, I am extracting my features with a `CountVectorizer` and `HashingVectorizer`:
```
from sklearn.feature_extraction.text import CountVectorizer, HashingVectorizer
# Using the count vectorizer.
count_vectorizer = CountVectorizer(lowercase=False, ngram_range=(1, 2))
X_train_count_vectorizer = count_vectorizer.fit_transform(X_train['text_combined'])
X_dev_count_vectorizer = count_vectorizer.transform(X_dev['text_combined'])
# Using the hash vectorizer.
hash_vectorizer = HashingVectorizer(n_features=2**16,lowercase=True, ngram_range=(1, 2))
X_train_hash_vectorizer = hash_vectorizer.fit_transform(X_train['text_combined'])
X_dev_hash_vectorizer = hash_vectorizer.transform(X_dev['text_combined'])
```
Then I am using a LinearSVC classifier
```
from sklearn.svm import LinearSVC
# Testing with CountVectorizer.
clf_count = LinearSVC(random_state=0)
clf_count.fit(X_train_count_vectorizer, y_train)
y_pred = clf_count.predict(X_dev_count_vectorizer)
accuracy_score(y_dev, y_pred)
# Testing with HasingVectorizer.
clf_count = LinearSVC(random_state=0)
clf_count.fit(X_train_hash_vectorizer, y_train)
y_pred = clf_count.predict(X_dev_hash_vectorizer)
accuracy_score(y_dev, y_pred)
```
I obtained the following results:
| | Time to train | Accuracy |
|---|---|---|
| CountVectorizer | 59.9 seconds | 83.97% |
| HashingVectorizer | 6.21 seconds | 84.92% |
Please note that even when limiting the number of features of the CountVectorizer to 2**18, I still get slower training and inferior results.
My questions:
- Why is training with CountVectorizer slower even for a similar number of features?
- What could explain the performance gain in terms of training time?
- Any intuition on the reasons behind the accuracy gain?
For my particular case, I have also tried a `TfidfVectorizer`, and the CountVectorizer worked a bit better. Given that the HashingVectorizer has such significant advantages in certain cases, I am wondering why its usage is not more widely introduced in NLP tutorials.
|
LinearSVC training time with CountVectorizer and HashingVectorizer
|
CC BY-SA 4.0
| null |
2022-07-19T10:52:58.340
|
2022-07-19T12:44:04.373
|
2022-07-19T11:01:12.070
|
39236
|
39236
|
[
"nlp",
"scikit-learn",
"text-classification"
] |
Your `lowercase` setting is different for `CountVectorizer` and `HashingVectorizer`. It might have an impact.
Otherwise, they do a very similar job in this case; the accuracy difference varies with the exact task but is not that huge. The disparate training speeds you observe are not related to the method itself (the feature matrix sizes are comparable); it's just that `HashingVectorizer` normalizes the results by default, which is usually beneficial for SVC and results in far fewer iterations (check `clf_count.n_iter_`). Applying `sklearn.preprocessing.Normalizer()` to the `CountVectorizer` results will likely make it fit equally fast.
`HashingVectorizer` is still faster and more memory efficient when doing the initial transform, which is nice for huge datasets. The main limitation is that its transform is not invertible, which drastically limits the interpretability of your model (and even makes it plain unfit for many other NLP tasks).
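A minimal sketch of that normalization check (using the public 20 newsgroups corpus as a stand-in dataset, downloaded on first use): L2-normalizing the `CountVectorizer` output, as `HashingVectorizer` does by default, should let `LinearSVC` converge in far fewer iterations.
```
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import Normalizer
from sklearn.svm import LinearSVC

texts, labels = fetch_20newsgroups(subset="train", return_X_y=True)

X_counts = CountVectorizer(ngram_range=(1, 2)).fit_transform(texts)
X_normed = Normalizer(norm="l2").fit_transform(X_counts)

clf_raw = LinearSVC(random_state=0).fit(X_counts, labels)
clf_normed = LinearSVC(random_state=0).fit(X_normed, labels)
print(clf_raw.n_iter_, clf_normed.n_iter_)   # compare the number of iterations
```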
|
How to interpret Hashingvectorizer representation?
|
Basic Background
- Imagine the process of a count vectorizer: you first create a vocabulary which maps each word (or n-gram) to an integer index (its index in the document-term matrix). Then, for each document, you count the number of times a word appears and set that value at the appropriate index to build the vector representation for the document.
- This can potentially create a very large number of features since each n-gram/token is one feature.
- Even if you want to limit the total number of features by using some trick like top-N words by occurrence, you still need to calculate and hold in memory the map of all word-counts. This can be potentially prohibitive in some applications.
- A similar problem happens for TF-IDF, where you additionally store the mapping of word to document occurrence for calculating the IDF part.
- Either way, you are doing multiple passes over the data and/or potentially large amount of memory consumption.
- The problem is also with bounds or predictability: you do not know the potential memory usage upfront in first phase.
- Hashing vectorizer can build document representation for all documents in one single pass over the data and still keep memory bounded (not necessarily small, the size depends on size of hash-table).
- In a single pass, you calculate hash of a token. Based on the hash value, you increment the count of particular index in the hash-table (the array underlying the hash table implementation). You get representation of current document without looking at every other document in the corpus.
- This gives rise to a problem with representation accuracy: two different tokens may have a hash collision.
So you are in effect trading [representation accuracy and explanatory power] Vs. [space (bounded predictable memory usage) and time (no multiple passes on the data)].
Answers to your specific questions
- There is a (sort of) correlation between input words and features: through the hash function. But this correlation is potentially defective (hash collisions) and there's no inverse transformation (you can't say what word is represented by feature number 207).
- There's no fit and transform. For a fixed hash-function, no dataset specific learning is happening (ala word2vec).
- There's no semantic interpretation of the distance. Two words semantically similar words may not be close to each other in the representation. As long as two almost (syntactically, based on tokens) similar documents are close enough, it will work on text classification.
Why Would It Work?
Given these information, you are right in being skeptical: why on earth this should work? The answer is empirical: a randomized representation like hashing works reasonably well in practice (the benefits from exact count based representation are not that great). There might be some theoretical explanation too but I don't know it enough. If curious, you can probably read up [this paper](https://arxiv.org/pdf/0902.2206.pdf).
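As a concrete illustration of the single-pass counting described above, here is a toy sketch of the hashing trick (not the exact scikit-learn implementation, which additionally uses a signed hash to reduce collision bias):
```
import numpy as np

def hashing_vectorize(tokens, n_features=16):
    vec = np.zeros(n_features)
    for tok in tokens:
        vec[hash(tok) % n_features] += 1   # no vocabulary needed, memory stays bounded
    return vec

doc = "the cat sat on the mat".split()
print(hashing_vectorize(doc))
```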
|
112816
|
1
|
112838
| null |
0
|
28
|
What is the effect of the number of tokens a model is trained on, e.g. if model A has 1B tokens and the other model has 12B tokens? Will that have an effect on performance?
|
What is the effect of the tokens?
|
CC BY-SA 4.0
| null |
2022-07-20T01:53:09.080
|
2022-07-20T22:39:56.727
|
2022-07-20T22:16:07.033
|
29169
|
122972
|
[
"nlp",
"tokenization"
] |
The question is not precise enough; it depends on other factors. In general, a larger training set tends to lead to a better model. However, it depends on whether the training set is really relevant and useful for the task. For example:
- if the larger dataset contains data from a different domain than the target task, the additional data might be useless
- if the data contains a lot of errors or noise, it might cause the model to perform worse
- if the larger data contains mostly duplicates, it's likely not to perform better.
So larger data is good for performance only if the additional data is actually of good quality.
|
What is purpose of the [CLS] token and why is its encoding output important?
|
CLS stands for classification, and it's there to represent sentence-level classification.
In short, this token was introduced to make BERT's pooling scheme work. I suggest reading up on this [blog](https://datasciencetoday.net/index.php/en-us/nlp/211-paper-dissected-bert-pre-training-of-deep-bidirectional-transformers-for-language-understanding-explained) where this is also covered in detail.
|
112820
|
1
|
112822
| null |
-1
|
102
|
I have the following data frame in Pandas:
```
ID rank feature
1 1 3
1 2 6
1 3 8
1 4 6
2 1 2
2 2 9
3 1 0
3 2 3
3 3 1
4 1 3
4 2 9
4 3 0
4 4 5
4 5 1
5 1 2
5 2 4
5 3 0
5 4 8
```
and I would like to delete all rows whose `ID` value does not occur exactly 4 times. For example, `ID` 1 occurs 4 times, `ID` 2 occurs 2 times, `ID` 3 occurs 3 times, `ID` 4 occurs 5 times and `ID` 5 occurs 4 times. So I would like to delete the rows with `ID` = 2, 3, 4, and the output should look like:
```
ID rank feature
1 1 3
1 2 6
1 3 8
1 4 6
5 1 2
5 2 4
5 3 0
5 4 8
```
Is there any computationally efficient way to do that? Thank you so much.
|
Removing rows with total number of ID occurred not equal to a specific number in Pandas Python
|
CC BY-SA 4.0
| null |
2022-07-20T08:37:43.603
|
2022-07-20T08:47:29.620
| null | null |
138321
|
[
"pandas",
"dataframe",
"python-3.x"
] |
You can use `groupby` and `transform` to calculate the number of occurrences of each ID and then use simple filtering to get the result you're looking for:
```
import pandas as pd
df = pd.DataFrame({
"ID" : [1, 1, 1, 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 4, 5, 5, 5, 5],
"rank": [1, 2, 3, 4, 1, 2, 1, 2, 3, 1, 2, 3, 4, 5, 1, 2, 3, 4],
"feature": [3, 6, 8, 6, 2, 9, 0, 3, 1, 3, 9, 0, 5, 1, 2, 4, 0, 8]
})
(
df
# count number of occurences and select only those rows whose ID is present 4 times
.loc[lambda x: x.groupby("ID")["ID"].transform("count") == 4]
)
```
Which returns:
```
ID rank feature
1 1 3
1 2 6
1 3 8
1 4 6
5 1 2
5 2 4
5 3 0
5 4 8
```
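For reference, the same rows can also be kept with `groupby().filter`, reusing the `df` defined above; it selects only the groups whose size is exactly 4:
```
df.groupby("ID").filter(lambda g: len(g) == 4)
```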
|
Removing duplicates and keeping the last entry in pandas
|
You can see from [the documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.drop_duplicates.html) of the method that you can change the `keep` argument to be `"last"`.
In your case, as you only want to consider the values in one of your columns (`datestamp`), you must specify this in the `subset` argument. You had tried passing all column names, which is actually the default behaviour. Now we can use this (along with the correct value for the `keep` argument) to get this:
For example, a dataframe with duplicates:
```
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'datestamp': ['A0', 'A0', 'A2', 'A2'],
'B': ['B0', 'B1', 'B2', 'B3'],
'C': ['B0', 'B1', 'B2', 'B3'],
'D': ['D0', 'D1', 'D2', 'D3']},
 index=[0, 1, 2, 3])
In [3]: df
Out[3]:
datestamp B C D
0 A0 B0 B0 D0
1 A0 B1 B1 D1
2 A2 B2 B2 D2
3 A2 B3 B3 D3
```
Now we drop duplicates, passing the correct arguments:
```
In [4]: df.drop_duplicates(subset="datestamp", keep="last")
Out[4]:
datestamp B C D
1 A0 B1 B1 D1
3 A2 B3 B3 D3
```
By comparing the values across rows 0-to-1 as well as 2-to-3, you can see that only the last values within the `datestamp` column were kept.
|
112823
|
1
|
112856
| null |
1
|
91
|
I have two data sets, containing points geometry (`X,Y`) and a recorded car exhaust parameter (let's say, `RP` value), of an area of interest (AOI). The datasets are spatially different, that is, the first data set is along side walk (X1, Y1, RP1) and the second data set (X2,Y2, RP2) is on the road center line (line split into equidistant 2 meters points).
The distance between the data along the side walk and the one on road center line is varying, at some locations, it is 3 - 6 meters and at some locations it is > 6 meters (let say, 6 - 20 meters range). This is due to the fact that this distance reflects varying road widths, lengths in a realistic, complex city landscape.
With the above data in hand, I want to fuse both data sets, considering the data along the side walk "more reliable" (thus higher weightage?), and compare the fused output with the reference data at limited locations in the AOI, to evaluate the data-fusion performance.
What is the best machine learning/data science technique to achieve the above? I am open to exploring several (or the "best candidate") technique(s) in Python, R, Matlab, for example. The focus for me is on the data fusion technique.
P.S. It is also possible to obtain information on road widths, lengths, whether a building is present or not, etc., if it is deemed "suitable" to include in the data processing.
|
What is the best machine learning technique to fuse two spatial data sets?
|
CC BY-SA 4.0
| null |
2022-07-20T08:57:16.450
|
2022-07-21T13:27:21.110
|
2022-07-20T08:58:38.390
|
138319
|
138319
|
[
"machine-learning",
"deep-learning",
"neural-network",
"data-science-model"
] |
In terms of navigation, one of the most reliable algorithms is the [Kalman Filter](http://www.bzarg.com/p/how-a-kalman-filter-works-in-pictures/) because it predicts directions according to previous points.
In your case, with two point measurements at each record, a Kalman Filter would identify which one is closest to the predicted value, without being thrown off by outliers.
If you apply the Kalman Filter correctly, you would have a 3rd positioning value that would rectify the two others, and help you identify which one is the most reliable.
There are several libraries to achieve this:
- PyKalman
- FilterPy including experiments
- Simdkalman
Be careful to set a good noise reduction value: too much noise reduction would make trajectories too precise and subject to outlier errors, while too little noise reduction would make trajectories too blurred.
A good noise reduction value should be closer to the natural trajectory, which is smooth and clear.
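A rough sketch (assuming the FilterPy package listed above) of a constant-velocity Kalman filter that fuses the two point measurements per record, trusting the side-walk points more via a smaller measurement noise; all values are illustrative:
```
import numpy as np
from filterpy.kalman import KalmanFilter

dt = 1.0
kf = KalmanFilter(dim_x=4, dim_z=2)        # state: [x, y, vx, vy], measurement: [x, y]
kf.F = np.array([[1, 0, dt, 0],
                 [0, 1, 0, dt],
                 [0, 0, 1, 0],
                 [0, 0, 0, 1]])
kf.H = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0]])
kf.P *= 10.0                                # initial state uncertainty
kf.Q *= 0.01                                # process noise

R_sidewalk = np.eye(2) * 1.0                # "more reliable" -> lower measurement noise
R_centerline = np.eye(2) * 9.0

sidewalk_pts = [(0.0, 0.0), (1.1, 0.9), (2.0, 2.1)]        # stand-in coordinates
centerline_pts = [(0.4, -0.3), (1.5, 0.6), (2.6, 1.8)]

fused = []
for z_side, z_center in zip(sidewalk_pts, centerline_pts):
    kf.predict()
    kf.update(np.array(z_side), R=R_sidewalk)       # side-walk point weighted more
    kf.update(np.array(z_center), R=R_centerline)   # then fold in the center-line point
    fused.append(kf.x[:2].copy())

print(fused)
```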
|
How to train a machine learning model if there is a relationship between two different data points?
|
This is a very broad question and really depends on the type of data you have and how it is distributed.
There are many types of classifiers that you can use.
[This link](http://scikit-learn.org/stable/auto_examples/classification/plot_classifier_comparison.html), from scikit-learn shows you a comparison between many algorithms that could help you choose which one to pick.
I understand you want to write the algorithm yourself, but I would recommend you looking at [scikit-learn.org](http://scikit-learn.org) and trying out different algorithms. Once you have tried and wanted to implement yourself, you can have a look at blogs, [like this one](https://skratch.valentincalomme.com/learning-units/supervised/k-nearest-neighbours/), which explains in details the intrinsic works of models, like k-nearest-neighbours for example.
|
112826
|
1
|
112832
| null |
0
|
284
|
I am reading a [paper](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3757135) where the authors assess online public sentiment in China in response to the government's policies during Covid-19, using a Chinese BERT model. The authors' objective is not only to learn whether a given online post is critical or supportive, but also to whom each post was directed (e.g. CCP, local governments, health ministry, etc.). To achieve this, the authors further state on pages 8 through 9 that, "To train the classifier, we randomly sample approximately 5,000
posts from each dataset (10,541 posts in total), stratified by post creation data. This sample is used for a number of analyses, and we refer to it as the Hand-Annotated Sample."
My question here is what's the value of using human-annotated posts in combination with a BERT sentiment analysis model?
Specifically, my understanding of BERT as a technique is that it eliminates or at least minimizes the need for pre-labelling a sample of text for sentiment analysis purposes, and it's not clear to me why we still need hand-annotated text by humans even when using BERT.
|
Limitations of NLP BERT model for sentiment analysis
|
CC BY-SA 4.0
| null |
2022-07-20T13:06:43.943
|
2022-07-21T09:36:40.070
|
2022-07-20T13:27:51.140
|
137378
|
137378
|
[
"machine-learning",
"nlp",
"bert",
"text-classification"
] |
BERT is pre-trained on two generic tasks: masked language modeling and next sentence prediction. Therefore, those tasks are the only things it can do. If you want to use it for any other thing, it needs to be fine-tuned on the specific task you want it to do, and, therefore, you need training data, either coming from human annotations or from any other source you deem appropriate.
The point of fine-tuning BERT instead of training a model from scratch is that the final performance is probably going to be better with BERT. This is because the weights learned during the pre-training of BERT serve as a good starting point for the model to accomplish typical downstream NLP tasks like sentiment classification.
In the article that you referenced, the authors describe that they fine-tune [a Chinese BERT model](https://huggingface.co/hfl/chinese-bert-wwm-ext) on their human-annotated data multiple times separately:
- To classify whether a Weibo post refers to COVID-19 or not.
- To classify whether posts contained criticism or support.
- To identify posts containing criticism directed at the government or not.
- To identify posts containing support directed at the government or not.
Fine-tuning BERT usually gives better results than just training a model from scratch because BERT was trained on a very large dataset. This makes the internal text representations computed by BERT more robust to infrequent text patterns that would be hardly present in a smaller training set. Also, dictionary-based sentiment analysis tends to give worse results than fine-tuning BERT because a dictionary-based approach would hardly grasp the nuances of language, where not only does a "not" change the whole meaning of a sentence, but any grammatical construction can give subtle meaning changes.
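A minimal fine-tuning sketch, assuming the Hugging Face transformers/datasets packages and a small hand-annotated CSV with "text" and "label" columns (the file name and hyperparameters are illustrative, not the paper's exact setup):
```
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "hfl/chinese-bert-wwm-ext"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

data = load_dataset("csv", data_files={"train": "annotated_posts.csv"})["train"]
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     padding="max_length", max_length=128),
                batched=True)
data = data.train_test_split(test_size=0.1)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
)
trainer.train()
```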
|
Bert model for document sentiment classification
|
Yes, it's perfectly fine to fine-tune BERT on sequences comprised of more than one sentence, and the standard way of using BERT for text classification is with the ouput vector at the first position.
However, take into account that the maximum length of BERT's input sequences is 512 tokens, so your documents should be short enough to fit in that.
|
112829
|
1
|
112831
| null |
0
|
24
|
Assuming that I have two features, `x` and `y` for an MLP model. I know that depending on the model, the multiplication of features can yield a better feature. For example, if `x` and `y` are the dimension of a rectangle, then the multiplication will give the area.
Assuming that `x` and `y` are the area of a room and a kitchen. `x+y` will be the total area of the apartment.
Is it recommended to create a new feature by adding features together for Machine Learning models?
|
Will summing features improve the Machine Learning models?
|
CC BY-SA 4.0
| null |
2022-07-20T14:40:28.630
|
2022-07-20T15:00:41.817
| null | null |
118557
|
[
"neural-network",
"feature-selection",
"mlp"
] |
Depending on the type of model you are using, the model might be able to create these types of new features by itself, but if you have this added domain knowledge it is definitely recommended to create them yourself. For more info on this topic, have a look at [feature engineering](https://en.wikipedia.org/wiki/Feature_engineering).
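A small illustrative sketch (the column names are made up) of adding such a summed feature based on domain knowledge before fitting a model:
```
import pandas as pd

df = pd.DataFrame({
    "room_area":    [20.0, 15.5, 30.2],
    "kitchen_area": [8.0, 10.0, 12.5],
})
# Domain knowledge: the total apartment area is the sum of the two areas.
df["total_area"] = df["room_area"] + df["kitchen_area"]
print(df)
```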
|
Adding extra (meaningful) features does not improve model performance
|
>
I tried with different features: in one dataset with less features engineering, i.e., using only features from Text, I got a maximum value of F1-score equal to 68%. With more features, that I thought to be significant for improving the model, I am getting max 64%, that is weird considering the problem (email classification for spam detection).
Typically this would happen if the model is overfit: not enough data and/or too many features make the model pick patterns which happen by chance in the training data.
Usually with text one has to remove the least frequent words in order to avoid overfitting. You might also want to check the additional features, remove anything which happens too rarely.
>
Also, the confusion matrix, has given me weird outputs
```
0 1
0 [[2036 161]
1 [ 1 2196]]
```
Observations:
- True class 0 has 2036+161 = 2197 instances, true class 1 has 1+2196=2197 instances. These results are obtained with the resampled data.
- Assuming class 1 is positive: 2196 True Positives (TP), 2036 TN, 161 FP (true negatives predicted as positive) and 1 FN (a true positive predicted as negative).
recall = 0.999, precision = 0.932. That's an f1-score somewhere higher than 0.95 (probably due to the resampled data).
- The second confusion matrix is also clearly obtained with the resampled data, and it shows perfect performance (F1-score is 1).
These matrices show the performance obtained on the resampled data, so it's similar to the performance on the training data. Since the performance on a real test set is much lower, this confirms strong overfitting.
|
112840
|
1
|
112846
| null |
2
|
675
|
I'm in the middle of learning about Transformer layers, and I feel like I've got enough of the general idea behind them to be dangerous. I'm designing a neural network and my team would like to include them, but we're unsure how to proceed with the encoded sequences and what the right way to plug them into the next layer of the model would be. We would like to process them such that we can plug the encoded sequence into an FC layer immediately after the Transformer Encoder.
If we just use a batch size of 1, for the sake of argument, our encoded sequence output after being processed by the Transformer Encoder has shape (L, E), where L is the input sequence length and E is the embedding dimension. I've seen some vague descriptions of using max/avg/conv1d pooling on the encoded sequence, but nothing super clear about what that means. If I'm following this correctly, would I apply the max/avg/conv1d pooling such that the pooling result gives me a resulting vector with shape (E,), or would I pool along the other dimension?
|
What to do with Transformer Encoder output?
|
CC BY-SA 4.0
| null |
2022-07-21T02:21:34.780
|
2022-07-25T07:40:28.763
| null | null |
138349
|
[
"neural-network",
"transformer",
"encoder",
"pooling"
] |
The typical approach for this is to follow [BERT](https://arxiv.org/abs/1810.04805): add an extra special token at the beginning of the input sequence (in BERT it is `[CLS]`) and only use the output of the network at that position as input to your fully connected layer. The output at the rest of the positions is ignored.
You can see a nice illustration of this approach in the well-known blog post [The Illustrated BERT](https://jalammar.github.io/illustrated-bert/), which explains very visually all the details about BERT:
[](https://i.stack.imgur.com/xx0CX.png)
In the illustration, you can see the model input at the bottom and how it has been added a special `[CLS]` token at the beginning and then the output of the model at that position is then used for a classification task.
During training, the model will learn to condense the needed information from the whole sentence into the output of the first position.
Another alternative, as you pointed out, is to have global average pooling over all the outputs. This was the norm in the LSTM times before Transformers came. I am not aware of any articles comparing the performance of both approaches but, nowadays, with Transformers, everybody uses the BERT approach.
Both BERT's approach and the global average/max pooling approach achieve your goal: collapsing the variable length sequence of vectors into a single vector that you can then use for classification.
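A minimal PyTorch sketch of both options, taking the output at a prepended [CLS]-style position versus mean pooling over all positions (the dimensions and layer counts are illustrative):
```
import torch
import torch.nn as nn

L, E, n_classes = 20, 64, 5                      # sequence length, embedding dim, classes
encoder_layer = nn.TransformerEncoderLayer(d_model=E, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
fc = nn.Linear(E, n_classes)

x = torch.randn(1, L, E)                         # batch size 1, already embedded
cls = nn.Parameter(torch.zeros(1, 1, E))         # learnable [CLS]-style token
x = torch.cat([cls.expand(x.size(0), -1, -1), x], dim=1)   # (1, L+1, E)

out = encoder(x)                                 # (1, L+1, E)

logits_cls = fc(out[:, 0, :])                    # option 1: use only the first position
logits_avg = fc(out[:, 1:, :].mean(dim=1))       # option 2: global average pooling
print(logits_cls.shape, logits_avg.shape)        # both (1, n_classes)
```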
|
Transformer decoder output - how is it linear?
|
>
I'm not quite sure how's the decoder output is flattened into a single vector
That's the thing. It isn't flattened into a single vector. The linear transformation is applied to all $M$ vectors in the sequence individually. These vectors have a fixed dimension, which is why it works.
|
112850
|
1
|
112854
| null |
0
|
910
|
I have a pandas dataframe `df` that looks like this:
```
col1 col2 col3
A X 1
B Y 2
C Z 3
```
I want to convert this into a dictionary with `col1` and `col2` as a tuple key and col3 as value. So, the output would look like this:
```
{
('A', 'X'): 1,
('B', 'Y'): 2,
('C', 'Z'): 3
}
```
How do I get my desired output?
|
Export pandas dataframe to dictionary as tuple keys and value
|
CC BY-SA 4.0
| null |
2022-07-21T08:50:41.997
|
2022-07-21T13:08:01.220
| null | null |
138357
|
[
"python",
"pandas",
"data-wrangling"
] |
Set `col1` and `col2` as the index and convert the remaining `col3` column to a dictionary; the resulting MultiIndex entries become the tuple keys:
```
df.set_index(['col1', 'col2'])['col3'].to_dict()
```
|
Export pandas to dictionary by combining multiple row values
|
Does this do what you want it to?
```
from pandas import DataFrame
df = DataFrame([['A', 123, 1], ['B', 345, 5], ['C', 712, 4], ['B', 768, 2], ['A', 318, 9], ['C', 178, 6], ['A', 321, 3]], columns=['name', 'value1', 'value2'])
d = {}
for i in df['name'].unique():
d[i] = [{df['value1'][j]: df['value2'][j]} for j in df[df['name']==i].index]
```
This returns
```
Out[89]:
{'A': [{123: 1}, {318: 9}, {321: 3}],
'B': [{345: 5}, {768: 2}],
'C': [{712: 4}, {178: 6}]}
```
|
112877
|
1
|
112893
| null |
0
|
34
|
The title says it all. I was researching this question but couldn't find something useful. What is the difference between adding words to a tokenizer and training a tokenizer?
|
What is the difference between adding words to a tokenizer and training a tokenizer?
|
CC BY-SA 4.0
| null |
2022-07-22T12:38:24.550
|
2022-07-23T08:36:51.977
| null | null |
133184
|
[
"deep-learning",
"nlp",
"tokenization"
] |
First, a clarification: tokenizers receive text and return tokens. These tokens may be words or not. Some tokenizers, for instance, return word pieces (i.e. subwords). This way, a single word may lead to multiple tokens (e.g. "magnificently" --> ["magn", "ific", "ently"]). Some examples of subword tokenizers are [Byte-Pair Encoding (BPE)](https://huggingface.co/course/chapter6/5?fw=pt) and [Unigram](https://huggingface.co/course/chapter6/7?fw=pt). Therefore, adding a "word" to a tokenizer may not make sense for a subword-level tokenizer; instead, I will refer to it as "adding a token".
Some simple tokenizers rely on pre-existing boundaries between tokens. For instance, it is very common to tokenize by relying on the white space between words (after a previous mild pre-processing to separate punctuation).
Depending on the complexity of separating tokens from the text, the tokenization process can range from just a lookup in a table to a complex computation of probabilities.
For simple tokenizers that only consist of a lookup table, adding a token to it is simple: you just add an entry to the table.
For more complex tokenizers, you need a training process that learns the needed information to later tokenize. In those cases, adding a token is simply not possible, because the information stored in the tokenizer is richer, not just a table with entries.
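A minimal sketch of the two situations, assuming the Hugging Face `transformers` and `tokenizers` packages (the model names and the corpus file are illustrative):
```
# 1) Adding a token to an existing tokenizer: just a new entry in its lookup table.
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.add_tokens(["magnificently"])           # new entry in the table
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.resize_token_embeddings(len(tokenizer))     # the model needs a new embedding row

# 2) Training a subword tokenizer: the vocabulary itself is learned from a corpus.
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

bpe = Tokenizer(models.BPE(unk_token="[UNK]"))
bpe.pre_tokenizer = pre_tokenizers.Whitespace()
trainer = trainers.BpeTrainer(vocab_size=8000, special_tokens=["[UNK]"])
bpe.train(files=["my_corpus.txt"], trainer=trainer)   # hypothetical corpus file
```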
|
NLP: what are the advantages of using a subword tokenizer as opposed to the standard word tokenizer?
|
Subword tokenization is the norm nowadays in NLP models because:
- It mostly avoids the out-of-vocabulary (OOV) word problem. Word vocabularies cannot handle words that are not in the training data. This is a problem for morphologically-rich languages, proper nouns, etc. Subword vocabularies allow representing these words. By having subword tokens (and ensuring the individual characters are part of the subword vocabulary), makes it possible to encode words that were not even in the training data. There's still the problem with characters not present in the training data, but that's tolerable in most of the cases.
- It gives manageable vocabulary sizes. Current neural networks need a pre-defined closed discrete token vocabulary. The vocabulary size that a neural network can handle is far smaller than the number of different words (surface forms) in most normal languages, especially morphologically-rich ones (and especially agglutinative ones).
- Mitigates data sparsity. In a word-based vocabulary, low-frequency words may appear very few times in the training data. This is especially troublesome for agglutinative languages, where a surface form may be the result of concatenating multiple affixes. Using subword tokenization allows token reusing, and increases the frequency of their appearance.
- Neural networks perform very well with them. In all sorts of tasks, they excel: neural machine translation, NER, etc, you name it, the state of the art models are subword-based: BERT, GPT-3, Electra,...
|
112891
|
1
|
112892
| null |
0
|
100
|
When I convert my multilingual transformer model to a monolingual one (I kept my language's embeddings from the multilingual transformer, deleted the other embeddings, and decreased the dimensions of the embedding layers), the loss is much lower, but I don't understand why. What can be the reason for that?
|
Smaller embedding size causes lower loss
|
CC BY-SA 4.0
| null |
2022-07-23T07:30:15.043
|
2022-07-25T14:05:01.833
|
2022-07-25T14:05:01.833
|
43000
|
133184
|
[
"deep-learning",
"nlp",
"transformer",
"tokenization"
] |
### New Answer
The loss of a text generation task like question generation is normally the average categorical cross-entropy of the output at every time step.
Drastically reducing the number of tokens means that the number of classes of the output probability distribution is greatly reduced.
The value of cross-entropy depends on the number of classes. Having more classes means that the output distribution must cover more options and it is more difficult to assign more probability to the ground truth class (i.e. the correct token).
Therefore, it is to be expected that, if you drastically reduce the number of tokens, the value of the loss is lower.
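A small numeric illustration of this point (the vocabulary sizes are made up): even a "no-knowledge" uniform prediction has a lower cross-entropy when there are fewer classes, so losses are not comparable across vocabulary sizes.
```
import math

for vocab_size in (250_000, 30_000):
    uniform_ce = math.log(vocab_size)   # cross-entropy of a uniform prediction, in nats
    print(vocab_size, round(uniform_ce, 2))
# 250000 -> 12.43, 30000 -> 10.31
```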
### Old answer
From your description, I understand that:
- What you had was a Transformer trained on multilingual data with word-level tokens (because if you had subword-level tokens like BPE or unigram then you would not be able to filter by language from the token list so easily).
- What you did was:
  - Remove the entries associated with words in other languages from the token list.
  - Reduce the embedding size.
  - Retrain your model on the data of a single language pair.
With those assumptions:
When you "converted your model from multilingual to single lingual", you simplified the task enormously. It seems that the gain in the simplicity of the task surpassed the loss of capacity of the model caused by the reduction of the embedding size.
|
Why are bigger embedding vectors not necessarily better?
|
You can think of phenomena close to the curse of dimensionality.
Embedding words in a high dimension space requires more data to enforce density and significance of the representation.
A good embedding space (when aiming unsupervised semantic learning) is characterized by orthogonal projections of unrelated words and near directions of related ones. For neural models like word2vec, the optimization problem (maximizing the log-likelihood of conditional probabilities of words) might become hard to compute and converge in high dimensional spaces.
You’ll often have to find the right balance between data amount/variety and representation space size.
|
112900
|
1
|
112902
| null |
0
|
28
|
I have been thinking about the problem of "predicting" damages awarded in legal cases. For specificity, let us be given a dataset of summaries of cases of a certain flavour (say discrimination cases) that have been binned into a fixed number of "bands" by ranges of damages awarded. Then, is it possible to train a custom model that reads the facts of a case as reported by an aggrieved party and predicts which bin it would fall into should the plaintiff win? My first thought is unsupervised text clustering via NLP. Is there something more efficient that can be used here?
|
How can I implement classification for this problem?
|
CC BY-SA 4.0
| null |
2022-07-23T12:58:34.537
|
2022-07-23T13:09:36.980
| null | null |
138434
|
[
"nlp",
"clustering",
"predictive-modeling",
"prediction"
] |
If I understand the problem correctly, the input dataset consists of 2 columns:
Column A - previous case summary, Column B - the range/bin of damages awarded.
And you want to map a new, unseen case summary to one of the existing Column B ranges/bins based on its similarity to the most similar Column A case summary.
I recently worked on a similar problem, where instead of case summary, I had fields/labels mapped to their description and I wanted to map a new/unseen field to one of the given descriptions.
[Mapping of an unseen Field/word to an existing description (in the input data), given Field and their respective descriptions as input/training data](https://datascience.stackexchange.com/questions/112095/mapping-of-an-unseen-field-word-to-an-existing-description-in-the-input-data)
My approach was to compute BERT embeddings, then compute cosine similarity on the fields/labels and, based on the similarity to one of the existing fields, take its description.
This could be one of the approaches.
Let me know, if you need the sample code. Happy to help.
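A rough sketch of this embedding plus cosine-similarity idea, assuming the sentence-transformers package (the model name and toy data are illustrative):
```
from sentence_transformers import SentenceTransformer, util

case_summaries = [
    "Employee dismissed after reporting harassment ...",
    "Minor workplace injury due to missing signage ...",
]
damage_bins = ["50k-100k", "0-10k"]              # Column B, aligned with the summaries

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus_emb = model.encode(case_summaries, convert_to_tensor=True)

new_case = "Worker fired for raising a discrimination complaint ..."
new_emb = model.encode(new_case, convert_to_tensor=True)

scores = util.cos_sim(new_emb, corpus_emb)[0]    # similarity to every known summary
best = int(scores.argmax())
print(damage_bins[best], float(scores[best]))
```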
|
Looking for other opinions on approach to classification problem
|
You should use text classification techniques. The most basic one is multinomial naive Bayes classifier with tf-idf features. for this method, take a look at this:
[https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html](https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html)
If you don’t get enough accuracy (or maybe precision, recall or f-score), you could test more complex techniques e.g. using deep LSTM networks with word embedding. For this method, take a look at this:
[https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/](https://machinelearningmastery.com/use-word-embedding-layers-deep-learning-keras/)
|
112918
|
1
|
112920
| null |
1
|
2701
|
I have successfully loaded my data into DataLoader with the code below:
```
train_loader = torch.utils.data.DataLoader(train_dataset, 32, shuffle=True)
```
I am trying to display a multiple images using the code below:
```
examples = next(iter(train_loader))
for label, img in enumerate(examples):
print(img.shape) # [32, 3, 224, 224]
```
How would I print each image in the batchsize using plt.imshow, as well as show the label? (Note: This is the CatDogDataset)
|
Using Dataloader to display an image
|
CC BY-SA 4.0
| null |
2022-07-25T01:50:59.297
|
2022-07-25T08:59:16.130
|
2022-07-25T01:53:39.687
|
138392
|
138392
|
[
"pytorch"
] |
```
import matplotlib.pyplot as plt

train_loader = torch.utils.data.DataLoader(train_dataset, 32, shuffle=True)

# The loader yields a batch as (images, labels); unpack it instead of enumerating.
images, labels = next(iter(train_loader))
for img, label in zip(images, labels):
    plt.imshow(img.permute(1, 2, 0))   # CHW -> HWC for matplotlib
    plt.title(f"Label: {label.item()}")
    plt.show()
```
Reference
[https://pytorch.org/tutorials/beginner/basics/data_tutorial.html](https://pytorch.org/tutorials/beginner/basics/data_tutorial.html)
|
What should the output sizing be for a class that returns multiple image arrays for a dataloader
|
You are almost there. You just need to perform the batch inference correctly. So, while model inference you need to convert the list of images to a single tensor, as follows -
```
for i, data in enumerate(dataloader_all, 0):
    inputs = torch.stack(data['image'])
    outputs = model_vgg16(inputs)
```
Your code might be creating a tensor of list of images, while the model expects tensor of tensors (tensor of image tensors). You need to give a tensor of shape (num_images,3,244,244) as an input to your model. Where num_images is batch-size (4 or more in your case). You should go ahead and try to print and see the tensor shapes.
Also, just make sure your dataloader is converting your images to tensor. I think you are already doing this because you mentioned you already made single image inference.
|
112959
|
1
|
112967
| null |
0
|
31
|
I have seen images of LSTM and RNN units online, where they "unravel" the unit.
[](https://i.stack.imgur.com/lEHa6.png)
- Is this only one, singular, unit?
- If you have multiple units in a cell (layer), are both the cell state and hidden state carried through to the next unit? (or are they recycled in each unit)
- By $h_t$ and $h_{t-1}$, I assume that all memories are stored in an array? (Or is it one vector?)
- I read in an article that the length of the cell state and hidden state is equal to the number of units in a cell (layer). If this is true, does each unit output multiple predictions on the same thing or on different things?
---
Image #2 (response to an answer)
[](https://i.stack.imgur.com/Mkt6c.jpg)
|
does "unravelling" lstm units still mean one unit
|
CC BY-SA 4.0
| null |
2022-07-26T13:11:17.213
|
2022-07-27T06:26:46.417
|
2022-07-27T06:26:46.417
|
138534
|
138534
|
[
"machine-learning",
"neural-network",
"lstm"
] |
- The "unraveling" you are referring to is just to illustrate how the different time steps of the input are received and processed. It doesn't have anything to do with the number of units. The "number of units" actually refers to the dimensionality of the input vector and the hidden state.
- The output and hidden state are passed to the computation of the next time step.
- $h_t$ and $h_{t-1}$ and vectors that have been computed at different time steps. Depending on how you configure of the LSTM, you may get all $h_i$'s (e.g. to apply attention over them) or just the last one (e.g. to perform classification).
- As I mentioned in (1), the "number of units" actually refers to the dimensionality of the input vector and the hidden state so, what you read is true. The prediction at reach time step is a vector of real numbers.
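Here is a minimal PyTorch sketch (sizes chosen arbitrarily) illustrating these points: `hidden_size` is the dimensionality of the hidden/cell state, one hidden vector is produced per time step, and $h$ and $c$ are carried across time steps for you.
```
import torch
import torch.nn as nn

batch, seq_len, input_dim, hidden_dim = 2, 5, 8, 16
lstm = nn.LSTM(input_size=input_dim, hidden_size=hidden_dim, batch_first=True)

x = torch.randn(batch, seq_len, input_dim)   # a batch of input sequences
outputs, (h_n, c_n) = lstm(x)

print(outputs.shape)  # torch.Size([2, 5, 16]) - one hidden vector per time step
print(h_n.shape)      # torch.Size([1, 2, 16]) - hidden state after the last step
print(c_n.shape)      # torch.Size([1, 2, 16]) - cell state after the last step
```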
|
The model of LSTM with more than one unit
|
The [last image](https://i.stack.imgur.com/rnFCb.jpg) that you draw, where (at a given timestep) each cell of the second layer receives input from all cells in the first layer, is the right one.
You can think of recurrent layers as fully connected layers which receive sequences of input and apply their transformations at each timestep. They just receive other inputs depending on their previous inputs, but the connectivity between layers is exactly the same as in a fully connected layer: each unit of a given layer performs a weighted average of the activations of all the units in the previous layer (or of the inputs if it's the first layer).
|
112969
|
1
|
112971
| null |
1
|
47
|
Sometimes you read text and you have a strong feeling that it was translated from a certain language.
For example, you read Russian text, see «взять автобус» («take bus» instead of Russian «сесть в автобус» (literally «sit on bus»)), and it becomes obvious that the text was originally written in English and then translated by low-qualified translator.
Provided you have a long text, can you automatically detect if it is translation or is it originally written in this language, and can you detect the source language? Are there any ready solutions?
|
Can you detect source language of a translation?
|
CC BY-SA 4.0
| null |
2022-07-26T15:54:22.000
|
2022-07-26T16:09:04.110
| null | null |
138546
|
[
"nlp"
] |
In the machine translation research community, the translated text that exhibits some traits from the original language is called "translationese".
There are multiple lines of research that try to spot translationese (i.e. tell apart text that has been translated, either by human or machine, from text written directly). [Here](https://aclanthology.org/search/?q=translationese) you can see academic articles related to the matter.
However, I have not been able to find research that studies the feasibility of identifying the original source language of the translation, let alone ready-made solutions.
|
Are CNNs indeed translation invariant?
|
Short answer - No, CNNs are not really translation invariant.
I specifically mean the style of image classification network in the paper you mentioned (i.e., (input) > (conv layer) > ... > (conv layer) > (fully conn layer) ... > (fully conn layer) > (output)).
This is partly because of the difference between translation invariance and translation equivariance (an important distinction, imo). [See this question and the answers](https://datascience.stackexchange.com/questions/16060/what-is-the-difference-between-equivariant-to-translation-and-invariant-to-tr/16084)
## Why
The last few layers in the network are fully connected (FC). FC layers are definitely not translation invariant (they don't give consideration to the spatial relationship between the inputs). But, the whole network could still be translation invariant if everything before the first FC layer is translation invariant.
We have convolution layers before the FC layers, but convolution layers are translation equivariant, not translation invariant (see below, it'll make sense). Therefore the whole network is not (really) translation invariant.
Adding pooling layers after convolution makes the network invariant to small translation motions.
## Convolution operations are translation equivariant
Translation equivariance of a function means that its output for a translated version of the input is a translated version of the output.
if $$f(x(i)) = y(i)$$
then $$f(x(i-t)) = y(i-t)$$
where $i$ is the spatial index
### Compare to translation invariance
Translation invariance means that the output for a translated version of the input is exactly the same as the output for the original input.
if $$f(x(i)) = y(i)$$
then $$f(x(i-t)) = y(i)$$
where $i$ is the spatial index
|
112973
|
1
|
113021
| null |
0
|
30
|
I have the following problem: I'm trying to fit a deep learning CNN model in Google Colab with a dataset of cats and dogs (it is very popular on Kaggle). I've cleaned the dataset of non-image files with a method, and the model trains well for a few iterations, but at some point the code throws the following error:
[](https://i.stack.imgur.com/k2S4l.png)
And here is a picture where the code works well in the first iterations. How can I solve this? I'm thinking it's a dataset problem but I'm not sure. I am a student of Deep Learning with Tensorflow.
Here it's my colab notebook: [https://colab.research.google.com/drive/1GzLma_-DMHOe1-injn4d_2KiI6kcwasZ?usp=sharing](https://colab.research.google.com/drive/1GzLma_-DMHOe1-injn4d_2KiI6kcwasZ?usp=sharing)
[](https://i.stack.imgur.com/P80pE.png)
|
CNN Deep Learning model fit problem with Tensorflow
|
CC BY-SA 4.0
| null |
2022-07-26T17:11:28.177
|
2022-07-28T04:23:44.993
| null | null |
138551
|
[
"keras",
"tensorflow"
] |
There are 2 things you can do.
- Verify that there are no corrupted images, and all files in the directories are actual images.
If you do not fix this before training you will get errors regarding these issues and training will fail once these files are reached.
Running the following `bash` commands in the base directory will resolve these issues:
```
find /tmp/data/ -size 0 -exec rm {} +
find /tmp/data/ -type f ! -name "*.jpg" -exec rm {} +
```
- You should add a Resizing layer in your model such that you are sure that any image will be passed to the model in the right format.
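A minimal sketch of the Resizing layer mentioned above (assuming a recent TensorFlow/Keras version; the sizes and the rest of the layers are illustrative, not the asker's actual architecture):
```
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Resizing(150, 150),               # every image is resized before the conv layers
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```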
|
Tensorflow-keras Image Classifier error while fitting
|
The error appears to be related to the type of your input data, may be worth checking for it with `type(X)`.
I would suggest loading pickle with pandas
```
import pandas as pd
X = pd.read_pickle(r'filepath')
X = X.astype('uint8')
```
Also for info, in your code above you are using the Keras API which is meant to be a high-level API for TensorFlow.
|
112984
|
1
|
112986
| null |
0
|
23
|
I am solving a Multiple Linear Regression problem and judging the model by looking at R-square and Adjusted R-square metrics. In recent iteration which are yielding desired coefficients directionally with respect to Target, I am getting both R-square and Adjusted R-square as 0.73. Can this be possible or is something not right?
|
Can both R-square and Adjusted R-square be same?
|
CC BY-SA 4.0
| null |
2022-07-27T04:55:18.643
|
2022-07-27T12:09:44.933
|
2022-07-27T12:09:44.933
|
43000
|
117220
|
[
"predictive-modeling",
"statistics",
"linear-regression",
"r-squared"
] |
The R-squared value assumes that every independent variable (IV) in the model contributes to explaining the variance in the dependent variable.
Adjusted R-squared, on the other hand, penalizes the model for the number of predictors, so it effectively credits only those independent variables that actually contribute to explaining the variance of the dependent variable.
You can see the difference between the two values if you build the model in a forward stepwise fashion, adding one IV to the model at a time and increasing the complexity of the model.
R-squared will always increase, but adjusted R-squared may increase or decrease depending on whether the newly added IV actually improves the model.
In your case, getting (almost) the same value for both implies that the adjustment penalty is negligible: your predictors are contributing and your sample size is large relative to the number of predictors, so there is nothing wrong with the result.
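For reference, the usual relationship between the two, with $n$ observations and $p$ predictors, is
$$\bar{R}^2 = 1 - (1 - R^2)\,\frac{n - 1}{n - p - 1}$$
so when $n$ is large relative to $p$ the adjustment is tiny and the two values can coincide after rounding to two decimals.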
|
Why is r squared lowered when adding polynomial features?
|
You've got 25 points, so there is a perfect fitting polynomial of degree 24. That doesn't happen, so something is breaking in the OLS solver, but I'm not sure of what exactly or how to detect that. It's not too surprising though that you may have numerical issues when `p` gets large: you've got an x-value near 0.1 and others past 10; raising them to the 24th power pushes them very far apart, and probably generates many more significant digits than python is keeping around.
I've put together a demonstration:
[https://github.com/bmreiniger/datascience.stackexchange/blob/master/53818.ipynb](https://github.com/bmreiniger/datascience.stackexchange/blob/master/53818.ipynb)
Scaling the x-values helps, though we still don't find something visually matching the perfect polynomial fit.
See also [https://stats.stackexchange.com/questions/350130/why-is-gradient-descent-so-bad-at-optimizing-polynomial-regression](https://stats.stackexchange.com/questions/350130/why-is-gradient-descent-so-bad-at-optimizing-polynomial-regression)
|
113022
|
1
|
113028
| null |
0
|
53
|
I'm training a tree-based model (e.g. xgb). I have some features with more than 90% values constant. Does it add value to the model since the variation in the data is minimal?.
What would be the impact of the same if I were to use a linear regression model?
|
What is the implication of having features with less variation in a tree based model?
|
CC BY-SA 4.0
| null |
2022-07-28T06:45:36.380
|
2022-08-09T05:37:43.203
| null | null |
102713
|
[
"machine-learning",
"linear-regression",
"decision-trees",
"xgboost"
] |
Variation is not the key. Notice that 0/1 indicator variables are used frequently and might be mostly 0's (like many missing-value indicators). The key is where the variation lies in relation to what you are predicting and in relation to interactions with other features.
For example, if your column is 0 where target = 0 and not 0 where target = 1, then the amount of variation does not matter. Adding a new indicator feature flagging whether the original column is 0 or not may be a good approach. You might want to do this anyway.
Also, with trees, columns interact, so if the lack of variation in the column highlights better predictive power in another column, then this is a win. Again, a transformation to an indicator variable may be useful.
The same applies to a linear model, except that with linear models you add the interactions yourself.
Of course, the column may not be useful at all to the model, lack of variation or not.
No way to know unless you try with your data.
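As a concrete illustration of the indicator-variable idea above, here is a minimal sketch (the column name and values are made up):
```
import pandas as pd

df = pd.DataFrame({"x": [0, 0, 0, 3.2, 0, 0, 1.7, 0]})   # mostly-constant column
df["x_is_nonzero"] = (df["x"] != 0).astype(int)          # indicator the tree or linear model can use
print(df)
```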
|
Tree based method are robust against low probability feature space zones when using ML general interpretability methods?
|
I would say trees are "differently" robust in this sense.
A tree model will never predict a target value outside the range of those in the training set; so never a negative value for a count, or more infections than the population, etc. (Some tree-based models might, e.g. gradient boosting, but not a single tree or a random forest.)
But sometimes that's detrimental, too. In your bikes example, maybe city population is another variable; your model will quickly become useless as the city grows, while a linear model may cope with the concept drift better.
Finally, again in your bike example: because the tree has no reason to make rules about winter when temp>12, as @SvanBalen says, it will essentially be making up an answer if you ask it about a hot winter. In your tree's case, hot winters are treated as summers; another tree might split first on season, never considering temperature in the winter branch, so that this alternative tree will treat hot winters as winters.
It seems better to try to track the independent variables' concept drift and interdependencies to recognize when the model hasn't seen enough useful training data to make accurate predictions.
|
113060
|
1
|
113068
| null |
1
|
102
|
I'm a little confused between the following terminology: pretrained, finetune and feature extract. I would like to use an out-of-the-box model to train a covid dataset. If I were to use resnet, would I be pretraining it? In what situation would I be finetuning the model or feature extracting? Since the model is being pretrained, would it be wise to use the same weights being trained on the ImageNet?
|
Difference between pretrained, finetune, feature extract
|
CC BY-SA 4.0
| null |
2022-07-29T02:03:45.343
|
2022-07-31T07:44:16.333
| null | null |
138392
|
[
"pytorch"
] |
Although "fine-tuning" may sound like an improvement of an existing model, it is not an improvement of the model itself; rather, it is a transfer-learning process that adapts a pre-trained model to new data.
Simply equating fine-tuning with transfer learning can indeed lead to confusion: fine-tuning is the adaptation step performed as part of a transfer-learning workflow.
Consequently, you fine-tune a pre-trained model in order to learn efficiently on new data, taking advantage of what the model has already learned and of its more general extracted features.
Without a pre-trained starting point, a model trained from scratch cannot differentiate the data as easily, and its feature extraction is poor.
[https://d2l.ai/chapter_computer-vision/fine-tuning.html](https://d2l.ai/chapter_computer-vision/fine-tuning.html)
If you train a pre-trained model on a new dataset, the new dataset should have some similarities with the original dataset from the first training. For instance, if you want to train a pre-trained model on an unknown animal (e.g. pangolins), it should work well if the model already knows many other animals. But if you train it on completely new data (e.g. 3D medical scans) without any similarity to the already known data, it may be able to recognize that the images are 3D medical scans, but it may not differentiate the different types of medical scans very well.
Consequently, in the case of completely new data like 3D medical scans, it would be better to train the model from scratch, unless the model has already learned from 3D medical scan images.
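To make the distinction concrete, here is a minimal PyTorch sketch (assuming a recent torchvision; `num_classes = 2` is an assumption for a binary covid/no-covid task) contrasting feature extraction with fine-tuning:
```
import torch.nn as nn
from torchvision import models

num_classes = 2
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)  # pre-trained on ImageNet

# Feature extraction: freeze the pre-trained backbone and train only the new head.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, num_classes)  # new head, trainable by default

# Fine-tuning instead: keep (some or all) backbone parameters trainable,
# usually with a smaller learning rate than the new head.
# for param in model.parameters():
#     param.requires_grad = True
```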
|
What is the difference between feature generation and feature extraction?
|
Feature Generation -- This is the process of taking raw, unstructured data and defining features (i.e. variables) for potential use in your statistical analysis. For instance, in the case of text mining you may begin with a raw log of thousands of text messages (e.g. SMS, email, social network messages, etc) and generate features by removing low-value words (i.e. stopwords), using certain size blocks of words (i.e. n-grams) or applying other rules.
Feature Extraction -- After generating features, it is often necessary to test transformations of the original features and select a subset of this pool of potential original and derived features for use in your model (i.e. feature extraction and selection). Testing derived values is a common step because the data may contain important information which has a non-linear pattern or relationship with your outcome, thus the importance of the data element may only be apparent in its transformed state (e.g. higher order derivatives). Using too many features can result in multicollinearity or otherwise confound statistical models, whereas extracting the minimum number of features to suit the purpose of your analysis follows the principle of parsimony.
Enhancing your feature space in this way is often a necessary step in classification of images or other data objects because the raw feature space is typically filled with an overwhelming amount of unstructured and irrelevant data that comprises what's often referred to as "noise" in the paradigm of a "signal" and "noise" (which is to say that some data has predictive value and other data does not). By enhancing the feature space you can better identify the important data which has predictive or other value in your analysis (i.e. the "signal") while removing confounding information (i.e. "noise").
|
113070
|
1
|
113075
| null |
0
|
49
|
I have a basic doubt. Kindly clarify this.
My doubt is: when we are using LSTMs, we pass the words sequentially and get some hidden representations.
Now transformers also do the same thing, except non-sequentially. But I have seen that the output of BERT-based models can be used as word embeddings.
Why can't we use the output of an LSTM as word embeddings too? I can find sentence similarity and so on with an LSTM as well, right?
For example, if I have the sentence "is it very hot out there",
I will apply word2vec to get dense representations and pass them to my LSTM model. Can the output of my LSTM also be used as word embeddings, as we do with BERT?
My understanding was that an LSTM is used to identify dependencies between words, and then those learned weights are used to perform classification or similar tasks.
|
Transformers vs RNN basic doubt
|
CC BY-SA 4.0
| null |
2022-07-29T09:44:23.213
|
2022-07-29T10:26:50.817
| null | null |
96653
|
[
"machine-learning",
"nlp",
"lstm",
"bert",
"transformer"
] |
There are multiple concepts mixed in your question.
- Contextual vs. non-contextual word embeddings: word2vec is a non-contextual approach to obtaining token embeddings. This means that a specific word has the same vector representation regardless of the other words in the sentence it appears. BERT, on the other hand, can be used to obtain contextual representations, because the representations of a token depend directly on the other words in the sentence.
- Contextual word embeddings with LSTMs. You can obtain contextual word embeddings with LSTMs. Actually, BERT has 2 predecessors that are just that. These models are ULMFit and ELMo. Both are bidirectional LSTMs. The fact that they are bidirectional is important here, otherwise, the representations would only be contextual for the words to the right of each word.
- Using BERT or LSTMs for classification and other tasks. Both BERT and LSTMs are suitable to perform text classification.
In the case of BERT, the sentence-level representation is obtained by prefixing the sentence with the special token [CLS] and taking the representations obtained at that position as sentence representation (this is trained with the next-sentence prediction task in the BERT training procedure).
In the case of LSTMs, the sentence-level representation is usually obtained either as the last output of a unidirectional LSTM or by global pooling over all the representations of a bidirectional LSTM.
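A minimal sketch (the model name and sentence are just examples) of taking the [CLS] position from BERT as a sentence-level representation:
```
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("is it very hot out there", return_tensors="pt")
outputs = model(**inputs)

cls_embedding = outputs.last_hidden_state[:, 0, :]  # vector at the [CLS] token
print(cls_embedding.shape)                          # torch.Size([1, 768])
```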
|
RNN basic doubt
|
No, they will not have the same final output.
Although the weights of the RNN are the same for each time step and the words are the same, their order is not and therefore the inputs and hidden states received at each time step will be different, and so will their outputs.
You said it yourself: `The next word will be based on the current and previous processed words.` . The next and previous words for each time step are not the same in two sentences with the same words but in different order.
|
113086
|
1
|
113104
| null |
3
|
412
|
I am building a binary classification model using GB Classifier for imbalanced data with an event rate of 0.11% and a sample size of 350,000 records (split into 70% training & 30% testing).
I have successfully tuned hyperparameters using GridsearchCV, and have confirmed my final model for evaluation.
Results are:
Train data:
```
Confusion matrix:
[[244741      2]
 [   234     23]]

              precision    recall  f1-score   support

           0       1.00      1.00      1.00    244743
           1       0.92      0.09      0.16       257

    accuracy                           1.00    245000
   macro avg       0.96      0.54      0.58    245000
weighted avg       1.00      1.00      1.00    245000
```
Test data:
```
Confusion matrix:
[[104873      4]
 [   121      2]]

              precision    recall  f1-score   support

           0       1.00      1.00      1.00    104877
           1       0.33      0.02      0.03       123

    accuracy                           1.00    105000
   macro avg       0.67      0.51      0.52    105000
weighted avg       1.00      1.00      1.00    105000
```
AUC for both class 1 & 0 is 0.96
I am not sure if this is a good model I can use for predicting the probability of occurrence.
Please guide.
[](https://i.stack.imgur.com/ujytX.png)
[](https://i.stack.imgur.com/rwtDi.png)
[](https://i.stack.imgur.com/4QOO0.png)
|
Am I building a good or bad model for prediction built using Gradient Boosting Classifier Algorithm?
|
CC BY-SA 4.0
|
0
|
2022-07-29T14:09:05.730
|
2022-07-30T07:35:12.773
|
2022-07-29T17:52:01.640
|
138661
|
138661
|
[
"python",
"classification",
"gradient-boosting-decision-trees"
] |
"Unbalanced" data are not a problem, unless you use unsuitable error measures... like [accuracy](https://stats.stackexchange.com/q/312780/1352), or precision, recall and the F1 (or any other Fbeta) score, all of which suffer from exactly the same problems as accuracy. Instead, work directly with probabilistic predictions, and assess the probabilistic predictions directly using proper scoring rules.
Do not use thresholds in evaluating your statistical model. [The choice of one or more (!) thresholds is an aspect of the decision, together with your probabilistic classification. It is not part of the statistical model.](https://stats.stackexchange.com/a/312124/1352)
We have many, many, many threads on unbalanced data at CrossValidated, and we are at a bit of a loss what to do with these, because the data science community apparently sees a problem here [that completely disappears once you move away from intuitive but misleading evaluation measures](https://stats.stackexchange.com/q/357466/1352). We have [a Meta.CV thread](https://stats.meta.stackexchange.com/q/6349/1352) dedicated to this, with a number of links to other CV threads.
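As a minimal sketch of evaluating the probabilistic predictions directly (the labels and probabilities below are made up), two common proper scoring rules are available in scikit-learn:
```
from sklearn.metrics import brier_score_loss, log_loss

y_true = [0, 0, 1, 0, 1, 0, 0, 0]
y_prob = [0.02, 0.10, 0.85, 0.05, 0.40, 0.01, 0.20, 0.03]  # predicted P(class 1)

print("Brier score:", brier_score_loss(y_true, y_prob))  # quadratic proper scoring rule, lower is better
print("Log loss:", log_loss(y_true, y_prob))              # logarithmic proper scoring rule, lower is better
```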
|
Gradient boosting algorithm example
|
I tried to construct the following simple example (mostly for my self-understanding) which I hope could be useful for you. If someone else notices any mistake please let me know. This is somehow based on the following nice explanation of gradient boosting [http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/](http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/)
The example aims to predict salary per month (in dollars) based on whether or not the observation has own house, own car and own family/children. Suppose we have a dataset of three observations where the first variable is 'have own house', the second is 'have own car' and the third variable is 'have family/children', and target is 'salary per month'. The observations are
1.- (Yes,Yes, Yes, 10000)
2.-(No, No, No, 25)
3.-(Yes,No,No,5000)
Choose a number $M$ of boosting stages, say $M=1$. The first step of gradient boosting algorithm is to start with an initial model $F_{0}$. This model is a constant defined by $\mathrm{arg min}_{\gamma}\sum_{i=1}^3L(y_{i},\gamma)$ in our case, where $L$ is the loss function. Suppose that we are working with the usual loss function $L(y_{i},\gamma)=\frac{1}{2}(y_{i}-\gamma)^{2}$. When this is the case, this constant is equal to the mean of the outputs $y_{i}$, so in our case $\frac{10000+25+5000}{3}=5008.3$. So our initial model is $F_{0}(x)=5008.3$ (which maps every observation $x$ (e.g. (No,Yes,No)) to 5008.3.
Next we should create a new dataset, which is the previous dataset but instead of $y_{i}$ we take the residuals $r_{i0}=-\frac{\partial{L(y_{i},F_{0}(x_{i}))}}{\partial{F_{0}(x_{i})}}$. In our case, we have $r_{i0}=y_{i}-F_{0}(x_{i})=y_{i}-5008.3$. So our dataset becomes
1.- (Yes,Yes, Yes, 4991.6)
2.-(No, No, No, -4983.3)
3.-(Yes,No,No,-8.3)
The next step is to fit a base learner $h$ to this new dataset. Usually the base learner is a decision tree, so we use this.
Now assume that we constructed the following decision tree $h$. I constructed this tree using entropy and information gain formulas but probably I made some mistake, however for our purposes we can assume it's correct. For a more detailed example, please check
[https://www.saedsayad.com/decision_tree.htm](https://www.saedsayad.com/decision_tree.htm)
The constructed tree is:
[](https://i.stack.imgur.com/yRjle.png)
Let's call this decision tree $h_{0}$. The next step is to find a constant $\lambda_{0}=\mathrm{arg\;min}_{\lambda}\sum_{i=1}^{3}L(y_{i},F_{0}(x_{i})+\lambda{h_{0}(x_{i})})$. Therefore, we want a constant $\lambda$ minimizing
$C=\frac{1}{2}(10000-(5008.3+\lambda*{4991.6}))^{2}+\frac{1}{2}(25-(5008.3+\lambda(-4983.3)))^{2}+\frac{1}{2}(5000-(5008.3+\lambda(-8.3)))^{2}$.
This is where gradient descent comes in handy.
Suppose that we start at $P_{0}=0$. Choose the learning rate equal to $\eta=0.01$. We have
$\frac{\partial{C}}{\partial{\lambda}}=(10000-(5008.3+\lambda*4991.6))(-4991.6)+(25-(5008.3+\lambda(-4983.3)))*4983.3+(5000-(5008.3+\lambda(-8.3)))*8.3$.
Then our next value $P_{1}$ is given by $P_{1}=0-\eta{\frac{\partial{C}}{\partial{\lambda}}(0)}=0-.01(-4991.6*4991.7+4983.4*(-4983.3)+(-8.3)*8.3)$.
Repeat this step $N$ times, and suppose that the last value is $P_{N}$. If $N$ is sufficiently large and $\eta$ is sufficiently small then $\lambda:=P_{N}$ should be the value where $\sum_{i=1}^{3}L(y_{i},F_{0}(x_{i})+\lambda{h_{0}(x_{i})})$ is minimized. If this is the case, then our $\lambda_{0}$ will be equal to $P_{N}$. Just for the sake of it, suppose that $P_{N}=0.5$ (so that $\sum_{i=1}^{3}L(y_{i},F_{0}(x_{i})+\lambda{h_{0}(x_{i})})$ is minimized at $\lambda:=0.5$). Therefore, $\lambda_{0}=0.5$.
The next step is to update our initial model $F_{0}$ by $F_{1}(x):=F_{0}(x)+\lambda_{0}h_{0}(x)$. Since our number of boosting stages is just one, then this is our final model $F_{1}$.
Now suppose that I want to predict a new observation $x=$(Yes,Yes,No) (so this person does have own house and own car but no children). What is the salary per month of this person? We just compute $F_{1}(x)=F_{0}(x)+\lambda_{0}h_{0}(x)=5008.3+0.5*4991.6=7504.1$. So this person earns $7504.1 per month according to our model.
|
113126
|
1
|
113154
| null |
2
|
344
|
I have a ML problem where I want to divide the prediction task into subproblems (where I believe specialized models will do better). All these predictions tasks operate independently and will use the same input data - but will have different estimators/targets.
For example:
- single dataset (A)
- shared transformations A -> B
- estimator #1: random forests with target Y1
- estimator #2: GBM classifier with target Y2
- estimator #3: logistic regression with target Y2
- the predictions of each of these models will be output as a tuple (#1, #2, #3)
I'm looking for a simple (or best practice way) to define the above pipeline and train it and be able to use it for prediction. I have looked at sklearn Pipeline but best I can tell you can't use that to have multiple estimators for training/predictions (would love to learn I'm wrong on this).
My fallback option is to build a class that supports `fit` and `predict_proba` but under the hood just calls these models sequentially (training in sequence & generating predictions in sequence before returning the tuple of results).
Is there a better way to go about this problem?
|
How to build single pipeline with multiple estimators supporting fit and predict?
|
CC BY-SA 4.0
| null |
2022-07-31T15:15:02.603
|
2022-08-01T16:34:53.277
| null | null |
135531
|
[
"machine-learning",
"scikit-learn"
] |
Scikit-learn pipelines are designed to chain preprocessing and modelling steps into a single estimator; they are not designed to train several estimators with different targets side by side.
Your problem is better handled in plain Python logic. Something like:
```
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
pipe_1 = make_pipeline(StandardScaler(), RandomForestClassifier())
pipe_2 = make_pipeline(StandardScaler(), GradientBoostingClassifier())
pipe_3 = make_pipeline(StandardScaler(), LogisticRegression())
pipe_1.fit(X, y1)
pipe_2.fit(X, y2)
pipe_3.fit(X, y2)
predictions = (pipe_1.predict(X), pipe_2.predict(X), pipe_3.predict(X))
```
|
Combining sklearn pipelines with different output shape
|
`sklearn` doesn't yet really provide a good way to remove rows in pipelines. [SLEP001 proposes it](https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep001/proposal.html#examples-of-usecases-targetted). `imblearn` has some ways to make this work, but it's semantically specific to resampling data. If you don't need to modify the target (if you'll only use this transformer on `X`, and not in a pipeline with a supervised model), you can make this work. One more caveat: you probably won't want to throw away outliers in production, so consider how you'll rework this transformer after training.
The point is that you should wait to remove the rows with `OUTLIER` entries until after you've joined the datetime features back on. (One alternative is to try to pass the information about which rows were removed to the datetime processor, but that would then require a custom alternative to `FunctionUnion` or `ColumnTransformer`.) Unfortunately, despite all of your custom transformers returning dataframes, the ways to recombine them (`ColumnTransformer` and `FeatureUnion`) won't preserve that yet (but see [pandas-out PR](https://github.com/scikit-learn/scikit-learn/pull/23734) and some linked issues/PRs). Until that's remedied, your best bet might be to modify your transformers to accept an `__init__` parameter `columns` on which to operate, removing the `FeatureSelector` step.
```
outlier_prune = Pipeline([
('iqr_filter', IQRFilter(columns=num_cols)),
('remove_outliers', RemoveIQROutliers()),
]) # important: the output of this is a frame
numerical_pipeline = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('std_scaler', StandardScaler())
])
preproc_pipeline = ColumnTransformer([
('numerical_pipeline', numerical_pipeline, num_cols),
('date_eng', ExtractDay(), date_cols),
])
full_pipeline = Pipeline([
('outliers', outlier_prune),
('preproc', preproc_pipeline),
])
```
[](https://i.stack.imgur.com/ml8mX.png)
|
113134
|
1
|
113142
| null |
1
|
80
|
Labeling images for semantic segmentation can be expensive. Is it viable to train a model (such as Unet) to a good accuracy and then use this model to label more images to be used as further training data for the same model? Would this cause overfitting?
|
Can you use a trained image segmentation model to label more training data for itself?
|
CC BY-SA 4.0
| null |
2022-07-31T22:41:36.150
|
2022-08-01T11:33:43.133
|
2022-07-31T22:42:03.593
|
138728
|
138728
|
[
"deep-learning",
"overfitting",
"image-segmentation"
] |
I assume you're thinking of only using images where you are confident the model has segmented them correctly? I don't think this would cause overfitting - at least what we normally think of as overfitting. However, you could end up training the model to do even better on images where it already does well, at the expense of worse results where it is not doing so well (which I guess you could think of as a type of overfitting).
There is a technique called active learning that does something similar to this, though. Here you use the original model to identify images that would help improve the model the most, if they were labelled and added to the training set. These are then labelled by your domain experts and the model retrained. Obviously you can repeat this if need be until you stop seeing any improvement. See these blogs on active learning for more details: [Active learning machine learning: What it is and how it works by DataRobot](https://www.datarobot.com/blog/active-learning-machine-learning/) and [Active Learning: Curious AI Algorithms on DataCamp](https://www.datacamp.com/tutorial/active-learning)
While I was writing this answer I found this article: [Active Learning in Machine Learning Explained by Vatsal on Towards Data Science](https://towardsdatascience.com/active-learning-in-machine-learning-explained-777c42bd52fa) that suggests combining your approach and active learning.
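For illustration, a minimal sketch of one active-learning step using uncertainty (entropy) sampling; `model` is assumed to be any fitted classifier exposing `predict_proba`, and `X_pool` is the unlabelled pool:
```
import numpy as np

def most_uncertain(model, X_pool, k=10):
    """Return the indices of the k pool samples the model is least certain about."""
    proba = model.predict_proba(X_pool)                       # shape: (n_samples, n_classes)
    entropy = -np.sum(proba * np.log(proba + 1e-12), axis=1)  # higher entropy = more uncertain
    return np.argsort(entropy)[-k:]                           # label these next, then retrain
```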
|
training when Multiple labels per image
|
Assuming you want to classify the images (and not use bounding boxes to locate classes within each image), a common way it to create a target vector for each image, which holds the information regarding all classes and is what the model would eventually predict.
If you have a dataset with, say 5 classes, and your first example image contains classes 1 and 4, you would create your target vector for that image to be:
```
example_sample = ... # your image array
example_sample_y = [1, 0, 0, 1, 0]
```
This is a kind of [one-hot encoding](https://hackernoon.com/what-is-one-hot-encoding-why-and-when-do-you-have-to-use-it-e3c6186d008f), as the vector has a placeholder for each of the 5 classes, but only a `1` when the class is present.
Have a look at this [high-level walkthrough](https://towardsdatascience.com/multi-label-image-classification-with-inception-net-cbb2ee538e30).
---
I think your other suggestion (training on the same image several times, once for each label it contains) is not a good idea.
You want to learn some kind of joint probability between the classes, and in my opinion, training on the same image with different outcomes (e.g. the sample image above twice, producing either a 1 or a 4) will not only be very inefficient during training, but will also be mathematically confusing. The same input can give 2 possible outputs! This implies your underlying function that maps images to classes is not [well-defined](http://mathworld.wolfram.com/Well-Defined.html). That isn't usually a good thing!
|
113147
|
1
|
113153
| null |
2
|
525
|
I would like to know from the data science community here for suggestions on nlp courses.
I am new to NLP area and would like to take up a course which covers from basic to advanced concepts such as tokenization to embeddings, GPT-3, transformers etc
My aim is to become a Applied NLP expert (and I don't intend to invent any new algos). So, basically am trying to find a course where they can teach us existing algos, recent advancements, variety of use-cases etc in NLP
Is there any courses that you would recommend?
|
Suggestions for guided NLP online courses - Beginner 101
|
CC BY-SA 4.0
| null |
2022-08-01T14:43:18.983
|
2022-08-01T17:56:09.937
| null | null |
64876
|
[
"machine-learning",
"deep-learning",
"nlp",
"text-mining",
"text"
] |
I would recommend two courses which focus on a code-first approach and which will help you understand the concepts by getting your hands dirty. Both of these courses contain code and video resources.
- Fast.ai NLP
- Hugging Face NLP
Happy Learning :)
|
Please let me know if I am on the right track to being an NLP Expert
|
You are definitely doing a great job of getting your basics down. I really like Patrick Winston's [AI Course](https://www.youtube.com/watch?v=TjZBTDzGeGg), he does a great job of conceptualizing the math behind these problems, which is the only place I think Ng lacks.
Find a ton of papers you think are interesting, and read them top to bottom. Here is one from spotify on [NLP](http://benanne.github.io/2014/08/05/spotify-cnns.html)(super awesome)
Most importantly, IMO, the thing you need to start doing is applying the stuff you learn, to problems you think are interesting. Do a few run throughs of other stuff on github and then start doing your own!
Good luck, hope that was helpful:)
|
113165
|
1
|
113429
| null |
0
|
51
|
There is now tons of material available on how to do certain (most popular) ML tasks and what kind of output you can expect.
However, I have found that resources on how to select an appropriate ML task/approach for a specific problem are very coarse and scarce. I can't find anything better than "use rnn/lstm for time series prediction" or "k-means for classification".
Are there publications/Internet resources available that are dedicated purely to teaching how to
- define you problem in a way that would suit specific ML approach
- select best ML model within the approach?
|
Where to learn which ML task is most appropriate for a problem?
|
CC BY-SA 4.0
| null |
2022-08-01T21:46:47.473
|
2022-08-11T14:07:10.307
| null | null |
27142
|
[
"machine-learning",
"model-selection"
] |
This should help you. I have used it many times. It's very straightforward.
[https://medium.com/analytics-vidhya/which-machine-learning-algorithm-should-you-use-by-problem-type-a53967326566](https://medium.com/analytics-vidhya/which-machine-learning-algorithm-should-you-use-by-problem-type-a53967326566)
[](https://i.stack.imgur.com/rhOzC.png)
|
Process of solving a problem using ML
|
Going to your first question,
>
What does "cleaning" the data consists of? How do you know if your
data needs cleaning anyway?
Cleaning refers to the various processes to transform the data so that we can utilize it to the fullest. Removing unwanted features, incomplete entries, NaN or null values ( if the dataset is numeric ) consist of "cleaning" the data. This process is important because directly feeding the unclean data to the model may result in its stunted performance or runtime errors.
Once you have a large dataset, you need to transform it according to the problem which you are solving. If you are training a model to classify movie reviews as positive or negative then you can easily remove columns like "user_id", "category" etc. as these do not contribute to the polarity of the review.
>
What features of the data determines which algorithm I should use? Or
is it mostly trial and error?
Well, the algorithm you choose will mostly depend on your problem. Decision trees are good for smaller datasets and Deep Neural Networks ( DNN ) would be used to complex classification and regression problems.
Text classifier systems use embedding layers, TF-IDF vectorization, n-grams model. We basically choose a model on these factors :
- Size of the dataset.
- The complexity of the problem.
- Computational resources ( in some cases ).
We can always play around with the hyperparameters and also modify the model so that it better fulfils our need.
>
Are there any additional things that I should be doing other than
trying different algorithms and testing how well they fit the data?
We choose a model based on the problem. CNNs have been prevalent in image-related problems. Word embeddings are useful in text classification. LSTMs are used in time-series-related problems.
Tip: You can try to implement various algorithms from scratch ( without using `scikit-learn` or ML frameworks ). This helps you in developing an intuition regarding how the model learns from the data and makes predictions.
|
113183
|
1
|
113196
| null |
2
|
157
|
I am building a project for my bachelor thesis and am wondering how to prepare my raw data. The goal is to program some kind of semantic search for job postings. My data set consists of stored web pages in HTML format, each containing the detail page of a job posting. Via an interface I want to fill in predefined fields like skills, highest qualification, etc. with comma-separated sentences or words. These are then embedded via a Hugging Face Transformer and afterwards the similarity of the input is to be compared with the already embedded job postings and the "best match" is returned.
I have already found that intensive preprocessing such as stop word removal and lemmatization is not necessarily required for transformers. However, the data should be processed to resemble the data on which the pre-trained transformers learned. What would be the best way to prepare such a data set to fine-tune pre-trained Hugging Face Transformers?
Additional info: 55,000 of the saved web pages contain an annotation scheme via which I could simply extract the respective sections "Skills" etc. from the HTML text. If that is not sufficient, I can use prodigy to further annotate the data, e.g. by span labeling texts within the text of the job postings.
Thank you very much in advance!
|
What Preprocessing is Needed for Semantic Search Using Pre-trained Hugging Face Transformers?
|
CC BY-SA 4.0
| null |
2022-08-02T12:08:18.307
|
2022-08-02T18:40:51.003
| null | null |
138783
|
[
"nlp",
"dataset",
"preprocessing",
"transformer",
"huggingface"
] |
Resumes are quite different from classic text because there are many proper nouns (names, companies, places, etc.) and other data difficult to classify (phone numbers, marks, age, etc.).
That's why you can use lighter versions like DistilBert to train your data on resumes and get good results.
Therefore, you should first separate every paragraph and label them to classify resumes correctly.
You can also use pre-trained models like [this one](https://huggingface.co/manishiitg/distilbert-resume-parts-classify?text=doctorate) and fine-tune them with your data.
However, this is not a semantic search yet. After classifying resumes content correctly, you can use a [semantic transformer](https://www.sbert.net/docs/pretrained_models.html) to look for field similarity among the same resumes category.
Note: the computing power might be very high if you have thousands of CVs to compare with, even if you detect the search category and process the comparisons in one category only.
|
HuggingFace hate detection model
|
Check this article:
[https://medium.com/geekculture/simple-chatbot-using-bert-and-pytorch-part-1-2735643e0baa](https://medium.com/geekculture/simple-chatbot-using-bert-and-pytorch-part-1-2735643e0baa)
Model training with explanation is given
|
113192
|
1
|
113217
| null |
1
|
28
|
I have three variables measured at a sensor: Temperature (T), Humidity (H), and Methane Concentration (PPM). There are physical reasons why changes in T and H will influence PPM. I am interested in removing changes caused to PPM by changes in T and H. What I would like to see is an expected value of PPM with effects of delta T and H removed. Below is a plot over several days of measured values of T, H, and PPM. Additionally, this is one of many sensors. I need a way of generating a model for each individual sensor as this correlation is specific to component tolerances.
[](https://i.stack.imgur.com/HcgV0.png)
I'm looking for direction on where to start with this. What algorithm would you use? What's the simplest solution to try and get an expected PPM reading that minimizes the effects of delta T and H?
|
Remove Noise Caused by Other Variables to Predict an Expected Value
|
CC BY-SA 4.0
| null |
2022-08-02T16:21:17.947
|
2022-08-03T16:56:21.813
| null | null |
138790
|
[
"predictive-modeling",
"correlation"
] |
Here are 2 interesting algorithms:
- Multivariate LSTM. LSTM cells are great at finding patterns with cycles of roughly 50 to 300 timesteps. Be aware that they are quite sensitive to noise, so apply smoothing algorithms and check whether the predictions improve.
- Random Forest. Even though it doesn't have a long memory like LSTM cells, Random Forest is great at finding correlations between signals. In some cases Random Forest gives even better results than LSTMs.
In addition to that, you seem to have very precise data, and that precision might not be necessary for prediction tasks. In fact, overly precise data can reduce prediction accuracy, because the algorithms have to memorize more data and hence can make more prediction errors.
Consequently, I recommend reducing the sampling rate as much as possible without altering the overall data quality. Such a simplification should remain "humanly understandable" (i.e. not too precise and not too simplified). You can do this by replacing every 10 records with 1 record holding their mean value, as sketched below. Otherwise, you could get poor predictions and unnecessarily heavy computation.
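A minimal sketch of that downsampling step (the file and column names are assumptions; it also assumes a default integer index):
```
import pandas as pd

df = pd.read_csv("sensor_readings.csv")                                 # columns assumed: T, H, PPM
df_downsampled = df[["T", "H", "PPM"]].groupby(df.index // 10).mean()   # 10 raw records -> 1 mean record
```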
|
regression with noisy target vairable
|
It depends how much noise:
- If it's only a little noise, say for instance 2% of the target values are off by a small value, then you can safely ignore it since the regression method will rely on the most frequent patterns anyway.
- If it's a lot of noise, like 50% of the target values are totally random, then unless you can detect and remove the noisy instances you can forget it: the dataset is useless.
In general ML algorithms are based on statistical principles, to some extent their job is to avoid the noise and focus on the regular patterns. But there are two things to pay attention to:
- Is the noise truly random, or does it introduce some biases in the data? The latter is a much more serious issue.
- Noisy data is even more likely to cause overfitting, so extra precaution should be taken against it: depending on the data, it might be necessary to reduce the number of features and/or the complexity of the model.
|
113199
|
1
|
113202
| null |
0
|
94
|
I was reading [this article](https://www.analyticsvidhya.com/blog/2021/03/basic-ensemble-technique-in-machine-learning/) talking about ensemble models. I was interested in the max voting model using 3 base learners. However, I am a little confused about the process. Currently, I'm thinking it goes like this: I have a training and testing sets. All three models are trained on the training set individually and finally at the end I combine the 3 models and do max voting on the testing set and see the results. Instead, should the original training set be divided such that each base learner does not see the same training data?
|
If you are making a ensemble model does training data on base models have to be different from one another
|
CC BY-SA 4.0
| null |
2022-08-02T19:08:34.777
|
2022-08-06T12:39:27.523
|
2022-08-02T20:41:14.947
|
43000
|
138799
|
[
"machine-learning",
"dataset",
"ensemble-modeling"
] |
When ensembling, you need some method of introducing diversity into your models (otherwise all your models will make the same prediction, so ensembling them won't improve the results). Using different training data for each model is one way of introducing this diversity. A common method is to use bootstrapping or bagging, where you randomly sample (with replacement) from your training data. This is what the random forest algorithm does (although it also randomly selects the features for even more diversity). As pointed out by @desertnaut, you do your initial test/training split first, then form your ensemble training sets using only the training data.
However, there are several other ways to introduce diversity into your models (a short sketch of the different-classifiers option follows this list):
- Boosting - where the models are trained in sequence. Each model re-weights the training samples, increasing the weight of samples the previous model classified incorrectly and decreasing the weight of those previously classified correctly. This is how AdaBoost works.
- Use different classifiers - e.g. if you want 3 learners you could ensemble a logistic regression model, an SVM and a neural network.
- Use different architectures or hyper-parameters, so use SVMs with different kernels or different sized neural networks.
- If using neural networks, initialise each network differently, so that when trained, each model converges to a different solution.
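For the different-classifiers option, a minimal scikit-learn sketch of hard (max) voting over three base learners trained on the same training split (the synthetic data is only for illustration):
```
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

voting = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("svm", SVC()),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="hard",  # majority (max) voting on the predicted labels
)
voting.fit(X_train, y_train)
print(voting.score(X_test, y_test))
```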
|
Ensemble Model vs Normal model
|
Firstly, welcome to the site!
When do we use Ensemble model?
When there are two models which perform moderately, we combine their results to get a model which performs better. In your scenario you already have a model which gives you good results, so what is the point of implementing ensemble models?
As @Tagoma said, it depends on your data and your goal. For example, if you are trying to predict stock rates, every 0.01% matters. In such scenarios you need complex algorithms to maintain balance on that slim line, i.e. not to over-fit, not to over-train, and still predict well.
One way to check whether your model is over-trained is to give it some random data and see how it performs, or to add some noise to the training data.
One more important thing to do is to check the predictor importance and see whether any feature is highly correlated with the target variable. For example, if you are planning to predict age and you have DOB as a feature, then of course you will predict with 99.99% accuracy, but that is not what we use ML for.
If all of these checks are satisfied and you are still achieving that accuracy, then your model's performance really is good.
Finally, whether to implement an ensemble depends on your business problem and your understanding of the business.
|
113200
|
1
|
113208
| null |
1
|
45
|
I built an R RandomForest Regression model. The source training data is a historical monthly report of all closed tickets, and the data for forecasting/prediction is a report of open tickets. These reports are generated by another team.
I test/train the model using two years of historical closed ticket data, and predict (forecast) a ŷ Completion Date for each open ticket.
The closed tickets training data looks like this:
|ID |Dollars |Fruit |Etc |StartDate |CompletionDate |
|--|-------|-----|---|---------|--------------|
|AA088 |500 |Apple |... |1/1/2020 |2/15/2020 |
|AB100 |1000 |Apple |... |1/1/2020 |5/15/2020 |
|AB101 |2000 |Banana |... |1/1/2020 |5/15/2020 |
|BB723 |5000 |Apple |... |1/5/2020 |3/20/2020 |
|BB724 |3000 |Lime |... |1/5/2020 |3/20/2020 |
|BB725 |1000 |Orange |... |1/5/2020 |3/20/2020 |
The open ticket data looks similar, except it lacks CompletionDate, and sometimes various fields are "Unknown" at this time.
To build the model, I withhold "ID", make all categorical values factors, use CompletionDate as my y variable, and train the RandomForest on a majority of available features.
Recently, the team that generates this data threw a curve ball, rather than each row being a single record, rows are line-items of a higher level ticket! A majority of tickets have only one line-item, the remaining tickets can have between 2 and 6 line-items.
|ID_Parent |ID_Row |Dollars |Fruit |Etc |StartDate |CompletionDate |
|---------|------|-------|-----|---|---------|--------------|
|AA |088 |500 |Apple |... |1/1/2020 |2/15/2020 |
|AB |100 |1000 |Apple |... |1/1/2020 |5/15/2020 |
|AB |101 |2000 |Banana |... |1/1/2020 |5/15/2020 |
|BB |723 |5000 |Apple |... |1/5/2020 |3/20/2020 |
|BB |724 |3000 |Lime |... |1/5/2020 |3/20/2020 |
|BB |725 |1000 |Orange |... |1/5/2020 |3/20/2020 |
I have considered summarizing (rolling up) records, which is easy for numeric values like Dollars (`Sum(Dollars)`). I could concatenate the multiple categorical values; however, each factor is independent and has strong predictive value to the model (i.e. line items with "Apple" have a weight/meaning that would be lost if I simply concatenated them as a string with the other rows' values).
|ID_Parent |SumDollars |ConcatenatedFruit |Etc |StartDate |CompletionDate |
|---------|----------|-----------------|---|---------|--------------|
|AA |500 |Apple |... |1/1/2020 |2/15/2020 |
|AB |3000 |Apple, Banana |... |1/1/2020 |5/15/2020 |
|BB |9000 |Apple, Lime, Orange |... |1/5/2020 |3/20/2020 |
How should I handle a categorical feature like Fruit that contains multiple factors?
Can RandomForest accept a feature that contains multiple factors? Do I need to use a different type of model?
|
How to build a predictive model with multiple features?
|
CC BY-SA 4.0
| null |
2022-08-02T22:54:10.190
|
2022-08-03T09:38:31.950
|
2022-08-02T23:01:45.123
|
138801
|
138801
|
[
"machine-learning",
"predictive-modeling",
"feature-selection",
"random-forest",
"feature-extraction"
] |
My recommendation is to one-hot encode this variable, to finally obtain something like this:
|ID_Parent |SumDollars |ConcatenatedFruit_Apple |ConcatenatedFruit_Banana |ConcatenatedFruit_Lime |ConcatenatedFruit_Orange |Etc |StartDate |CompletionDate |
|---------|----------|-----------------------|------------------------|----------------------|------------------------|---|---------|--------------|
|AA |500 |1 |0 |0 |0 |... |1/1/2020 |2/15/2020 |
|AB |3000 |1 |1 |0 |0 |... |1/1/2020 |5/15/2020 |
|BB |9000 |1 |0 |1 |1 |... |1/5/2020 |3/20/2020 |
Moreover, if you one-hot encode this way, random forest can deal perfectly well with this categorical feature.
Here I provide you one code that will do what I commented:
```
import pandas as pd
df = pd.DataFrame({'id': [0, 1, 2], 'class': ['2 3', '1 3', '3 5']})
df['class'] = df['class'].apply(lambda x: x.split(' '))
df_long = df.explode('class')
dummies = pd.get_dummies(df_long['class'], prefix='class', prefix_sep='_')    # one column per class value
df_one_hot_encoded = pd.concat([df_long[['id']], dummies], axis=1)            # same (duplicated) index, so this aligns
df_one_hot_encoded_compact = df_one_hot_encoded.groupby('id').max().reset_index()  # back to one row per id
```
I've adapted it from [here](https://stackoverflow.com/questions/37646473/how-could-i-do-one-hot-encoding-with-multiple-values-in-one-cell) (answered by OmaymaS)
|
How to predict based on multiple samples?
|
This is a reasonably standard problem for supervised ML:
- The class is the variable "dropped_out"
- Given the goal to predict a variable which is specific to a particular student, an instance must represent a student, not an exam.
This definition of what an instance should consist of seems to be the part that you didn't reach yet: you correctly saw that you need to join the two datasets but in your example you join them by exam id. As a result you obtain "instances" which each represent a particular exam by a particular student, and of course the same student might appear several times in the data. The solution is to join your datasets by student id in order to make a single instance contain all the information for one student, i.e. something like this:
```
AGE, RESULT_TEST1, RESULT_TEST2, SCORE_EXAM1, SCORE_EXAM2, SCORE_EXAM3,...., DROPPED_OUT
```
However it seems that the exams are not normalized, so I see two options:
- Simplification: for each student, give only some summary statistics about their performance at exams, for example min, max, avg, std dev for both the score and the complexity. This gives a fixed number of features (8 in my example), each with a specific role so that the ML method can "make sense" of it.
- Refactor the data: if possible, rearrange the exam data so that a column corresponds to the same exam for different students. This would mean that the exam complexity is not needed anymore, because the distribution of the grades is the only thing which matters. It's ok to have some missing/undefined values for the students who didn't take a particular exam, most ML methods can deal with that.
The second option is very likely to give better results than the first, but it might be impractical to transform the data this way.
|
113236
|
1
|
113266
| null |
0
|
17
|
I am exploring tensorflow's object detection algorithm.
Prior to training I had to mark boxes around my items in the training dataset images. This was fed into training. Does the environment (surroundings) outside of the box marking matter in tensorflow's object detection algorithm? Or is the training based only on the contents inside of the marked box?
|
Does the environment matter (area outside the box) in tensorflow's object detection algorithm?
|
CC BY-SA 4.0
| null |
2022-08-04T07:55:00.783
|
2022-08-05T09:31:31.140
| null | null |
89478
|
[
"tensorflow",
"object-detection"
] |
The training is based on the boxes' content only, but during the detection process the algorithm has to scan the whole image.
Consequently, there is no learning of the environment outside the box.
Such algorithms only focus on [detecting specific objects](https://arxiv.org/pdf/1311.2524.pdf), independently from their surrounding environment.
However, TensorFlow could be used to apply [contextual object recognition](https://github.com/justinkay/context-rcnn-d2), but it requires additional components such as an attention mechanism.
|
Using Tensorflow object detection API vs Keras
|
Keras provides a high-level API, or you could say a wrapper, written on top of multiple backends.
These backends contain the core implementation of the DNNs.
The backends supported by Keras are:
- Tensorflow
- Theano
- CNTK
Source: [Keras documentation for supported backends](https://keras.io/backend/)
>
Keras hides a bit of the complexity of the DNN implementation, but in turn restricts
your freedom. If you write code in Tensorflow, you have to
explicitly specify and compute the optimizer, cost function and other
things, but it gives you flexibility.
So for me, writing in Keras is just a convenience.
As far as my knowledge is concerned, so far we don't have any fixed formula to identify the number of layers sufficient for object detection :).
|
113246
|
1
|
113411
| null |
0
|
83
|
I am studying time series analysis to apply on a new project. Well, I am confronting a dilemma that I need some help.
When I read an old version of ggplot2 book ([https://ggplot2-book.org/](https://ggplot2-book.org/)), I guess were the 2nd edition, Wickham applied the following algorithm:
- Created some columns based on the date column (month and day of week);
- Parsed these columns as factors;
- Trained a linear model; and,
- Evaluated residuals.
It is important to say that the objective was to create a model to analyze the seasonality; in other words, he was not interested in generating a forecasting model. Another important piece of information, as you may guess, is that this book was written using R as the programming language.
Well, I am transitioning to Python, and I need to accomplish a similar task. In fact, I am using the IterativeImputer from scikit-learn to fill in the missing data. In the data preparation, I took the first step as before, however I am worried about the second step. Considering this factor column has high cardinality, I did not apply any other transformation, such as dummy variables, but I am not sure if my decision is correct. I also kept the column as a float.
Beyond this, I read some articles about time series forecasting to better understand the available tools. One thing I saw was to manually add lag values as features and then use a supervised technique to do the forecasting. I believe I could improve the results by using the autocorrelation to select the lag values.
Summarizing my questions:
- When should I apply a dummy transformation to factor features in Python?
- When the data present high cardinality, should I keep the column as a float?
- In general, which of the two techniques shows better results?
- Are there other ways to deal with this situation?
Thanks
|
Time Series Data Imputation
|
CC BY-SA 4.0
| null |
2022-08-04T14:03:49.243
|
2022-08-11T07:24:04.930
| null | null |
138591
|
[
"python",
"scikit-learn",
"time-series"
] |
It seems that I need to improve my feature engineering. When I read this [https://mlcourse.ai/book/topic09/topic9_part1_time_series_python.html](https://mlcourse.ai/book/topic09/topic9_part1_time_series_python.html) tutorial, I understood this, especially the section [https://mlcourse.ai/book/topic09/topic9_part1_time_series_python.html#linear-and-not-only-models-for-time-series](https://mlcourse.ai/book/topic09/topic9_part1_time_series_python.html#linear-and-not-only-models-for-time-series).
P.S.: the cardinality of factor features is still something I have not figured out.
|
How to fill missing consumption data on time series?
|
The approach you're trying to describe is being able to `fill` the gaps in your data.
# Filling N/A in the data
Since you're working in Python, I'm guessing your data is stored as a Dataframe. Pandas has a specific function for this: [DataFrame.fillna()](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.fillna.html).
This lets you fill any `NaN` values with multiple methods.
There are some similar examples in this [answer](https://stackoverflow.com/a/18691949/9314815).
# Filling N/A and changing the following item
From my knowledge, Dataframes don't yet have any functionality to do this.
The best option I can think of is to iterate through the series. You could either convert to a list with `.tolist()` then use a `for` loop, or use [Series.iteritems()](https://pandas.pydata.org/docs/reference/api/pandas.Series.iteritems.html)
In your loop, you'll need a condition to check whether the current item is `NaN` and, if so, replace it with the average of its neighbouring values. You may also need a condition for the edge case where the final value in the list is `NaN`.
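As a minimal sketch (assuming a numeric consumption series), linear interpolation already gives the "average of the neighbours" for single gaps, so the manual loop is often not needed:
```
import pandas as pd
import numpy as np

s = pd.Series([10.0, np.nan, 14.0, np.nan, np.nan, 20.0])

# Option 1: simple fills
filled_ffill = s.fillna(method="ffill")   # repeat the previous value
filled_mean = s.fillna(s.mean())          # fill with the overall mean

# Option 2: linear interpolation -> a single gap becomes the average of its neighbours
filled_interp = s.interpolate(method="linear")

print(filled_interp.tolist())  # [10.0, 12.0, 14.0, 16.0, 18.0, 20.0]
```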
|
113309
|
1
|
113315
| null |
0
|
695
|
I have pandas dataframes - test & train,they both have `text` and `label` as columns as shown below -
```
label text
fear ignition problems will appear
joy enjoying the ride
```
As usual, to run any Transformers model from the HuggingFace, I am converting these dataframes into `Dataset` class, and creating the classLabels (fear=0, joy=1) like this -
```
from datasets import Dataset, DatasetDict
traindts = Dataset.from_pandas(traindf)
traindts = traindts.class_encode_column("label")
testdts = Dataset.from_pandas(testdf)
testdts = testdts.class_encode_column("label")
```
Finally these `Datasets` are put into `DatasetDict`like this-
```
emotions = DatasetDict({
"train" : traindts ,
"test" : testdts
})
```
Everything works well, but as you can see, the way I am doing it can definitely be improved. How can it be done more efficiently, in fewer lines?
|
Creating class labels for custom DataSets efficiently (HuggingFace)
|
CC BY-SA 4.0
| null |
2022-08-07T19:33:44.003
|
2022-08-09T04:12:46.887
| null | null |
138956
|
[
"nlp",
"transformer",
"huggingface"
] |
This is a coding style issue, so people may well have different opinions! But I don't see any problem with the way you've coded it.
If you really want to reduce the number of lines of code you could combine the two assignments to traindts into one statement, and the same with testdts:
```
traindts = Dataset.from_pandas(traindf).class_encode_column("label")
testdts = Dataset.from_pandas(testdf).class_encode_column("label")
```
If you don't use traindts and testdts anywhere else, you could then even remove the assignment statements altogether and move all the code into the call to DatasetDict:
```
emotions = DatasetDict({
"train" : Dataset.from_pandas(traindf).class_encode_column("label"),
"test" : Dataset.from_pandas(traindf).class_encode_column("label")
})
```
But then you are sacrificing readability for fewer lines of code. So if it were me, I probably would make the first change, but wouldn't bother with the second one.
|
How to split train/test datasets according to labels' classes
|
You can use the argument `stratify=Y_source` to maintain the proportions after splitting.
|
113313
|
1
|
113316
| null |
0
|
425
|
Many papers and books say that the sigmoid activation function with random initialization is prone to vanishing/exploding gradients, and therefore it is better to use LeakyReLU, ELU, or ReLU. Does this mean that we should use them in the final layer of binary classification as well?
|
If sigmoid activation function is prone to vanishing and exploding gradients can we not use it in final layer of binary classfication?
|
CC BY-SA 4.0
| null |
2022-08-08T07:15:26.033
|
2022-08-08T08:31:02.937
| null | null |
70546
|
[
"machine-learning",
"deep-learning",
"activation-function"
] |
Vanishing & Exploding Gradient problem happens in case of deep neural network. In NN when we have to update weights & biases for each layer we calculate the partial derivate with respect to y_hat at each layer (Back Propogation Algorithm). Because in this case weights are multiplied in chain with each other.
As Sigmoid is used in last layer it will only be just gradient and does not have impact of other layer while initial layer will be multiplies by weights of earlier layer leading to Vanishing Gradient problem.
So Sigmoid in last layer does not lead to Vanishing Gradient problem and you can use it safely.
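A minimal Keras sketch of this typical setup, with ReLU activations in the hidden layers and a sigmoid only in the output layer (the layer sizes and input shape here are arbitrary):
```
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),  # hidden layers: ReLU family
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output: sigmoid for binary classification
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```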
|
Using sigmoid in binary DNN output layer instead of softmax?
|
In a binary classification problem you have only 2 classes, let's call them the negative and the positive class.
You only need to output 1 number, which corresponds to the probability of your input point belonging to the positive class.
The sigmoid activation function is good for that because it maps any input value to the range ]0,1[ which is what we want for a probability (it is not a real issue that 0 and 1 are excluded).
Since you have only one output number, it makes no sense to use a softmax activation.
Softmax activation is used in the multiclass problem where you must predict 1 of N classes where N is greater or equal than 3 and in this case the number of outputs is N (1 probability by class).
The softmax function makes that all your outputs sum to 1 and it amplifies the gap between high and low probabilities.
|
113343
|
1
|
113344
| null |
0
|
38
|
I'll start with some examples. Think about a sentence like "Mazda CX5 is a good car.". The NLTK sentiment analysis module "Vader" will give a positive polarity score for the sentence. Meanwhile, a positive score will also be assigned to a sentence like "Mazda CX5 is a better car compared to Subaru Forester." However, that sentence in fact has a negative sentiment towards Subaru Forester. I wonder if there is any algorithm that can actually identify such a difference between the general sentiment of a sentence and the sentiment towards a certain word in the sentence.
|
Is there any sentiment analysis algorithm to identify sentiment of a sentence towards a certain word in the sentence?
|
CC BY-SA 4.0
| null |
2022-08-09T02:21:16.717
|
2022-08-09T04:03:10.067
|
2022-08-09T02:28:53.013
|
139008
|
139008
|
[
"nlp",
"sentiment-analysis"
] |
Aspect-based sentiment analysis tries to solve the above problem. It categorizes the data by aspect and assigns a sentiment to each aspect. Let's say you have a restaurant review:
Food was good and service was bad
It will create 2 categories:
- Aspect: Food, Sentiment: Positive
- Aspect: Service, Sentiment: Negative
|
Clustering text data based on sentiment?
|
In my opinion there are two main problems with your approach:
- The clustering is extremely unlikely to correspond to sentiment, unless the features that you use for clustering are specifically engineered to represent sentiment. In general, text clustering tends to group documents by common words, i.e. by similar topic. This might lead to different categories of reviews by type of product, for example.
- The second and I think most important issue is that without any labelled data, you can't evaluate the system. A common mistake would be to use the classes obtained from the clustering in order to evaluate the classification model: this doesn't evaluate the full task of sentiment analysis since there's no way to know how well the clustering represents sentiment. The proper method is to manually annotate a random subset of documents for the purpose of evaluation.
Also in general the second part with the classification model is not needed because the unsupervised clustering model can directly be applied to new instances.
|
113352
|
1
|
113354
| null |
0
|
836
|
Given a list of strings L1 `L1 = ['a', 'b', 'c']`, I need to extract the rows which contain the values given in list L1. I used the isin function: `df[df['column1'].isin(L1)]`
The data contains the following values in a column 1:
- 'a'
- 'c'
- 'a, d'
- 'brp'
The data contains the following values in a column 2:
- ['a']
- ['c']
- ['a', 'd']
- ['brp']
The output I need should print all the rows because the string 'a' is present in L1, but the output returns only 2 rows: rows 1 and 2 (that is, the rows containing the strings 'a' and 'c').
How do I modify the code so that it returns the 3rd row as well?
|
Str.contains and isin function do not return all correct rows of dataframe
|
CC BY-SA 4.0
| null |
2022-08-09T08:38:31.023
|
2022-08-09T10:08:38.143
| null | null |
136954
|
[
"python",
"dataset",
"pandas",
"dataframe"
] |
You can use the [str.contains](https://pandas.pydata.org/docs/reference/api/pandas.Series.str.contains.html) method for this using a regex pattern:
```
import pandas as pd
L1 = ["a", "b", "c"]
df = pd.DataFrame({
"column1": ["a", "c", "a, d", "brp"]
})
# use the '|' character to check if the strings contains any of the characters in L1
df[df["column1"].str.contains("|".join(L1))]
```
|
Applying a matching function for string and substring with missing values on a python dataframe
|
This corresponds to a deduplication or [record linkage](https://en.wikipedia.org/wiki/Record_linkage) problem.
There are various ways to compare records (numbers in your case), but the main issue is almost always about [the double loop](https://datascience.stackexchange.com/q/54570/64377): in the general problem, every possible pair of records must be compared.
In case there are too many numbers for the double loop, you could implement the blocking technique [described here](https://datascience.stackexchange.com/a/68413/64377).
Your design may have an additional issue: your matching method is not transitive, i.e. you can have cases where $a$ matches $b$, $b$ matches $c$ but $a$ doesn't match $c$. Apparently you plan to solve this by picking the first match. This might not be optimal for matching the maximum number of values.
I'm not expert at all with pandas but I doubt that there would be any predefined function which does what you need. `factorize` relies on strict equality, it's much simpler because it can collect all the unique values in one pass.
|
113359
|
1
|
113366
| null |
2
|
660
|
I would like to train a BERT model from scratch. I read the paper as well as some online material. It seems there is no preprocessing involved, e.g. removing punctuation, stopwords, etc.
I wonder why that is, and would it improve the model if I did so?
|
why there is no preprocessing step for training BERT?
|
CC BY-SA 4.0
| null |
2022-08-09T12:33:30.090
|
2022-08-10T08:56:16.710
| null | null |
86325
|
[
"deep-learning",
"nlp",
"bert"
] |
Although a definitive answer can only be obtained by actually trying it and it would depend on the specific task where we evaluate the resulting model, I would say that, in general, no, it would not improve the results to remove stopwords and punctuation.
We have to take into account that the benefit of BERT over more traditional approaches is that it learns to compute text representations in context. This means that the representations computed for a word in a specific sentence would be different from the representations for the same word in a different sentence. This context also comprises stopwords, which can very much change the meaning of a sentence. The same goes for punctuation: a question mark can certainly change the overall meaning of a sentence. Therefore, removing stopwords and punctuation would just imply removing context which BERT could have used to get better results.
That is why you will not see stopwords being removed for deep learning approaches applied to tasks where language understanding is key for success.
Furthermore, while blindly doing stopword removal has been a "tradition" in tasks like topic modeling, its usefulness is beginning to be questioned even for those tasks in [recent research](https://aclanthology.org/E17-2069/).
- Regarding the tokenizer:
BERT has a word-piece vocabulary that was learned over the training corpus. I don't think removing tokens manually is an option here, given that they are word pieces and you wouldn't know in which words they may be used. It would be possible, however, to identify low-frequency tokens by encoding a large corpus and then removing the lowest-frequency ones. Nevertheless, BERT's vocabulary has almost 1000 (purposefully) unused tokens, so there is room to remove unused tokens if that's what you want. I don't think it would make a difference, though.
|
Fine tuning BERT without pre-training it on domain specific corpus
|
Is your corpus big enough? (= several GBs)
If yes, you could train a model from scratch and have good results.
[https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6](https://towardsdatascience.com/how-to-train-a-bert-model-from-scratch-72cfce554fc6)
If not, fine-tuning should be better. You can always try to train it from scratch but you might have sometimes wrong results. Perhaps you can add some training data from similar sources to reach an optimal result.
[https://www.tensorflow.org/tfmodels/nlp/fine_tune_bert](https://www.tensorflow.org/tfmodels/nlp/fine_tune_bert)
|
113381
|
1
|
113387
| null |
0
|
53
|
Say I split my raw data into train and test sets. Should I clean them first and denoise the datasets before I start creating new features or, should I create new features for both the train and test set and then clean/denoise them?
I'm looking to create my own Transformers for use in an sklearn ML pipeline but I am unsure about the order in which to do things.
p.s. I would be performing cross-validation and want to prevent data leakage.
|
Denoising in ML Pipeline
|
CC BY-SA 4.0
| null |
2022-08-10T12:04:33.383
|
2022-08-10T12:36:00.760
| null | null |
122363
|
[
"machine-learning",
"scikit-learn",
"data-cleaning"
] |
It is not clear what kind of data you have. If you are using images, denoising is done image by image, in which case you wouldn't have data leakage whether you split before or after. But if you are talking about a time-related dataset, where cleaning/denoising could consist of using moving averages or statistics computed from the whole dataset, you should definitely split first, to avoid statistics from the validation/test dataset leaking into the training dataset.
As a rule of thumb, always ask yourself: when I am using this model in production, what will the data presented to it look like?
A simpler, more trivial example that illustrates this is when you fit a scaler on your training set and use it later on the validation/test dataset: in production you will have a pre-trained scaler and will have to apply it to the new data that is presented to you.
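A minimal sketch of that scaler example with scikit-learn, where putting the transformer inside a Pipeline guarantees it is re-fitted only on the training portion of every cross-validation fold (the estimator choice is arbitrary):
```
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# The scaler is fitted on each fold's training split only, never on its validation split
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression()),
])
scores = cross_val_score(pipe, X, y, cv=5)
print(scores.mean())
```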
More specifically for the case where you want to use cross-validation:
for each fold, you should do the denoising of the training and validation parts separately.
If that wasn't super helpful, could you clarify what type of data you are using and what exactly you intend to do as denoising?
|
Rendered Image Denoising
|
Your link is to a paid course :) In ray tracing, too few samples will generate something like the image at the top [](https://i.stack.imgur.com/kT47n.png) In fact, the link with the picture answers your question: [https://chunky.llbit.se/path_tracing.html](https://chunky.llbit.se/path_tracing.html)
2) Ray tracing is hard... but not impossible; google for "python ray tracing module". But something that looks close can be done easily: [https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv](https://stackoverflow.com/questions/22937589/how-to-add-noise-gaussian-salt-and-pepper-etc-to-image-in-python-with-opencv) Although on actual ray-traced images the noise can change because of slope and environment.
If you still want ray-traced noisy images, it is better to find tutorials for 3D modelling programs, like "ray tracing in 3D Studio MAX tutorial".
|
113414
|
1
|
113470
| null |
0
|
156
|
Suppose we have an SVM trained on a dataset and the support vectors are $SV=\{x_1,x_2,\cdots,x_n\}$. We know that the decision boundary is determined by $SV$. My question is: if we remove one support vector (say $x_1$) from the dataset and train the model again, will the other support vectors $\{x_2,\cdots,x_n\}$ still be support vectors after training on the new dataset?
|
Will removing one support vector affect others?
|
CC BY-SA 4.0
| null |
2022-08-11T07:56:34.417
|
2022-08-12T16:49:38.083
| null | null |
23672
|
[
"machine-learning",
"classification",
"svm"
] |
No, you cannot say that.
Check the next image as an example, in which I removed one blue support vector and the new boundary does not use any of the old support vectors. More specifically, on the left example the margin is passing through the yellow area, while on the right one it is passing through the green one.
[](https://i.stack.imgur.com/624jF.png)
|
removing words based on a predefined vector
|
```
texts <- c("This is the first document.",
"Is this a text?",
"This is the second file.",
"This is the third text.",
"File is not this.")
test_stopword <- as.data.frame(texts)
ordinal_stopwords <- c("first","primary","second","secondary","third")
(newdata <- as.data.frame(sapply(texts, function(x) gsub(paste(ordinal_stopwords, collapse = '|'), '', x))))
```
The output is getting skewed when added in a code block ([maybe a bug in SE](https://meta.stackexchange.com/q/270069/302377)). But, you would get the desired output.
|
113449
|
1
|
113450
| null |
0
|
1412
|
```
image_datagen.flow_from_directory(
directory=src_path_train,
target_size=(100, 100),
color_mode="rgb",
batch_size=batch_size,
class_mode="categorical",
subset='training',
shuffle=True,
seed=42
)
```
What does shuffle in the code snippet mean? Does it indicate that the `flow_from_directory` function shuffles the images before loading them? If so, how does it help the training procedure?
Again, I'm reading an [article](https://studymachinelearning.com/keras-imagedatagenerator-with-flow_from_directory/) where the shuffle setting is `True` for training and validation but `False` for testing. Why is this different for testing?
```
train_generator = image_datagen.flow_from_directory(
directory=src_path_train,
target_size=(100, 100),
color_mode="rgb",
batch_size=batch_size,
class_mode="categorical",
subset='training',
shuffle=True,
seed=42
)
valid_generator = image_datagen.flow_from_directory(
directory=src_path_train,
target_size=(100, 100),
color_mode="rgb",
batch_size=batch_size,
class_mode="categorical",
subset='validation',
shuffle=True,
seed=42
)
test_generator = test_datagen.flow_from_directory(
directory=src_path_test,
target_size=(100, 100),
color_mode="rgb",
batch_size=1,
class_mode=None,
shuffle=False,
seed=42
)
```
the above code snippet is taken from the [article](https://studymachinelearning.com/keras-imagedatagenerator-with-flow_from_directory/) where the shuffle setting is True for training and validation but False for testing.
|
what does shuffle and seed parameter in Keras image_gen.flow_from_directory() signify?
|
CC BY-SA 4.0
| null |
2022-08-12T06:29:10.443
|
2022-08-12T09:52:46.343
| null | null |
139069
|
[
"neural-network",
"keras",
"tensorflow",
"image-classification",
"training"
] |
When `shuffle = True`, your dataset will be randomly shuffled so that the model does not see the samples in the same order every epoch; passing samples in different orders makes the model more robust to overfitting. That's why it is advisable to turn shuffling on during training, while during inference (validation/test) you only need to get the outputs, there is no training, and hence no need for shuffling.
Even though the shuffling is random, you can still reproduce your results using the `seed` parameter: it makes the generator produce the same ordering every time. If you don't set a seed, every run will shuffle differently and you cannot reproduce the results.
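A practical benefit of `shuffle=False` for the test generator is that predictions stay aligned with the generator's file list; a minimal sketch (assuming a trained `model` and the `test_generator` from the question):
```
import numpy as np

# With shuffle=False, the i-th prediction corresponds to the i-th file
preds = model.predict(test_generator)
pred_classes = np.argmax(preds, axis=1)

for filename, cls in zip(test_generator.filenames, pred_classes):
    print(filename, cls)
```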
|
What is the scope of Keras' ImageDataGenerator.flow_from_dataframe seed parameter?
|
Further investigation confirms that, in this case, Keras does indeed modify the global random number generator.
The repo has active issues and PRs that address this behaviour in other areas of the library by using a local random state e.g. [this](https://github.com/keras-team/keras/issues/12258) issue.
|
113460
|
1
|
117407
| null |
0
|
68
|
I am trying to run a LASSO regression via the enet function (from the elasticnet library) in R on each one of a large number of individual csv datasets, all within the same file folder, for a research project. Each dataset has 1 column with observations on the dependent variable, called Y, and 30 columns with observations on the independent variables, called X1:X30 respectively.
I have absolutely no idea how to do this or even what search terms to use to look it up; I have already tried both Google and Bing several times. I believe that the only packages my code as it stands requires are:
leaps
lars
stats
plyr
dplyr
readr
elasticnet
This is my code to run the LASSO regression itself, once one of you nice people helps me either load the data beforehand or adjust this function in order to do that part inside the function itself (obviously, I made up the dataframe names for the x & y arguments in the enet() function for this post/question):
```
## Attempt 2: Run a LASSO regression using
## the enet function from the elasticnet library
set.seed(11)
library(elasticnet)
enet_LASSO <- enet(x = as.matrix(df_all_obs_on_all_of_the_IVs),
y = df_all_obs_on_the_DV,
lambda = 0, normalize = FALSE)
print(enet_LASSO)
# In order to ascertain which predictors/regressors are still
# included in the version of the model after running a
# LASSO regression on it for the purpose of variable selection,
# I am going to use the 'predict' method from the stats package.
LASSO_coeffs <- predict(enet_LASSO,
x = as.matrix(df_all_obs_on_all_of_the_IVs),
s = 0.1, mode = "fraction", type = "coefficients")
print(LASSO_coeffs)
```
Again, I am still a newbie/novice at coding in general. My background is much stronger on the statistics, probability, and econometrics end of data science than the coding side to be honest. But I am trying to learn.
|
How to run a regression in R on multiple different csv data files within the same folder
|
CC BY-SA 4.0
| null |
2022-08-12T12:01:08.540
|
2022-12-30T03:15:02.707
|
2022-12-30T03:13:48.170
|
105709
|
105709
|
[
"machine-learning",
"r",
"feature-selection",
"research"
] |
This question is not worded very well, it needs a lot more detail and probably ought to be broken up into multiple questions honestly, but since you said this is one of the first questions you have asked on here, I'll give it a shot.
First, you'll have to assign the filepath of the file folder with the datasets in it to an object like so:
```
folderpath <- "/file-folder_filepath"
```
Then, create a list of the paths for each dataset in that folder with the following line of R code:
```
csvpaths_list <- list.files(path = folderpath, full.names = TRUE, recursive = TRUE)
```
Then, you can read all of the datasets in this folder into R with:
```
datasets_list <- lapply(csvpaths_list, read.csv)
```
And now we are finally getting somewhere! Ready to run your LASSO Regressions on each dataset in the list we just created by running:
```
LASSO.fits <- lapply(datasets_list, function(i)
enet(x = as.matrix(select(i, starts_with("X"))),
y = i$Y, lambda = 0, normalize = FALSE))
```
Let me know if this all runs and gets you more or less what you are looking for.
|
How to run a BE or FS Stepwise Regression on each dataset in a file folder full of datasets using lapply or map (without a loop)
|
First, on that many datasets, consider using fread from the data.table package rather than the standard but slow read.csv.
As for the stepwise regression function, using the step function from the stats package to do it can do done in this manner for a Forward Stepwise Regression:
```
FS_fits <- lapply(X = datasets, \(X) {
nulls <- lm(X$Y ~ 1, data = X)
full_models <- lm(X$Y ~ ., X)
forward <- stats::step(object = nulls, direction = 'forward',
        scope = formula(full_models), trace = FALSE)})
```
Something quite similar should do for Backward Stepwise as well, except you won't need a nulls regression in it, the object will be set equal to full_models, and the direction will be 'backward' of course.
|
113489
|
1
|
113498
| null |
0
|
162
|
Can Sentence-BERT embed an entire paragraph instead of only a sentence? For example, a description of a movie.
If so, what if the word count exceeds the input limit? I remember BERT can only take up to 512 tokens as input.
|
Input length of Sentence BERT
|
CC BY-SA 4.0
| null |
2022-08-13T19:51:57.510
|
2022-08-14T05:37:16.250
| null | null |
130605
|
[
"nlp",
"bert"
] |
Since BERT is designed for sentences, it captures the context within a sentence. In a paragraph, however, there will be several sentences and several contexts, so it is not a good idea to feed a whole paragraph into BERT: you will not get good results.
You should first split the paragraph into sentences using nltk or spaCy.
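A minimal sketch of that idea, splitting with NLTK and averaging the per-sentence Sentence-BERT embeddings into one paragraph vector (the model name is just a common choice, not a requirement):
```
import numpy as np
from nltk.tokenize import sent_tokenize  # requires nltk.download("punkt") once
from sentence_transformers import SentenceTransformer

paragraph = "A small-town detective investigates a disappearance. The film is slow but rewarding."

sentences = sent_tokenize(paragraph)
model = SentenceTransformer("all-MiniLM-L6-v2")
sentence_embeddings = model.encode(sentences)               # shape: (n_sentences, 384)
paragraph_embedding = np.mean(sentence_embeddings, axis=0)  # simple average pooling
```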
|
Why do BERT classification do worse with longer sequence length?
|
Some points to investigate
- The same settings with the same number of epochs may perform worse on longer sequences. With longer sequences you are increasing the complexity of the data, so you may need to increase the complexity of the model, e.g. by letting it train longer or by adding more layers.
- If you are dealing with paper abstracts, they are usually rich in keywords. You may already have reached the capacity of your data with 128 words from an abstract, which already covers the topic fairly well.
- In general, make sure you are not applying heavy pre-processing. Neural sequence models do not need pre-processing such as stop-word removal; you are better off keeping stopwords in your sentences, since this is how pretrained models like BERT were actually trained.
- In case you are destroying the sentences, i.e. you extract textual features like keywords from the abstract and tokenize those, they will (to some extent) work like a "query". There are studies showing that longer queries decrease BERT performance (see H4). The more sequentially informative your input is, the more you get out of sequence models. If the sequence is destroyed, everything is basically reduced to a combination of word vectors, and thus nothing more than non-sequence methods (ranging from TF-IDF to word embeddings).
|
113506
|
1
|
113524
| null |
1
|
59
|
We did a POC for customer segmentation and followed the below approach
a) extract data from source system (SAP business objects)
b) Use python jupyter notebook to manipulate, merge and group data (multiple csv files)
c) We cluster based on some preset variables. So, we use the below 4 variables:
- Recency (R)
- Frequency (F)
- Customer duration with our company (indicates loyalty) (Y)
- No. of different market segments entered by the customer (indicates cross-selling) (P)
d) Run 1d kmeans algorithm (Jenks Breaks algo) for each variable. So, 4 algos are run (for 4 variables)
e) For the sake of interpretability and for easy modifications of rules based on business criteria, we also incorporate a rule to finally come up with meaningful customer segments like below
f) based on each business users defined requirement, we send out automated emails on a monthly basis
[](https://i.stack.imgur.com/3WO2R.png)
Now, my questions are as follows
a) How can I make this automated? My data gets updated every 45 days. We are always looking to create 4/5 clusters for the Recency and Frequency variables and 2 and 3 clusters for the Prod and Years variables. This will not change.
b) But since we provide results to sales users to follow up with customers, we want to be able to track the results across each run and have a dashboard to know whether a customer who needed attention has now moved to the loyalist or champions segment because our sales users continuously followed up with them. We would like to measure the transition between segments, and this is planned to be used as a KPI for sales users. How can we do this?
c) Is the 1D k-means algorithm considered an AI algorithm?
d) How can this be made into a pipeline? Any suggestions on how to improve this project further are welcome.
|
Automate Clustering predictions and RFM metrics
|
CC BY-SA 4.0
| null |
2022-08-14T10:44:33.743
|
2022-08-15T07:54:34.703
| null | null |
64876
|
[
"machine-learning",
"clustering",
"data-mining",
"k-means",
"mlops"
] |
The main advantage of unsupervised learning is to be able to make meaningful clusters and hence valid scenarios.
That's why it is not always necessary to build a fully automated solution, but rather a robust one that could later be automated.
I don't know how many features you have, but UMAP is great for clustering non-linear data, even if you have more than 20 features. It is also a random algorithm, but it supports [reproducibility](https://umap-learn.readthedocs.io/en/latest/reproducibility.html).
I recommend using UMAP with reproducibility and then K-Means to classify the clusters automatically.
Once that is done, get the range of the data for each cluster (i.e. min/max of each cluster, plus standard deviation or mean values if interesting), so that you can detect the different groups and derive valid classification rules. Those rules can then be applied to any new data without needing to go through the UMAP/K-Means process.
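A minimal sketch of that flow (the number of clusters and the UMAP settings are assumptions to adapt to your data):
```
import numpy as np
import umap
from sklearn.cluster import KMeans

# X: customers x features matrix (e.g. R, F, Y, P); random data as a stand-in here
X = np.random.rand(500, 4)

# 1. Reduce to 2D with a fixed random_state for reproducibility
embedding = umap.UMAP(n_components=2, n_neighbors=15, random_state=42).fit_transform(X)

# 2. Cluster the embedding
labels = KMeans(n_clusters=5, random_state=42, n_init=10).fit_predict(embedding)

# 3. Describe each cluster in the original feature space to derive simple rules
for k in range(5):
    cluster = X[labels == k]
    print(k, cluster.min(axis=0), cluster.max(axis=0), cluster.mean(axis=0))
```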
If there is a lot of new data in the future, it could be necessary to repeat UMAP/K-Means because of new potential groups. It depends on the data complexity over time.
Here is an [example](https://www.kaggle.com/code/bextuychiev/beautiful-umap-tutorial-on-100-dimensional-data/notebook) of how to achieve this.
More information:
[Understanding UMAP interactively.](https://pair-code.github.io/understanding-umap/)
[Basic clustering with UMAP.](https://umap-learn.readthedocs.io/en/latest/clustering.html)
[How exactly UMAP works.](https://towardsdatascience.com/how-exactly-umap-works-13e3040e1668)
|
Clustering - Auto ML Solutions
|
Classification predictions can be evaluated using accuracy, whereas regression predictions cannot. Regression predictions can be evaluated using root mean squared error, whereas classification predictions cannot. Clustering is totally different! You are not looking for some accuracy measure or precision of prediction using a supervised technique, but rather, clustering is used to group data points having similar characteristics. This is why it is known as unsupervised learning.
Try Mean Shift for automatically detecting the optimal number of clusters. Here's an example. Hopefully you can adapt it for your specific use.
```
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth
from sklearn.datasets import make_blobs
# #############################################################################
# Generate sample data
centers = [[1, 1], [-1, -1], [1, -1], [1, -1], [1, -1]]
X, _ = make_blobs(n_samples=10000, centers=centers, cluster_std=0.2)
# #############################################################################
# Compute clustering with MeanShift
# The following bandwidth can be automatically detected using
bandwidth = estimate_bandwidth(X, quantile=0.6, n_samples=5000)
ms = MeanShift(bandwidth=bandwidth, bin_seeding=True)
ms.fit(X)
labels = ms.labels_
cluster_centers = ms.cluster_centers_
labels_unique = np.unique(labels)
n_clusters_ = len(labels_unique)
print("number of estimated clusters : %d" % n_clusters_)
# #############################################################################
# Plot result
import matplotlib.pyplot as plt
from itertools import cycle
plt.figure(1)
plt.clf()
colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk')
for k, col in zip(range(n_clusters_), colors):
    my_members = labels == k
    cluster_center = cluster_centers[k]
    plt.plot(X[my_members, 0], X[my_members, 1], col + '.')
    plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col,
             markeredgecolor='k', markersize=14)
plt.title('Estimated number of clusters: %d' % n_clusters_)
plt.show()
```
[](https://i.stack.imgur.com/r71jx.png)
Or, try this.
```
import numpy as np
import pandas as pd
from sklearn.cluster import MeanShift
from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
from sklearn.datasets import make_blobs
# We will be using the make_blobs method
# in order to generate our own data.
clusters = [[2, 2, 2], [7, 7, 7], [5, 13, 13]]
X, _ = make_blobs(n_samples = 150, centers = clusters,
cluster_std = 0.60)
# After training the model, We store the
# coordinates for the cluster centers
ms = MeanShift()
ms.fit(X)
cluster_centers = ms.cluster_centers_
# Finally We plot the data points
# and centroids in a 3D graph.
fig = plt.figure()
ax = fig.add_subplot(111, projection ='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], marker ='o')
ax.scatter(cluster_centers[:, 0], cluster_centers[:, 1],
cluster_centers[:, 2], marker ='x', color ='red',
s = 300, linewidth = 5, zorder = 10)
plt.show()
```
[](https://i.stack.imgur.com/emG5I.png)
There are a few clustering methodologies that help you choose the optimal number of clusters automatically. Check out the link below for some ideas of how to move forward with your project.
[https://scikit-learn.org/stable/modules/clustering.html](https://scikit-learn.org/stable/modules/clustering.html)
|
113516
|
1
|
113528
| null |
1
|
546
|
I have the following dataset:
[https://raw.githubusercontent.com/Joffreybvn/real-estate-data-analysis/master/data/clean/belgium_real_estate.csv](https://raw.githubusercontent.com/Joffreybvn/real-estate-data-analysis/master/data/clean/belgium_real_estate.csv)
I want to predict the price column, based on the other features, basically I want to predict house price based on square meters, number of rooms, postal code, etc.
So I did the following:
Load data:
```
workspace = Workspace(subscription_id, resource_group, workspace_name)
dataset = Dataset.get_by_name(workspace, name='BelgiumRealEstate')
data =dataset.to_pandas_dataframe()
data.sample(5)
Column1 postal_code city_name type_of_property price number_of_rooms house_area fully_equipped_kitchen open_fire terrace garden surface_of_the_land number_of_facades swimming_pool state_of_the_building lattitude longitude province region
33580 33580 9850 Landegem 1 380000 3 127 1 0 1 0 0 0 0 as new 3.588809 51.054637 Flandre-Orientale Flandre
11576 11576 9000 Gent 1 319000 2 89 1 0 1 0 0 2 0 as new 3.714155 51.039713 Flandre-Orientale Flandre
12830 12830 3300 Bost 0 170000 3 140 1 0 1 1 160 2 0 to renovate 4.933924 50.784632 Brabant flamand Flandre
20736 20736 6880 Cugnon 0 270000 4 218 0 0 0 0 3000 4 0 unknown 5.203308 49.802043 Luxembourg Wallonie
11416 11416 9000 Gent 0 875000 6 232 1 0 0 1 0 2 0 good 3.714155 51.039713 Flandre-Orientale Flandre
```
I one-hot encoded the categorical features: city, province, region, and state of the building:
```
one_hot_state_of_the_building=pd.get_dummies(data.state_of_the_building)
one_hot_city = pd.get_dummies(data.city_name, prefix='city')
one_hot_province = pd.get_dummies(data.province, prefix='province')
one_hot_region=pd.get_dummies(data.region, prefix ="region")
```
Then I added those columns to the pandas dataframe
```
#removing categorical features
data.drop(['city_name','state_of_the_building','province','region'],axis=1,inplace=True)
#Merging one hot encoded features with our dataset 'data'
data=pd.concat([data,one_hot_city,one_hot_state_of_the_building,one_hot_province,one_hot_region],axis=1)
```
I remove the price
```
x=data.drop('price',axis=1)
y=data.price
```
then train test split
```
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=.3)
```
then I train:
```
x_df = DataFrame(x, columns= data.columns)
x_train, x_test, y_train, y_test = train_test_split(x_df, y, test_size=0.15)
#Converting the data into proper LGB Dataset Format
d_train=lgb.Dataset(x_train, label=y_train)
#Declaring the parameters
params = {
'task': 'train',
'boosting': 'gbdt',
'objective': 'regression',
'num_leaves': 10,
'learnnig_rate': 0.05,
'metric': {'l2','l1'},
'verbose': -1
}
#model creation and training
clf=lgb.train(params,d_train,10000)
#model prediction on X_test
y_pred=clf.predict(x_test)
#using RMSE error metric
mean_squared_error(y_pred,y_test)
```
However, the RMSE is:
6053845952.2186775
which seems like a huge number.
I am not sure what I am doing wrong here.
|
How to improve Regression RMSE with LightGBM
|
CC BY-SA 4.0
| null |
2022-08-14T17:07:36.887
|
2022-08-15T09:12:16.050
| null | null |
132417
|
[
"python",
"regression",
"pandas",
"lightgbm"
] |
`mean_squared_error(y_pred,y_test)` is MSE, not RMSE (which would be `mse ** 0.5`). Taking a square root of it yields around 80k, which is not that huge compared to your actual price values - you seem to have around 75% explained variance, which is quite decent.
You can probably improve it further by performing some EDA and dealing with outliers somehow (MSE is outlier sensitive). You should also check for possible highly correlated features, as those inflate your model variance (at a quick glance, you don't use `drop_first` when doing OHE, thus getting redundant columns).
Scaling is not really a must, tree models, including gradient boosting on trees, are rather indifferent to scale.
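A small sketch of those two points (the metric fix and the one-hot encoding flag), reusing the variable names from the question:
```
from sklearn.metrics import mean_squared_error

# RMSE rather than MSE
rmse = mean_squared_error(y_test, y_pred) ** 0.5
# or equivalently: mean_squared_error(y_test, y_pred, squared=False)
print(rmse)

# Avoid redundant one-hot columns by dropping the first level of each category, e.g.
# one_hot_city = pd.get_dummies(data.city_name, prefix="city", drop_first=True)
```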
|
How to use r2-score as a loss function in LightGBM?
|
$R^2$ [is just a rescaling of](https://stats.stackexchange.com/a/250735/232706) mean squared error, [the default loss function](https://lightgbm.readthedocs.io/en/latest/Parameters.html#objective) for LightGBM; so just run as usual. (You could use another builtin loss (MAE or Huber loss?) instead in order to penalize outliers less.)
|
113517
|
1
|
113704
| null |
1
|
59
|
I have the following pandas dataframe:
df_1:
```
User Docs Pref
user1 doc1 m1
user1 doc2 m2
user1 doc3 m1
user1 doc4 m3
user2 doc1 m1
user2 doc2 m2
user3 doc1 m3
user4 doc1 m2
```
I need to get the following data frame:
```
User m1Count m2Count m3Count
user1 2 1 1
user2 1 1 0
user3 0 0 1
user4 0 1 1
```
I tried to use `value_counts` but couldn't get what I want.
Any help will be appreciated.
```
df = pd.DataFrame(
{
"User": ["user1", "user1", "user1", "user1","user2","user2","user3","user4"],
"Docs": ["doc1", "doc2", "doc3", "doc4", "doc1", "doc2","doc1","doc1"],
"Pref": ["m1", "m2", "m1", "m3", "m1", "m2", "m3", "m2"],
})
```
|
Pandas Dataframe grouping and summarizing
|
CC BY-SA 4.0
| null |
2022-08-14T17:16:53.737
|
2022-08-21T15:57:09.320
|
2022-08-14T18:35:04.727
|
138468
|
138468
|
[
"python",
"pandas",
"dataframe"
] |
You can use `groupby` with `value_counts` and `unstack`:
```
df.groupby("User")["Pref"].value_counts().unstack().fillna(0).astype(int)
```
```
Pref m1 m2 m3
User
user1 2 1 1
user2 1 1 0
user3 0 0 1
user4 0 1 0
```
If you want to clean the column and index names:
```
(
df.groupby("User")["Pref"]
.value_counts()
.unstack()
.fillna(0)
.astype(int)
.rename_axis(None)
.rename_axis(None, axis="columns")
.add_suffix("Count")
)
```
```
m1Count m2Count m3Count
user1 2 1 1
user2 1 1 0
user3 0 0 1
user4 0 1 0
```
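A shorter alternative for the same counting is `pd.crosstab`; the same index/column cleanup can then be applied if needed:
```
import pandas as pd

counts = pd.crosstab(df["User"], df["Pref"]).add_suffix("Count")
print(counts)
```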
|
How to sum values grouped by two columns in pandas
|
[pivot_table](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.pivot_table.html) was made for this:
```
df.pivot_table(index='Date',columns='Groups',aggfunc=sum)
```
results in
```
data
Groups one two
Date
2017-1-1 3.0 NaN
2017-1-2 3.0 4.0
2017-1-3 NaN 5.0
```
Personally I find this approach much easier to understand, and certainly more pythonic than a convoluted groupby operation. Then if you want the format specified you can just tidy it up:
```
df.fillna(0,inplace=True)
df.columns = df.columns.droplevel()
df.columns.name = None
df.reset_index(inplace=True)
```
which gives you
```
Date one two
0 2017-1-1 3.0 0.0
1 2017-1-2 3.0 4.0
2 2017-1-3 0.0 5.0
```
|
113549
|
1
|
113559
| null |
1
|
74
|
I am facing an issue trying to improve my model for object detection; it is something I have been struggling with for quite a few days. I have tried to improve the model by fine-tuning and also changed the split to 80-20 (5399 for train, 1499 for val) to include more data in my validation set, but still no luck in improving the mAP (mean average precision) of my model. The model config I have so far:
```
model_4 = Sequential()
model_4.add(Conv2D(16, (3, 3), activation='relu', input_shape=(300, 300, 3)))
# model_4.add(RandomFlip(mode='horizontal_and_vertical', seed=None))
model_4.add(BatchNormalization())
model_4.add(MaxPool2D(pool_size=(2, 2)))
model_4.add(Conv2D(32, (3, 3), activation='relu'))
model_4.add(BatchNormalization())
model_4.add(MaxPool2D(pool_size=(2, 2)))
model_4.add(Conv2D(64, (3, 3), activation='relu'))
model_4.add(BatchNormalization())
model_4.add(MaxPool2D(pool_size=(2, 2)))
model_4.add(Conv2D(64, (3, 3), activation='relu'))
model_4.add(BatchNormalization())
model_4.add(MaxPool2D(pool_size=(2, 2)))
model_4.add(Conv2D(128, (3, 3), activation='relu'))
model_4.add(MaxPool2D(pool_size=(2, 2)))
model_4.add(Flatten())
model_4.add(Dropout(0.35))
model_4.add(Dense(256, activation='relu', kernel_regularizer=regularizers.l2(0.1)))
model_4.add(Dropout(0.35))
model_4.add(Dense(4))
model_4.compile(loss='mse', optimizer=Adam(learning_rate=0.0001), metrics=[tfr.keras.metrics.MeanAveragePrecisionMetric()])
model_4.summary()
```
My approach is to try and improve as much as I can without a pretrained model/weights. I trained with a batch size of 64 and no data augmentation. I was hoping to at least reach 85% or higher, but that may be a bit of a stretch for me at this point.
My graphs for loss and mAP:
I reached an mAP of 79 for both my train and val sets, which are quite close to each other.
[](https://i.stack.imgur.com/9YvUX.png)
[](https://i.stack.imgur.com/ctvzH.png)
Any help would be appreciated as to how I can go forward into making the model perform better. I am sure there has to be something that is causing a bottleneck in the model performance. Thanks a lot.
|
How can I improve my current model to get a higher mAP value? (Stuck at 79~78)
|
CC BY-SA 4.0
| null |
2022-08-16T03:03:44.347
|
2022-08-16T08:53:12.003
|
2022-08-16T03:44:43.497
|
138954
|
138954
|
[
"machine-learning",
"deep-learning",
"regression",
"machine-learning-model",
"object-detection"
] |
There are several things to consider:
- Increase the number of convolution blocks.
- Use residual blocks, as explained here
- Use different activation functions, such as Leaky ReLU, Mish, Swish, etc.
- To connect the feature extractor's feature maps to the dense layer, use a global pooling layer instead of flattening.
- Try hyperparameter tuning with the already available Keras tuners.
- Another approach would be to use an already trained model via transfer learning. Take a look at the EfficientNets (a minimal sketch follows below).
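A minimal transfer-learning sketch along those lines, keeping your 300x300 input and 4-value box output (the head sizes are arbitrary choices):
```
import tensorflow as tf

# Pretrained backbone, frozen at first; the head regresses the 4 box coordinates
backbone = tf.keras.applications.EfficientNetB0(
    include_top=False, weights="imagenet", input_shape=(300, 300, 3)
)
backbone.trainable = False

model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.GlobalAveragePooling2D(),   # instead of Flatten
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.35),
    tf.keras.layers.Dense(4),                   # box coordinates
])
model.compile(loss="mse", optimizer=tf.keras.optimizers.Adam(1e-4))
```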
|
How can I improve my model on a very very small dataset?
|
From what you say, I think you should start with checking three options:
I) Ordinary least squares (OLS): Just run a „normal“ linear regression. This will not yield great predictions, but you could view the model as a causal one, if you can assume a linear relation between $y$ and $x$. When you have five predictors and 35 observations, you have a total of 29 degrees of freedom which is „okay“. When you estimate the model in „levels“, so just values as they are, you can directly interprete the estimated coefficients as marginal effects. E.g. a model $y=\beta_0+\beta_1 x + u$, tells you that when $x$ increases by one unit, $y$ changes by $\beta_1$ units, just like a linear function.
II) You can use Lasso/Ridge/Elastic Net: All of them are linear-like models with a penalty term to „shrink“ $x$ variables if they are „not useful“. This works like automatic feature selection if you like to say so. There is a great package by Hastie et al. for R. You can find it [here](https://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html). It is also available for Python.
III) Maybe (!) boosting could be an option as well: You would (likely) need to do some feature selection/engineering on your own. But Boosting is able to work with a small number of observations, with highly correlated features, and it often works well with highly non-linear problems. There are LightGBM or Catboost as possible Python packages. Find some minimal examples [here](https://github.com/Bixi81/Python-ml).
With II) and III) you will find that you are not really able to „set aside“ a number of observations to check if your models work (because you don’t have much data). You could use cross validation (Ch. 5 in ISL, link below), but you need to see how it works. Instead of going for a predictive model, I tend to say that you might be better off starting with a „causal-like“ OLS model. With OLS you do not really need a „test-set“. OLS is very robust.
Since you seem to be new to statistical modeling, you might benefit from having a look at „[Introduction to Statistical Learning](http://faculty.marshall.usc.edu/gareth-james/ISL/)“ (Chapters 3 and 6 in particular). The [PDF is online](http://faculty.marshall.usc.edu/gareth-james/ISL/ISLR%20Seventh%20Printing.pdf) and there is code for the Labs in Python and R. The advanced book would be „Elements of Statistical Learning“.
Good luck with your project!
|
113570
|
1
|
116879
| null |
1
|
230
|
I'm trying to use CNN for time series regression in python. I have 9 elements in each time step (from sensor readings) and the output (target/reference) is 4 elements.
```
Input Shape = (time steps, 9)
Output Shape = (time steps, 4)
```
Based on papers I should use rolling windows, such as:
[](https://i.stack.imgur.com/OI7Hy.png)
I don't understand how I could implement that. Should I convert the input as follows?
```
Input Shape = (Time Steps, Sliding Windows Length, 9)
```
The Model is:
```
####################################################################################################################
# Define ANN Model
# define two sets of inputs
acc = layers.Input(shape=(3,1,))
gyro = layers.Input(shape=(3,1,))
# the first branch operates on the first input
x = Conv1D(256, 1, activation='relu')(acc)
x = Conv1D(128, 1, activation='relu')(x)
x = Conv1D(64, 1, activation='relu')(x)
x = MaxPooling1D(pool_size=3)(x)
x = Model(inputs=acc, outputs=x)
# the second branch opreates on the second input
y = Conv1D(256, 1, activation='relu')(gyro)
y = Conv1D(128, 1, activation='relu')(y)
y = Conv1D(64, 1, activation='relu')(y)
y = MaxPooling1D(pool_size=3)(y)
y = Model(inputs=gyro, outputs=y)
# combine the output of the three branches
combined = layers.concatenate([x.output, y.output])
# combined outputs
z = Bidirectional(LSTM(128, dropout=0.25, return_sequences=False,activation='tanh'))(combined)
z = Reshape((256,1),input_shape=(128,))
z = Bidirectional(LSTM(128, dropout=0.25, return_sequences=False,activation='tanh'))(combined)
#z = Dense(10, activation="relu")(z)
z = Flatten()(z)
z = Dense(4, activation="linear")(z)
model = Model(inputs=[x.input, y.input], outputs=z)
model.compile(loss='mse', optimizer = tf.keras.optimizers.Adam(learning_rate=0.01),metrics=['accuracy','mse'],run_eagerly=True) #, callbacks=[tensorboard]
model.summary()
Model: "model_2"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) [(None, 3, 1)] 0 []
input_2 (InputLayer) [(None, 3, 1)] 0 []
conv1d (Conv1D) (None, 3, 256) 512 ['input_1[0][0]']
conv1d_3 (Conv1D) (None, 3, 256) 512 ['input_2[0][0]']
conv1d_1 (Conv1D) (None, 3, 128) 32896 ['conv1d[0][0]']
conv1d_4 (Conv1D) (None, 3, 128) 32896 ['conv1d_3[0][0]']
conv1d_2 (Conv1D) (None, 3, 64) 8256 ['conv1d_1[0][0]']
conv1d_5 (Conv1D) (None, 3, 64) 8256 ['conv1d_4[0][0]']
max_pooling1d (MaxPooling1D) (None, 1, 64) 0 ['conv1d_2[0][0]']
max_pooling1d_1 (MaxPooling1D) (None, 1, 64) 0 ['conv1d_5[0][0]']
concatenate (Concatenate) (None, 1, 128) 0 ['max_pooling1d[0][0]',
'max_pooling1d_1[0][0]']
bidirectional_1 (Bidirectional (None, 256) 263168 ['concatenate[0][0]']
)
flatten (Flatten) (None, 256) 0 ['bidirectional_1[0][0]']
dense (Dense) (None, 4) 1028 ['flatten[0][0]']
==================================================================================================
Total params: 347,524
Trainable params: 347,524
Non-trainable params: 0
```
|
Use CNN for time series regression | How to implement sliding window?
|
CC BY-SA 4.0
| null |
2022-08-16T11:06:27.220
|
2022-12-08T16:27:37.703
| null | null |
139239
|
[
"python",
"keras",
"time-series"
] |
I wrote this code to solve this problem. The code requires a window size and a stride value.
```
def load_dataset(gyro_data, acc_data, ori_data, window_size, stride):
    x_gyro = []
    x_acc = []
    x_ori = []

    # Slide a window of length `window_size` over the sequences with the given stride
    for idx in range(0, gyro_data.shape[0] - window_size - 1, stride):
        x_gyro.append(gyro_data[idx + 1: idx + 1 + window_size, :])
        x_acc.append(acc_data[idx + 1: idx + 1 + window_size, :])
        x_ori.append(ori_data[idx + 1: idx + 1 + window_size, :])

    x_gyro = np.reshape(
        x_gyro, (len(x_gyro), x_gyro[0].shape[0], x_gyro[0].shape[1]))
    x_acc = np.reshape(
        x_acc, (len(x_acc), x_acc[0].shape[0], x_acc[0].shape[1]))
    x_ori = np.reshape(x_ori, (len(x_ori), x_ori[0].shape[0]))

    return [x_gyro, x_acc], [x_ori]
```
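A hypothetical usage, assuming the sensor streams are NumPy arrays with one row per time step (shapes and window/stride values are just examples):
```
import numpy as np

gyro = np.random.rand(1000, 3)   # 3 gyroscope channels
acc = np.random.rand(1000, 3)    # 3 accelerometer channels
ori = np.random.rand(1000, 1)    # 1 target value per time step

(x_gyro, x_acc), (y_ori,) = load_dataset(gyro, acc, ori, window_size=32, stride=10)
print(x_gyro.shape, x_acc.shape, y_ori.shape)  # (n_windows, 32, 3), (n_windows, 32, 3), (n_windows, 32)
```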
|
Input shape for simpler time series in LSTM+CNN
|
For LSTM in tensorflow the tensor has three inputs. So, let's assume we have:
[samples, time steps, features]. This means that you have n number of samples, and each sample is divided in m time steps. All of the samples have the same number and the same features.
So in your case, n_steps, n_length = 4, 32 means that n_steps are going to be taken from the data with 32 samples, meaning each 4 samples are fed into the LSTM at one single point in time, or 8 different subsets are going to be fed into the LSTM. The LSTM usually takes subsets (not single row samples!).
According to your reshape (1000, 100, 1), you say that you have 1000 samples (imagine them as subsets of the dataset) and each of these 1000 subsets is divided into 100 time steps. This means that in each of the 1000 subsets you must have at least 100 samples, or a number divisible by 100, otherwise it won't work. The third parameter, "1", refers to the number of features in your dataset/subset. When you work with accelerometer data, I don't believe that you work with only one feature. My guess is that you need at least 3 features (x, y, and z) or maybe more.
Please check their documentation site: [https://www.tensorflow.org/tutorials/sequences/recurrent](https://www.tensorflow.org/tutorials/sequences/recurrent)
|
113573
|
1
|
113976
| null |
0
|
100
|
I have access to an HPC node with 3 GPUs and a maximum of 38 CPUs. I have a transformer model which I run on a single GPU at the moment; I want to utilize all the GPUs and CPUs.
I have seen a couple of tutorials on DataParallel and DistributedDataParallel. They only mention how to use multiple GPUs.
My questions are:
- Do I use DataParallel or DistributedDataParallel?
- How do I adapt my code to run on the GPUs and CPUs simultaneously? Perhaps someone can point me to a tutorial link.
- How do I get the device IDs?
|
Running Model on both GPUs and CPUs
|
CC BY-SA 4.0
| null |
2022-08-16T11:41:48.293
|
2022-09-02T04:16:42.200
|
2022-08-16T11:51:54.540
|
67275
|
67275
|
[
"deep-learning",
"pytorch",
"transformer",
"gpu",
"hpc"
] |
- I did use DistributedDataParallel; according to the PyTorch documentation, DataParallel is usually slower than DistributedDataParallel, so DistributedDataParallel is recommended since it works for both single- and multi-machine training.
- Tutorial Comparison between DataParallel and DistributedDataParallel
- Another Tutorial Multi-GPU Examples
- Solution LITDataScience's answer - How to find the nvidia GPU IDs for pytorch cuda run setup?
|
How to train more models on 2 GPUs with Keras?
|
I would just create two separate scripts, with one set of models that targets one GPU and the other set of models targeting the other GPU. Then run the scripts as separate processes. That would easily get around Python's GIL.
|
113618
|
1
|
113663
| null |
0
|
82
|
I have a multiclass text classification problem and I've tried different solutions and models, but I was not satisfied with the results.
So I decided to use GloVe (Global Vectors for Word Representation), but somehow all the models performed even worse.
So my question is: is it possible that NLP models perform even worse with word embedding models like GloVe or FastText? Or did I just make a bad implementation?
The code is given below:
```
# Imports needed for this snippet; stop_words is assumed to be the NLTK English stopword list
import numpy as np
from tqdm import tqdm
from nltk.corpus import stopwords
from nltk.tokenize import word_tokenize
from sklearn.model_selection import train_test_split

stop_words = set(stopwords.words("english"))

embedding_model = {}
f = open(r'../../langauge_detection/glove.840B.300d.txt', "r", encoding="utf8")
for line in f:
    values = line.split()
    word = ''.join(values[:-300])
    coefs = np.asarray(values[-300:], dtype='float32')
    embedding_model[word] = coefs
f.close()

def sent2vec(s):
    words = str(s).lower()
    words = word_tokenize(words)
    words = [w for w in words if not w in stop_words]
    words = [w for w in words if w.isalpha()]
    M = []
    for w in words:
        try:
            M.append(embedding_model[w])
        except:
            continue
    M = np.array(M)
    v = M.sum(axis=0)
    if type(v) != np.ndarray:
        return np.zeros(300)
    return v / np.sqrt((v ** 2).sum())

X_train, X_test, y_train, y_test = train_test_split(df.website_text, df.industry, test_size=0.2, random_state=42)
x_train_glove = [sent2vec(x) for x in tqdm(X_train)]
x_test_glove = [sent2vec(x) for x in tqdm(X_test)]
x_train_glove = np.array(x_train_glove)
x_test_glove = np.array(x_test_glove)
from sklearn.linear_model import SGDClassifier
sgd = SGDClassifier(random_state=42)
sgd.fit(x_train_glove, y_train)
```
|
Is it normal for a model to perform worse with the use of word embeddings?
|
CC BY-SA 4.0
| null |
2022-08-17T16:45:25.627
|
2022-08-18T15:48:54.253
| null | null |
139110
|
[
"machine-learning",
"nlp",
"word-embeddings",
"fasttext"
] |
There are various cases where a problem works better with a simpler representation of text than word embeddings:
- Data size: if it's too small, the model may overfit because the embeddings give too much precision. Generally embeddings are more subtle so they require more data diversity.
- The selected embeddings are not suitable for the data, e.g. general text embeddings may not work well with scientific texts, social media data, etc. Embeddings are trained on some data, so if this training data is too different from the data for the application then it won't give good results.
Generally one should never assume that a method is always better than another, as per the [No Free Lunch](https://en.wikipedia.org/wiki/No_free_lunch_theorem) theorem.
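As a quick sanity check, it is often worth comparing against a simple bag-of-words baseline; a minimal sketch reusing the column names from the question (assuming `df` is the dataframe shown there):
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X_train, X_test, y_train, y_test = train_test_split(
    df.website_text, df.industry, test_size=0.2, random_state=42
)

baseline = Pipeline([
    ("tfidf", TfidfVectorizer(min_df=2, ngram_range=(1, 2))),
    ("clf", SGDClassifier(random_state=42)),
])
baseline.fit(X_train, y_train)
print(accuracy_score(y_test, baseline.predict(X_test)))
```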
|
Getting low accuracy on keras pretrained word embeddings example
|
The code has been changed to remove headers. See comment on github:
"Newsgroups message contains header like 'Newsgroups: alt.atheism', which inflates the accuracy to 0.95 (2 epochs).
After removing the header, the val accuracy is 0.47 (2 epochs) and 0.71 (10 epochs)."
[https://github.com/fchollet/keras/pull/5585](https://github.com/fchollet/keras/pull/5585)
This confused me for days!
|
113633
|
1
|
113650
| null |
0
|
44
|
I want to implement the back-propagation algorithm in Python with the following code
```
import numpy as np

class MLP(object):
def __init__(self, num_inputs=3, hidden_layers=[3, 3], num_outputs=2):
self.num_inputs = num_inputs
self.hidden_layers = hidden_layers
self.num_outputs = num_outputs
layers = [num_inputs] + hidden_layers + [num_outputs]
weights = []
bias = []
for i in range(len(layers) - 1):
w = np.random.rand(layers[i], layers[i + 1])
b=np.random.randn(layers[i+1]).reshape(1, layers[i+1])
weights.append(w)
bias.append(b)
self.weights = weights
self.bias = bias
activations = []
for i in range(len(layers)):
a = np.zeros(layers[i])
activations.append(a)
self.activations = activations
dW=[]
db=[]
for i in range(len(layers)-1):
derW=np.zeros((layers[i], layers[i+1]))
derb=np.zeros((layers[i+1])).reshape(1, layers[i+1])
dW.append(derW)
db.append(derb)
self.dW=dW
self.db=db
def forward_propagate(self, inputs):
activations = inputs
self.activations[0] = activations
for i, w in enumerate(self.weights):
activations = self._sigmoid(np.matmul(activations, w) + self.bias[i])
self.activations[i+1] = activations.T
return activations
def back_propagate(self,error):
for i in reversed(range(len(self.dW))):
activations=self.activations[i+1]
delta = np.multiply(self._sigmoid(activations),error)
print("This is delta: {} ".format(delta))
current_activations=self.activations[i]
current_activations = current_activations.reshape(current_activations.shape[0],-1)
print("This is the current activations: {} ".format(current_activations))
self.dW[i] = 1/delta.shape[0]*np.dot(current_activations,delta)
def train(self, inputs, targets, epochs, learning_rate):
for i in range(epochs):
sum_errors = 0
for j, input in enumerate(inputs):
target = targets[j]
output = self.forward_propagate(input)
error = target - output
self.back_propagate(error)
def _sigmoid(self, x):
y = 1.0 / (1 + np.exp(-x))
return y
```
So I created the following dummy data in order to verify everything is correct
```
from random import random

items = np.array([[random()/2 for _ in range(2)] for _ in range(1000)])
targets = np.array([[i[0] + i[1]] for i in items])
mlp = MLP(2, [5], 1)
mlp.train(items, targets, 2, 0.1)
```
but when I run the code I get the following error
```
ValueError: shapes (2,1) and (5,1) not aligned: 1 (dim 1) != 5 (dim 0)
```
I understand the error, because when I printed the delta and current activation values I got the following:
```
This delta: [[-0.67139682]]
This is the current activations: [[ 0.11432486]
[-0.38246416]
[-0.85207878]
[ 0.73210993]
[ 0.76603196]]
This is delta: [[-1.45663835]
[-1.2793182 ]
[-0.76875725]
[-0.90048138]
[-0.86253739]]
This is the current activations: [[0.08248608]
[0.12631125]]
```
So what I really want is that the current activation
`[[-0.67139682]]`
multiply with this delta value
```
[[0.08248608]
[0.12631125]]
```
and this current activations
```
[[ 0.11432486]
[-0.38246416]
[-0.85207878]
[ 0.73210993]
[ 0.76603196]]
```
multiply with this delta value
```
[[-1.45663835]
[-1.2793182 ]
[-0.76875725]
[-0.90048138]
[-0.86253739]]
```
but I don't know how to do that. Any help?
|
Back propagation matrix shape error using Python
|
CC BY-SA 4.0
| null |
2022-08-18T03:22:02.683
|
2022-08-21T22:09:39.143
| null | null |
139355
|
[
"machine-learning",
"python",
"neural-network",
"backpropagation",
"matrix"
] |
I believe you should change:
```
self.dW[i] = 1/delta.shape[0]*np.dot(current_activations,delta)
```
to
```
self.dW[i] = 1/delta.shape[0]*np.dot(current_activations,delta.T)
```
in the back propagation function. This will help you to avoid the error.
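To see why the transpose fixes the mismatch, here is a minimal shape check with dummy arrays of the sizes printed in the question (2 inputs, 5 hidden units):
```
import numpy as np

current_activations = np.zeros((2, 1))  # activations of the input layer, shape (2, 1)
delta = np.zeros((5, 1))                # delta of the hidden layer, shape (5, 1)

# np.dot(current_activations, delta) would be (2, 1) x (5, 1): a dimension mismatch
dW = np.dot(current_activations, delta.T)  # (2, 1) x (1, 5) -> (2, 5)
print(dW.shape)  # (2, 5), matching the weight matrix between the two layers
```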
|
Backpropgating error to emedding matrix
|
An embedding layer is in fact a linear layer. It maps the input, using a matrix multiplication, to the output, without any activation function after the multiplication. Therefore, the backpropagation is exactly as you would do with linear layer.
Why don't we just call it linear layer, then?
At theory level, an embedding layer performs a matrix multiplication on the input. However, in practice, the coded implementation is slightly different. This is due to the fact that the input, as a category, is encoded in a one-hot way, and the matrix multiplication by a one-hot encoded vector is as easy as a look-up, so there is no need to multiply the whole matrix.
|
113646
|
1
|
113652
| null |
1
|
34
|
I'm using decision tree classification for a classification problem. I have preprocessed the data, train/test split it, and run a model with cross validation before testing it. The steps I followed for preprocessing are outlined below:
- Removed some occurences (rows) which aren't usable
- Transformed some of the columns by taking nth-root to remove skew (n is different for each column, I plotted the data and did whatever looked like it reduced the skew most)
- Train/test split the data
- I fit OneHotEncoder() and StandardScaler() to the training data
- I applied the transformations in step 4 to both the training and test data
My questons are as follows:
- Are my steps correct? In particular, is it correct to 'root transform' the data before train/test split, or does that lead to data leakage?
- When I want to apply my model to new data (after testing etc.) does that new data have to undergo identical preprocessing? e.g. fit the encoder/scaler on the train set, apply it to the new data, and use the same nth-root transformations.
Thanks in advance
|
Do the preprocessing steps for new data need to be identical to the steps for train/test data?
|
CC BY-SA 4.0
| null |
2022-08-18T10:38:48.603
|
2022-08-18T12:08:24.247
| null | null |
139149
|
[
"machine-learning",
"preprocessing"
] |
- You are correct, as Evolving_Richie commented.
- When you want to apply your model, you have to follow the same process you did when training. However, you only need to transform in the same way, never fit! So the process would be: remove occurrences/samples (accepting that you won't have predictions for those values; if that is not acceptable, change your approach), transform the columns, and apply the already-fitted OneHotEncoder() and StandardScaler() with transform only. Of course we are not splitting the data into training and test here because we are in deployment, and all our data is test data. A minimal sketch of this transform-only step is shown below.
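A minimal sketch of that transform-only step with scikit-learn (the column names and the nth-root value are hypothetical): the transformers are fitted once on the training data, and only their `transform` method is applied to new data at deployment time.
```
import numpy as np
import pandas as pd
from sklearn.preprocessing import OneHotEncoder, StandardScaler

train = pd.DataFrame({"colour": ["red", "blue", "red"], "size": [1.0, 2.5, 3.0]})
new_data = pd.DataFrame({"colour": ["blue"], "size": [2.0]})

# nth-root transform chosen on the training data; reuse the same n for new data
n = 2
train["size"] = train["size"] ** (1 / n)
new_data["size"] = new_data["size"] ** (1 / n)

enc = OneHotEncoder(handle_unknown="ignore").fit(train[["colour"]])
scaler = StandardScaler().fit(train[["size"]])

# At deployment time: transform only, never fit
X_new = np.hstack([enc.transform(new_data[["colour"]]).toarray(),
                   scaler.transform(new_data[["size"]])])
print(X_new)
```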
|
Do we need the Preprocessing step on both Test and Train data sets?
|
There are a few things that you need to be careful with here.
You can do certain things when preprocessing data or performing data augmentation that can be applied across an entire dataset (train and validation). The main idea is not to allow the model to gain insight from the test data.
---
### Time-series example
Missing data can be managed in many ways, such as simple imputation (filling the gaps). This is very common in time-series data. In your training data, you can fill the gaps using the previous value, the following value, the average of the data or something like the moving average. Where you must be careful is with violating the information flow through time. For example, in your test data, you should not fill gaps using a method that looks at data points in front of the empty time slot. This is because, at that point in time, you will not be able to do the same as you shouldn't know the future values.
### Image data example
Looking instead at image data, there are data-preprocessing steps such as normalisation. This means just scaling the image pixel values to a range like $[-1, 1]$. To do this, you must compute the population mean and variance, which you then use to perform the scaling. When computing these two statistics, it is important not to include the test data. The reason is that you would be leaking information into the dataset that is then used to train a model. Your model technically knows things that it shouldn't; in this case, clues regarding the mean and variance of the target distribution.
---
People might also consider "missing data" to include imbalanced datasets; i.e. there are cases that you know of, but just don't appear in your dataset very often. There are some tricks to help with this, such as stratified sampling or cross-validation. The optimal solution would, of course, be to gather a dataset that more closely represents the problem at hand.
|
113681
|
1
|
113696
| null |
2
|
48
|
We use data $(\boldsymbol{x_1}, \boldsymbol{x_2},\ldots, \boldsymbol{x_n},\boldsymbol{y})$ to improve the predictability of a physical model $f(x_1,x_2,\ldots,x_n)$ that was implemented by domain experts.
Let $\hat{\boldsymbol{y}}=f(\boldsymbol{x_1}, \boldsymbol{x_2},\ldots, \boldsymbol{x_n})$.
Originally, we decided to fit the errors $\boldsymbol{e}=\boldsymbol{y}-\hat{\boldsymbol{y}}$ with a statistical model, $g$, so the improved model shall be $g(x_1,x_2,\ldots,x_n) + f(x_1,x_2,\ldots,x_n)$.
Later I found that adding $\hat{y}$ as a feature can train a better $g$ for fitting the error $e$, so the final model was changed to:
$$g(x_1,x_2,\ldots,x_n,\hat{y}) + f(x_1,x_2,\ldots,x_n)$$
or simply
$$h(x_1,x_2,\ldots,x_n)=g\left(x_1,x_2,\ldots,x_n,f(x_1,x_2,\ldots,x_n)\right) + f(x_1,x_2,\ldots,x_n)$$
Does this approach have any issue? It seems good to me because $\hat{y}$ is just an engineered feature for training $g$. Posts like [https://stats.stackexchange.com/questions/404809/is-it-advisable-to-use-output-from-a-ml-model-as-a-feature-in-another-ml-model](https://stats.stackexchange.com/questions/404809/is-it-advisable-to-use-output-from-a-ml-model-as-a-feature-in-another-ml-model) also support my view.
|
Use model output as feature to predict model error in boosting
|
CC BY-SA 4.0
| null |
2022-08-19T14:45:34.187
|
2022-08-20T06:14:24.587
| null | null |
67794
|
[
"machine-learning"
] |
What do you mean by $f$ is a "physical model"? If you mean something like, "Given some $x$, domain experts then use their experience/discretion to estimate $f(x)$", which you then feed into some statistical model $g$, then I see no issue at all here.
(E.g. $x$ is some weather data, we then ask some weather experts their thoughts on the chance of rain tomorrow $f(x)$, and then use that as features for some machine learning model.)
In fact, that is simply feature engineering. If $g$ is a flexible ML model like neural networks, forests, etc, then worst case these features don't contribute anything but should not really degrade performance. If $g$ is a rigid statistical model like OLS or something, then you might run into some various model-specific issues like multi-collinearity etc. Hard to say without knowing what $g$ is.
Now if $f$ is also a statistical model, then you might run into some issues with overfitting. For example, training a random forest to get $f(x)$ then using both $x$ and $f(x)$ as features in a neural network. But you can work around this with some proper cross-validation and data splitting.
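A minimal sketch of the construction being discussed, where a hypothetical simple function stands in for the physical model $f$ and a random forest stands in for $g$; the point is only to show the wiring $h(x) = g(x, f(x)) + f(x)$ evaluated on a held-out split:
```
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(1000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.1, size=1000)

def f(X):  # stand-in for the experts' physical model (deliberately imperfect)
    return np.sin(X[:, 0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Train g on the residuals, with f(x) added as an extra feature
resid_tr = y_tr - f(X_tr)
g = RandomForestRegressor(random_state=0)
g.fit(np.column_stack([X_tr, f(X_tr)]), resid_tr)

h_te = g.predict(np.column_stack([X_te, f(X_te)])) + f(X_te)
print("physical model alone:", mean_squared_error(y_te, f(X_te)))
print("h = g(x, f(x)) + f(x):", mean_squared_error(y_te, h_te))
```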
|
Predictive output with your own model built
|
What you are proposing is a [heuristic](https://en.wikipedia.org/wiki/Heuristic_(computer_science)) method, because you define the rules manually in advance. From a Machine Learning (ML) point of view the "training" is the part where you observe some data and decide which rules to apply, and the "testing" is when you run a program which applies these rules to obtain a predicted label. As you correctly understood, the testing part should be applied to a test set made of unseen instances. The instances in the test set should also be manually labelled (preferably before performing the testing in order to avoid any bias), so that you can evaluate your method (i.e. calculate the performance).
Technically you're not using any ML approach here, since there is no part where you automatically train a model. However heuristics can be useful, in particular they are sometimes used as a baseline to compare ML models against.
---
[addition following comment]
>
I think most common pre-processing approaches require converting text into lower case, but a word, taken in a different context, can have a different weight.
This is true for a lot of tasks in NLP (Natural Language Processing) but not all of them. For example for tasks related to capturing an author's writing style (stylometry) one wouldn't usually preprocess text this way. The choice of the representation of the text as features depends on the task so the choice is part of the design, there's no universal method.
>
how to train a model which can 'learn' to consider important upper case words and punctuation?
In traditional ML (i.e. statistical ML, as opposed to Deep Learning), this question is related to feature engineering, i.e. finding the best way to represent an instance (with features) in relation with the task: if you think it makes sense for your task to have specific features to represent these things, you just add them: for instance you can add a boolean feature which is true if the instance contains at least one uppercase word, a numeric feature which represents the number of punctuation signs in the instance, etc.
Recent ML packages propose standard ways to represent text instances as features and it's often very convenient, but it's important to keep in mind that it's not the only way. Additionally nowadays Deep Learning methods offer ways to bypass feature engineering so there's a bit of a tendency to forget about it, but imho it's an important part of the design, if only to understand how the model works.
|
113771
|
1
|
113778
| null |
0
|
31
|
I would like to cluster multidimensional time series using k-means and Ward's method. My base dataset has 4 columns (features) and each of them is a time series of 288 values. So one "datapoint" has $4*288=1152$ entries (dimensions). I have 100 datapoints that I want to cluster.
Depending on the setup, it might be possible that 1 or 2 of the 4 columns have 0 values for 288 time series values and for all of the 100 datapoints that I want to cluster. Now my question is, if and how these 0-columns affect the results of the clustering with k-means and Ward's method? So let's say that actually one datapoint has only 2 features with 288 values. Does it make a difference if I use $2*288=576$ dimensions for one record compared to using $4*288=1152$ dimensions for one record when out of the 4 dimensions in the big array 2 have 0-values for all entries?
|
Do 0-columns affect the results of time series clustering when using k-means and Ward's method?
|
CC BY-SA 4.0
| null |
2022-08-23T10:25:31.383
|
2022-08-23T13:19:37.150
|
2022-08-23T11:50:32.370
|
105469
|
105469
|
[
"clustering",
"k-means"
] |
If you have got, say, $100 \times 576$ (i.e., $100$ rows/data points and $576$ columns which represent the linearization of $2$ time series) values that are $0$ and you use a variance-based optimization approach, then including such values will affect the resulting variance, since variance is based on the mean of your observations.
However, assuming that you use a non-randomized clustering procedure, the data points will fall inside the same clusters either including or excluding those 0 values; they will simply have different variances, but those variances are penalized by the same quantity in all the data points (i.e., the mean will be nearer to $0$ including those $0$ values).
If you want to use a randomized procedure, I would suggest you use the same random seed for both experiments to check the result by inspecting some data points in both experiments and see in which cluster they fall.
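For k-means specifically there is a quick way to convince yourself: appending all-zero columns leaves every pairwise Euclidean distance unchanged, so with a fixed seed the cluster assignments come out identical. A minimal sketch:
```
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 576))                  # 100 points, 2 non-zero series of length 288
X_padded = np.hstack([X, np.zeros((100, 576))])  # same points with 2 all-zero series appended

labels_a = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(X)
labels_b = KMeans(n_clusters=4, random_state=0, n_init=10).fit_predict(X_padded)
print(np.array_equal(labels_a, labels_b))        # True: zero columns do not change the partition
```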
|
K-Means clustering - What to do if a cluster has 0 elements?
|
This is mostly an issue with really bad initialization (random vector generation as well as random labeling are stupid, don't use it - choose k points with sampling, or k-means++) and with data where k-means doesn't work well at all. So if this happens, you know the results won't be good!
Either way, the standard and straightforward solution is simple: use the previous mean if a cluster becomes empty. It could be assigned points later again. And if it doesn't, well, then the cluster is empty. No surprises here, no infinite loops, convergence issues, etc.
|
113812
|
1
|
113813
| null |
0
|
67
|
I'm doing a Data Science project, and I'm on the stage of cleaning categorical features. I've been researching, and it seems that imputing the mean or median can change the distribution. Therefore, a better way would be to use logistic regression or any other model to predict null values in categorical features.
In [this post](https://www.analyticsvidhya.com/blog/2021/04/how-to-handle-missing-values-of-categorical-variables/), the author explains how to use logistic regression to impute null values in a binomial categorical feature. However, the categorical features that I'm using have multiple possible values.
Do you know of any approach to solve this and get an accurate imputation of null values on multi-categorical features?
Thanks!
|
Is it possible to implement logistic regression (or any other ML method) to impute null values in a categorical feature with multiple values?
|
CC BY-SA 4.0
| null |
2022-08-24T12:21:54.007
|
2022-08-24T12:39:24.717
| null | null |
135386
|
[
"machine-learning",
"logistic-regression",
"categorical-data",
"data-imputation"
] |
I am not saying this is a good idea.
You could use multinomial models (logistic, trees). The test you posed "get an accurate imputation" is hard. Given the missing values are unknown, you can get a probabilistic answer. How accurate is a function of the data. And now you have 2 models that you need to prepare and monitor.
A bigger question - can the features be null during scoring or is this a training issue only? If the model is in production and receives missing values, you need to run the imputation model scoring to determine what value to place in the feature before scoring with the model.
Hopefully a null indicator variable is always getting set in your data. And you have already researched the missing values to see if there is a pattern, if there is meaning to the missing, why they are missing, subject matter expert rules that can replace, etc. [Are these missing at random or missing not at random or ...?](https://en.wikipedia.org/wiki/Missing_data#Types)
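If you do go down the multinomial-model route mentioned above, a minimal sketch of the idea (the column names and toy data are hypothetical): a tree-based classifier predicts the missing category from the other features.
```
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "age": [25, 32, 47, 51, 62, 23, 44, 36],
    "income": [30, 42, 80, 75, 90, 28, 60, 50],
    "segment": ["A", "B", "C", "C", np.nan, "A", np.nan, "B"],  # multi-class target with nulls
})

known = df[df["segment"].notna()]
missing = df[df["segment"].isna()]

clf = RandomForestClassifier(random_state=0)
clf.fit(known[["age", "income"]], known["segment"])

# Fill the nulls with the classifier's predictions
df.loc[df["segment"].isna(), "segment"] = clf.predict(missing[["age", "income"]])
print(df)
```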
|
How to Keep Missing Values in Ordinal Logistic Regression
|
This problem refers to different missing data mechanisms.
When it comes to missing data, there are three different types of missing data mechanism:
- Missing completely at random
- Missing at random
- Missing not at random
For the cases you mentioned in your problem are:
>
(1) missing values where the viewer skipped a question because it wasn’t applicable due to skip logic from a prior question
This kind of missing values are missing due to the `Missing not at random` mechanism. For this kind of missing values, removing it can produce a bias in the model. Therefore, you should not delete it. You can try setting a value indicating the missing.
>
(2) missing values because the viewers missed it
This kind of missing values are missing due to the `Missing completely at random` mechanism. You can just delete this kind of missing values without influencing your model.
|
113817
|
1
|
113836
| null |
1
|
30
|
I am trying to make a first analysis of customer interest from the feedback in their emails. As a first pass I used a simple word count to find the key words.
I am facing the following problem: some customers give very short feedback while a single customer may give very long feedback, so a mechanism that simply counts words gives more weight to the customer who writes more, which may not be the most important one.
i.e
`customer_1`: I would like to know the normative about Covid, cause I m covid vaccinated... covid ..covid (2000 words) # word covid appear 13 times
`customer_2`: I m worry about price (100 words)
`customer_3`: Something about prices too (150 words)
If we just follow the word-count approach, the results are unbalanced towards the person who writes the most. How can this be avoided?
In ML, attributes are normalised so that some do not carry more weight than others; how would this be done in NLP?
|
Normalize summary of customer feedback text / word-cloud /word-count
|
CC BY-SA 4.0
| null |
2022-08-24T14:25:55.977
|
2022-08-25T09:54:33.670
| null | null |
64726
|
[
"nlp"
] |
You can apply text classification with BERT.
It gives a classification regardless of the message length.
Therefore, you can use multi-class text classification, for instance:
[https://huggingface.co/palakagl/Roberta_Multiclass_TextClassification?text=I+love+AutoTrain+%F0%9F%A4%97](https://huggingface.co/palakagl/Roberta_Multiclass_TextClassification?text=I+love+AutoTrain+%F0%9F%A4%97)
To implement it, here are several tutorials:
[https://www.kaggle.com/code/thebrownviking20/bert-multiclass-classification](https://www.kaggle.com/code/thebrownviking20/bert-multiclass-classification)
[https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613](https://towardsdatascience.com/multi-class-text-classification-with-deep-learning-using-bert-b59ca2f5c613)
[https://towardsdatascience.com/text-classification-with-bert-in-pytorch-887965e5820f#:~:text=There%20are%20two%20different%20BERT,hidden%20size%2C%20and%20340%20parameters](https://towardsdatascience.com/text-classification-with-bert-in-pytorch-887965e5820f#:%7E:text=There%20are%20two%20different%20BERT,hidden%20size%2C%20and%20340%20parameters).
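A minimal sketch of using such a model through the Hugging Face `transformers` pipeline API, assuming the library is installed and the model linked above is available on the Hub (it is downloaded on first use); very long emails may need truncation to the model's maximum input length.
```
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="palakagl/Roberta_Multiclass_TextClassification")

feedback = "I would like to know the normative about Covid, cause I am covid vaccinated."
print(classifier(feedback))  # e.g. [{'label': ..., 'score': ...}]
```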
|
Text summarization with limited number of words
|
You sure can,
for example in [latent semantic analysis](https://en.wikipedia.org/wiki/Latent_semantic_analysis) you can fix the number of topics (which is actually the size of the decomposition matrix) beforehand.
|
113826
|
1
|
113831
| null |
0
|
1008
|
I am concerned with a single column (`fruit`) from my `df`:
```
| fruit |
| --------------------|
| apple, orange |
| banana |
| grapefruit, orange |
| apple, banana, kiwi |
```
I want to plot the values from `fruit` to a pie chart to get a visual representation of the distribution of each individual fruit
I run: `df.plot(kind='pie', y='fruit')`
But this gives a `TypeError`: `'<' not supported between instances of 'str' and 'int'`
I have read: [https://stackoverflow.com/questions/20449427/how-can-i-read-inputs-as-numbers](https://stackoverflow.com/questions/20449427/how-can-i-read-inputs-as-numbers)
But I can't see how it helps solve my problem
Any help much appreciated!
|
How to plot categorical variables with a pie chart
|
CC BY-SA 4.0
| null |
2022-08-24T20:18:24.810
|
2023-02-20T15:38:04.900
|
2022-08-24T22:04:48.377
|
139067
|
139067
|
[
"pandas",
"visualization",
"dataframe",
"matplotlib",
"distribution"
] |
You may first want to count the number of occurrences of each string inside the column; from there you only have to plot it with whatever kind of plot you want.
```
import pandas as pd

df = pd.DataFrame({"fruit":["apple, orange", "banana", "grapefruit, orange", "apple, banana, kiwi"]})
df.fruit.str.get_dummies(sep = ",").sum().plot.pie();
```
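Note that with values like `"apple, orange"` splitting on `","` leaves a leading space in some labels (`" orange"` vs `"orange"`). A small variant of the same idea that also strips that space, assuming the same example frame:
```
import pandas as pd

df = pd.DataFrame({"fruit": ["apple, orange", "banana", "grapefruit, orange", "apple, banana, kiwi"]})
df.fruit.str.split(", ").explode().value_counts().plot.pie();
```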
|
Plot Two Categorical Variables
|
Well, there are a few ways to do the job. Here are some I thought of:
- Scatterplots with noise:
Normally, if you try to use a scatter plot to plot two categorical features, you would just get a few points, each one containing a lot of instances from the data. So, to get a sense of how many there really are in each point, we can add some random noise to each instance:
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# This is to encode the data into numbers that can be used in our scatterplot
from sklearn.preprocessing import OrdinalEncoder
ord_enc = OrdinalEncoder()
enc_df = pd.DataFrame(ord_enc.fit_transform(df), columns=list(df.columns))
categories = pd.DataFrame(np.array(ord_enc.categories_).transpose(), columns=list(df.columns))
# Generate the random noise
xnoise, ynoise = np.random.random(len(df))/2, np.random.random(len(df))/2 # The noise is in the range 0 to 0.5
# Plot the scatterplot
plt.scatter(enc_df["Playing_Role"]+xnoise, enc_df["Bought_By"]+ynoise, alpha=0.5)
# You can also set xticks and yticks to be your category names:
plt.xticks([0.25, 1.25, 2.25], categories["Playing_Role"]) # The reason the xticks start at 0.25
# and go up in increments of 1 is because the center of the noise will be around 0.25 and ordinal
# encoded labels go up in increments of 1.
plt.yticks([0.25, 1.25, 2.25], categories["Bought_By"]) # This has the same reason explained for xticks
# Extra unnecessary styling...
plt.grid()
sns.despine(left=True, bottom=True)
```
[](https://i.stack.imgur.com/vNk59.png)
- Scatterplots with noise and hues:
Instead of having both axes be features, we can have the $x$ axis be one feature and the $y$ axis be random noise. Then, to incorporate the other feature, we can "colour in" instances based on it:
```
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
# Explained in approach 1
from sklearn.preprocessing import OrdinalEncoder
ord_enc = OrdinalEncoder()
enc_df = pd.DataFrame(ord_enc.fit_transform(df), columns=list(df.columns))
categories = pd.DataFrame(np.array(ord_enc.categories_).transpose(), columns=list(df.columns))
xnoise, ynoise = np.random.random(len(df))/2, np.random.random(len(df))/2
sns.relplot(x=enc_df["Playing_Role"]+xnoise, y=ynoise, hue=df["Bought_By"]) # Notice how for hue
# we use the original dataframe with labels instead of numbers.
# We can also set the x axis to be our categories
plt.xticks([0.25, 1.25, 2.25], categories["Playing_Role"]) # Explained in approach 1
# Extra unnecessary styling...
plt.yticks([])
sns.despine(left=True)
```
[](https://i.stack.imgur.com/gBGcJ.png)
- Catplots with hues:
Finally, we can use catplots, and colour in fractions of it based on the other feature:
```
import seaborn as sns
import matplotlib.pyplot as plt
%matplotlib inline
sns.histplot(binwidth=0.5, x="Playing_Role", hue="Bought_By", data=df, stat="count", multiple="stack")
```
[](https://i.stack.imgur.com/PbajF.png)
|
113827
|
1
|
113883
| null |
0
|
26
|
I am a student who has some limited experience with keras, and for a new project recently decided to learn how to use pytorch to implement my models. I'm a beginner with both, so apologies in advance for my inexperience; I am doing my best to follow tutorials, but my limited experience combined with most examples covering different use cases has made comprehension slower. I'm trying to use NiN blocks as described here ([https://d2l.ai/chapter_convolutional-modern/nin.html](https://d2l.ai/chapter_convolutional-modern/nin.html)) to inform my model's architecture.
I have built a custom dataset class for my data. The X data is genetic sequences 256 bases long (i.e. "AGCTGGAGCT..."), so the resulting array after one-hotting to four channels for each of the four bases looks like `[[[1,0,0,0],[0,1,0,0]...], [0,0,1,0], ...]]` and has shape (48976, 256, 4). I read that Conv1d expects channels first, so I permuted the channels in the dataset's tensor to read in that way, resulting in `torch.Size([48976, 4, 256])`. The Y data is 2 values for a given sequence of X, ESC and TSC, each a numeric value derived from other source data. The dataset code is as follows:
```
device = "cuda" if torch.cuda.is_available() else "cpu"
def onehotseq(dataset, input_shape):
onehot = np.zeros(input_shape)
for i in range(0, dataset.shape[0]):
seq = dataset.iloc[i,1]
for c in range(0,len(seq)):
if (seq[c] == "A"):
onehot[i,c,:] = [1,0,0,0]
elif (seq[c] == "C"):
onehot[i,c,:] = [0,1,0,0]
elif (seq[c] == "G"):
onehot[i,c,:] = [0,0,1,0]
elif (seq[c] == "T"):
onehot[i,c,:] = [0,0,0,1]
return onehot
class EpiDataset(torch.utils.data.Dataset):
def __init__(self, Seq_filepath="path_to_sequence_data", Y_data_filepath="path_to_output_data"):
self.seq_data = pd.read_csv(Seq_filepath, sep="\t", header=None)
self.seq_data.rename(columns={0:"id", 1:"seq"}, inplace=True)
self.y_data = pd.read_csv(Y_data_filepath, sep="\t", header = 0)
self.y_data["ESC"] = np.log2((self.y_data["ESC.H3K27ac"].values+1)/(self.y_data["ESC.input"].values+1))
self.y_data["TSC"] = np.log2((self.y_data["TSC.H3K27ac"].values+1)/(self.y_data["TSC.input"].values+1))
self.dataset = self.seq_data.merge(self.y_data, on="id")
self.list_IDs = self.dataset["id"]
self.seq = self.dataset["seq"]
self.esc = self.dataset["ESC"]
self.tsc = self.dataset["TSC"]
self.input_shape = (self.dataset.shape[0], 256, 4)
self.onehotseq = onehotseq(self.dataset, self.input_shape)
self.tensorX = torch.from_numpy(self.onehotseq)
self.tensorX = self.tensorX.permute(0, 2, 1)
self.labels = self.dataset[["ESC","TSC"]].to_numpy()
self.tensorY = torch.from_numpy(self.labels)
def __len__(self):
return len(self.list_IDs)
def __getitem__(self, index):
ID = self.list_IDs[index]
seq = self.seq[index]
esc = self.esc[index]
tsc = self.tsc[index]
return {
"ID: ": ID,
"sequence: ": seq,
"ESC: ": esc,
"TSC: ": tsc
}
```
This all seems to work as intended, and I was able to design a Module class, which also seems to be functionally correct, but I get a type error whenever I try to use the model. The code and error are:
```
import torch.nn as nn

def nin_block(out_channels, kernel_size, padding="same"):
return nn.Sequential(
nn.LazyConv1d(out_channels, kernel_size, padding),
nn.ReLU(),
nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU(),
nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU()
)
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.NiN = nn.Sequential(
nin_block(32, kernel_size=11,padding="same"),
nn.MaxPool1d(3, stride=2),
nin_block(64, kernel_size=4, padding="same"),
nn.MaxPool1d(3, stride=2),
nin_block(128, kernel_size=4, padding="same"),
nn.MaxPool1d(3, stride=2),
nin_block(256, kernel_size=3, padding="same"),
nn.MaxPool1d(3, stride=2),
nn.Dropout(0.4),
nin_block(4, kernel_size=3, padding="same"),
nn.AdaptiveAvgPool1d(2),
nn.Flatten(),
)
def forward(self, x):
x = self.flatten(x)
logits = self.NiN(x)
return logits
```
Error message, resulting from running `model = NeuralNetwork().to(device)` and then
`logit = model(x.tensorX)`
```
TypeError: conv1d() received an invalid combination of arguments - got (Tensor, Parameter, Parameter, tuple, tuple, tuple, int), but expected one of:
* (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, tuple of ints padding, tuple of ints dilation, int groups)
didn't match because some of the arguments have invalid types: (Tensor, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)
* (Tensor input, Tensor weight, Tensor bias, tuple of ints stride, str padding, tuple of ints dilation, int groups)
didn't match because some of the arguments have invalid types: (Tensor, !Parameter!, !Parameter!, !tuple!, !tuple!, !tuple!, int)
```
My question is, what am I doing wrong either in building my module or dataset, or am I missing a step? The data loaded in is the prepared data for initial exploration/training of different model architectures.
|
Difficulty loading data/running model on custom dataset derrived from DNA sequence data - TypeError when attempting to run model
|
CC BY-SA 4.0
| null |
2022-08-24T20:33:00.647
|
2022-08-26T20:47:24.390
| null | null |
139561
|
[
"neural-network",
"dataset",
"cnn",
"pytorch",
"bioinformatics"
] |
I was able to get help off-site: the issue was that "same" was being passed to the convolution positionally inside nin_block, so it was interpreted as the stride argument rather than the padding. Passing it as a keyword argument (padding=padding) fixes it. The correct code would be:
```
def nin_block(out_channels, kernel_size, padding):
return nn.Sequential(
nn.LazyConv1d(out_channels, kernel_size, padding=padding),
nn.ReLU(),
nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU(),
nn.LazyConv1d(out_channels, kernel_size=1), nn.ReLU()
)
class NeuralNetwork(nn.Module):
def __init__(self):
super(NeuralNetwork, self).__init__()
self.flatten = nn.Flatten()
self.NiN = nn.Sequential(
nin_block(32, kernel_size=11,padding="same"),
nn.MaxPool1d(4, stride=2),
nin_block(64, kernel_size=4, padding="same"),
nn.MaxPool1d(4, stride=2),
nin_block(128, kernel_size=4, padding="same"),
nn.MaxPool1d(4, stride=2),
nin_block(256, kernel_size=4, padding="same"),
nn.MaxPool1d(4, stride=2),
nn.Dropout(0.4),
nin_block(4, kernel_size=4, padding="same"),
nn.AdaptiveAvgPool1d(2),
nn.Flatten(),
)
def forward(self, x):
x = self.flatten(x)
logits = self.NiN(x)
return logits
```
I'm still troubleshooting some other errors/issues, but marking this as closed given the specified error has been resolved.
|
Problem importing dataset
|
You can follow the steps below to load a JSON file.
First check whether the file is valid JSON or not using the following: `https://jsonlint.com/`. Once you have confirmed the file is valid JSON, use the code below to read it.
```
with open("training_dataset.json") as datafile:
data = json.load(datafile)
dataframe = pd.DataFrame(data)
```
I hope the above will help you.
|
113839
|
1
|
113844
| null |
1
|
63
|
I'm looking at the results of an ML model I made and I've calculated the PPV, TPR, NPV and TNR. As is expected, there is a tradeoff between the PPV and TPR (from which the F1 score can be calculated) but I was wondering if a similar relationship exists between NPV and TNR, as I have observed that in my results - if so, is there a similar metric to the F1 score for these measurements?
Edit: is it even necessary to look at the NPV and TNR? [Wikipedia](https://en.wikipedia.org/wiki/Precision_and_recall) (I know, not a great source) says that a perfect precision eliminates false positives and a perfect recall eliminates false negatives, so what does knowing the NPV and TNR bring to the table? Because surely a perfect NPV eliminates false negatives and a perfect TNR eliminates false positives, so they don't really add any insight into the model.
Thanks!
|
PPV/TPR equivalent for negative results
|
CC BY-SA 4.0
| null |
2022-08-25T10:52:56.757
|
2022-08-25T14:27:13.357
|
2022-08-25T11:20:31.653
|
139637
|
139637
|
[
"machine-learning"
] |
The logic of binary classification measures is as follows:
Usually there is a natural 'positive' class for the application, and if not one is defined by convention. Evaluation measures are supposed to represent how well a model recognizes this positive class by contrast to the negative one. Naturally if the model can identify the positive class well then it means that it distinguishes the two classes well, therefore it also identifies the negative class well. This is why there is no need for a negative-focused equivalent of F1-score (especially since the positive class is chosen based on the application), but it could perfectly be defined indeed.
There's indeed no particular need for negative-focused measures like NPV and TNR, these values are sometimes useful in specific applications but they do not provide any additional information about the ability of the model. The 2 dimensions which are needed are precision (PPV) and recall (TPR).
For the record the [Wikipedia page on precision/recall](https://en.wikipedia.org/wiki/Precision_and_recall) is a good reference :)
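For completeness, all four quantities can be read off a single confusion matrix. A minimal sketch with scikit-learn (the label vectors are hypothetical):
```
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 1, 0, 0, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 0, 0])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
ppv = tp / (tp + fp)   # precision
tpr = tp / (tp + fn)   # recall / sensitivity
npv = tn / (tn + fn)
tnr = tn / (tn + fp)   # specificity
print(ppv, tpr, npv, tnr)
```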
|
Handling unwanted negative numbers
|
How to handle invalid values like this is an extremely common problem in machine learning, since most datasets contain errors of some kind.
There are a few ways to do it. For example, you could set them all to 0:
```
df.loc[df.SoilHumidity < 0, 'SoilHumidity'] = 0
```
Or you could fill them with the avg(SoilHumidity), and create an extra feature to flag to the model that they were missing:
```
import numpy as np
df['SoilHumidityInvalid'] = np.where(df.SoilHumidity < 0, 1, 0)
df.loc[df.SoilHumidity < 0, 'SoilHumidity'] = df.SoilHumidity.mean()
```
Or, you can try to impute them somehow. Either by back or forward filling (I.E. taking the value from the next or the previous row in your dataset) or by creating a model that uses the other features of your dataset to predict what these invalid values should be.
The right method can depend; sometimes domain knowledge guides you (i.e. if you know the sensor can mistakenly read negatives when it should read 0, then you know to fill with 0). Failing that, I would just try a couple of methods and use cross-validation to see which improves your model the most.
|
113860
|
1
|
113863
| null |
1
|
122
|
Average precision, balanced accuracy, F1-score, Matthews Correlation Coefficient and geometric means are a few of the evaluation metrics for imbalanced data. However, all these metrics can lead to a different 'best' model. How do we then decide which model is indeed the 'best'?
|
Average precision, balanced accuracy, F1-score, Matthews Correlation Coefficient, geometric means
|
CC BY-SA 4.0
| null |
2022-08-26T06:55:23.187
|
2022-08-26T09:29:10.637
| null | null |
136687
|
[
"machine-learning",
"metric"
] |
It's about designing the task properly. I'm not talking about the design of the model, nor is this about selecting an evaluation measure purely from the characteristics of the data (e.g. there's no simple rule to decide based on whether the data is imbalanced or not).
The design of standard tasks is usually established in the state of the art. Take machine translation (MT) for example, there is a whole area of research devoted to evaluating MT, with various simple and advanced evaluation measures designed specifically for the task.
People often confuse 'standard type of task' and 'standard task', for example assuming that all the classification tasks can be evaluated the same way. Of course there are standard measures which are used very often in classification, but even with "regular" classification one should ensure that the evaluation measure fits the task.
So how does one select the "best" evaluation measure for a task, when the task is not standard? First it's important to realize that a performance score is always a simplification, so there's no perfect evaluation (btw this is why it's sometimes relevant to use several measures). The goal of the evaluation is to represent how well the model does the job, whatever the job is. This often implies human annotations, sometimes by experts, in order to represent what the job should be about. Depending on the task, it is sometimes relevant to compare different evaluation measures: what are their similarities, differences, possible biases or limitations, and which one fits the target task the best.
In short: there is no simple answer for evaluation, it's not about applying technical rules but about analyzing the specific target task.
|
Balanced Accuracy vs. F1 Score
|
One major difference is that the F1-score does not care at all about how many negative examples you classified or how many negative examples are in the dataset at all; instead, the balanced accuracy metric gives half its weight to how many positives you labeled correctly and how many negatives you labeled correctly.
When working on problems with heavily imbalanced datasets AND you care more about detecting positives than detecting negatives (outlier detection / anomaly detection) then you would prefer the F1-score more.
Let's say for example you have a validation set than contains 1000 negative samples and 10 positive samples. If a model predicts there are 15 positive examples (5 truly positive and 10 it incorrectly labeled) and predicts the rest as negative, thus
```
TP=5; FP=10; TN=990; FN=5
```
Then its F1-score and balanced accuracy will be
$Precision = \frac{5}{15}=0.33...$
$Recall = \frac{5}{10}= 0.5$
$F_1 = 2 * \frac{0.5*0.33}{0.5+0.33} \approx 0.4$
$Balanced\ Acc = \frac{1}{2}(\frac{5}{10} + \frac{990}{1000}) = 0.745$
You can see that balanced accuracy still cares about the negative datapoints unlike the F1 score.
For even more analysis we can see what the change is when the model gets exactly one extra positive example correctly and one negative sample incorrectly:
```
TP=6; FP=9; TN=989; FN=4
```
$Precision = \frac{6}{15}=0.4$
$Recall = \frac{6}{10}= 0.6$
$F_1 = 2 * \frac{0.6*0.4}{0.6+0.4} = 0.48$
$Balanced\ Acc = \frac{1}{2}(\frac{6}{10} + \frac{989}{1000}) = 0.795$
Correctly classifying an extra positive example increased the F1 score a bit more than the balanced accuracy.
Finally let's look at what happens when a model predicts there are still 15 positive examples (5 truly positive and 10 incorrectly labeled); however, this time the dataset is balanced and there are exactly 10 positive and 10 negative examples:
```
TP=5; FP=10; TN=0; FN=5
```
$Precision = \frac{5}{15}=0.33...$
$Recall = \frac{5}{10}= 0.5$
$F_1 = 2 * \frac{0.5*0.33}{0.5+0.33} \approx 0.4$
$Balanced\ Acc = \frac{1}{2}(\frac{5}{10} + \frac{0}{10}) = 0.25$
You can see that the F1-score did not change at all (compared to the first example) while the balanced accuracy took a massive hit (decreased by 50%).
This shows how F1-score only cares about the points the model said are positive, and the points that actually are positive, and doesn't care at all about the points that are negative.
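The first example above (TP=5, FP=10, TN=990, FN=5) can be reproduced directly with scikit-learn to check the two numbers:
```
import numpy as np
from sklearn.metrics import f1_score, balanced_accuracy_score

# 10 positives (5 found, 5 missed) and 1000 negatives (10 wrongly flagged)
y_true = np.concatenate([np.ones(10), np.zeros(1000)])
y_pred = np.concatenate([np.ones(5), np.zeros(5), np.ones(10), np.zeros(990)])

print(f1_score(y_true, y_pred))                 # 0.4
print(balanced_accuracy_score(y_true, y_pred))  # 0.745
```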
|
113874
|
1
|
113884
| null |
5
|
143
|
I understand the softmax equation is
$\boldsymbol{P}(y=j \mid x)=\frac{e^{x_{j}}}{\sum_{k=1}^{K} e^{x_{k}}}$
My question is: why use $e^x$ instead of, say, $3^x$? I understand $e^x$ is its own derivative, but how is that advantageous in this situation?
I'm generally trying to understand why Euler's number appears everywhere, especially in statistics and probability, but specifically in this case.
|
What is the advantage of using Euler's number (e^x) instead of another base in the softmax equation?
|
CC BY-SA 4.0
| null |
2022-08-26T15:06:39.390
|
2022-08-26T20:58:51.587
| null | null |
139699
|
[
"machine-learning",
"statistics",
"activation-function",
"softmax"
] |
Choosing a different base would squash the graph of the function uniformly in the horizontal direction, since
$$ a^x = e^{x\cdot \ln(a)}. $$
The exponential function with base $e$ is widely considered the simplest exponential function. It has nice properties that no other base has, mainly:
- The function $e^x$ is its own derivative.
- It has a particularly simple power series expansion:
$$ e^x = 1 + x + \frac12 x^2 + \frac16 x^3 + \cdots + \frac1{n!}x^n + \cdots $$
All of the coefficients are rational numbers. If the base had been something intuitively "nicer" than $e$, such as an integer, the coefficients would need to be irrational.
For this reason, most mathematicians will pick $e^x$ when they need an exponential function and have no particular reason to choose one base over another. (Except for computer scientists and information theorists, who sometimes prefer $2^x$).
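As a quick numerical illustration that changing the base only rescales the inputs, softmax with base $a$ applied to $x$ equals softmax with base $e$ applied to $x\ln(a)$. A minimal sketch:
```
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())     # base-e softmax (shifted for numerical stability)
    return e / e.sum()

def softmax_base(z, a):
    p = a ** (z - z.max())      # same formula with an arbitrary base a
    return p / p.sum()

x = np.array([1.0, 2.0, 0.5])
print(softmax_base(x, 3.0))     # base 3
print(softmax(x * np.log(3.0))) # identical: base e on rescaled inputs
```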
|
What is the advantage of using log softmax instead of softmax?
|
There are a number of advantages of using [log softmax over softmax](https://stats.stackexchange.com/questions/289369/log-probabilities-in-reference-to-softmax-classifier) including practical reasons like improved numerical performance and [gradient optimization](https://stats.stackexchange.com/questions/174481/why-to-optimize-max-log-probability-instead-of-probability). These advantages can be extremely important for implementation especially when training a model can be computationally challenging and expensive. At the heart of using log-softmax over softmax is the use of [log probabilities](https://en.wikipedia.org/wiki/Log_probability) over probabilities, which has nice information theoretic interpretations.
When used for classifiers the log-softmax has the effect of heavily penalizing the model when it fails to predict a correct class. Whether or not that penalization works well for solving your problem is open to your testing, so both log-softmax and softmax are worth using.
|
113898
|
1
|
114052
| null |
1
|
60
|
In pandas, if I use `series.apply()` to apply a function with an inner function definition, for example:
```
def square_times_two(x):
def square(y):
return y ** 2
return square(x) * 2
data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
df = pd.DataFrame.from_dict(data)
df["col_3"] = df.col_1.apply(square_times_two)
```
is the inner function redefined for each row? Would there be a performance impact to having many inner functions in a function applied to a large series?
|
do inner functions have a substantial impact when used in series.apply() in Pandas
|
CC BY-SA 4.0
| null |
2022-08-27T19:41:25.600
|
2022-09-02T19:07:02.837
|
2022-08-28T01:58:37.133
|
29169
|
139740
|
[
"python",
"pandas",
"dataframe",
"functions"
] |
The function will only be compiled once, but there may be a small overhead. This should be negligible though, since the inner function does not use variables from the outer one.
For the same reason, there does not seem to be any need to define the inner function there; you could just move it to the same level as the outer one.
```
import pandas as pd

def square(y):
return y ** 2
def square_times_two(x):
return square(x) * 2
data = {'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']}
df = pd.DataFrame.from_dict(data)
df["col_3"] = df.col_1.apply(square_times_two)
```
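The overhead claim can be checked with a quick, machine-dependent timing sketch (the toy functions mirror the ones above):
```
import timeit
import pandas as pd

s = pd.Series(range(100_000))

def square(y):
    return y ** 2

def flat(x):
    return square(x) * 2

def nested(x):
    def square_inner(y):
        return y ** 2
    return square_inner(x) * 2

print(timeit.timeit(lambda: s.apply(flat), number=10))
print(timeit.timeit(lambda: s.apply(nested), number=10))
# Expect both timings to be of the same order; the nested definition
# adds only a small per-call cost on top of apply's own per-row overhead.
```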
|
What is the efficient way to use apply method in column of pandas Dataframe for large dataset?
|
You can try `pandarallel`; it works very efficiently for parallel processing. You can find more information about it [here](https://github.com/nalepae/pandarallel).
You should not use this if your apply function is a lambda function. Now assuming you're trying to apply it on a DataFrame called `df`:
```
from pandarallel import pandarallel
pandarallel.initialize(nb_workers=n) #n is the number of worker used for parallelization, you can leave it blank and it will use all the cores
def foo(x):
return #what ever you're trying to compute
df.parallel_apply(foo, axis=1) #if you're applying to multiple columns
df[column].parallel_apply(foo) # if its just one column
```
Another option you can try is using the python `multiprocessing` library, here you will break your dataframe into smaller chunks and run them together.
```
import numpy as np
import pandas as pd
from multiprocessing import cpu_count, Pool
cores = cpu_count() #Gets number of CPU cores on your machine
partitions = cores #Define number of partitions
def parallelize(df, func):
df_split = np.array_split(df, partitions)
pool = Pool(cores)
df = pd.concat(pool.map(func, df_split))
pool.close()
pool.join()
return df
```
Now you can run this `parallelize` function on your `df`:
```
df = parallelize(df, foo)
```
The more number of cores you have the faster this will be!
|
113905
|
1
|
114251
| null |
1
|
147
|
I am following and expanding upon previous work from the winner of the [Melanoma Classification](https://www.kaggle.com/competitions/siim-isic-melanoma-classification/overview) from [here](https://github.com/haqishen/SIIM-ISIC-Melanoma-Classification-1st-Place-Solution).
The dataset has 9 classes. The competition is only interested in the one class (Melanoma).
I have taken the feature outputs (pre-final layer) from CNN and performed clustering. Then used this to group different classes (leaving Melanoma as its own group) then used this in the training.
I have already performed clustering with other steps (PCA, TSNE, K-Means, Hierarchical, LDA, QDA, NDA, etc.) and have results. I am largely trying to understand the maths (and background research) behind why this approach to retraining might improve performance (on the ROC-AUC of the class that was not grouped, i.e. melanoma).
Any advice / relevant papers welcome!
Thanks,
|
Using the results of clustering to retrain a neural network
|
CC BY-SA 4.0
| null |
2022-08-28T11:20:41.150
|
2022-09-14T08:12:03.827
|
2022-09-11T10:36:30.973
|
139758
|
139758
|
[
"clustering",
"convolutional-neural-network",
"pytorch",
"feature-extraction",
"retraining"
] |
I would agree with [Brian's answer](https://datascience.stackexchange.com/a/114250/100269) in the following sense:
All the steps you perform (i.e. embedding, clustering, retraining, ...) do not represent, in principle, qualitatively different math operations from what a dedicated deep model with non-linearities can do.
So, in this sense, I do not expect to get radically different performance than using a single NN model trained end-to-end.
That being said, whatever approach one uses (as I expect them to be equivalent) one will possibly have to deal with class imbalance wisely.
|
Clustering Data to learned cluster
|
The word "prediction" does not belong to any specific type of machine learning. There is nothing wrong with "predicting" new data to the cluster it belongs to; (e.g. there are many applications that place new customers into pre-discovered market segments). A conditional probability, like that used in classification, is not "stronger" than an unsupervised approach, as it rests its assumption on properly labelled classes; something that is not guaranteed.
This is why there are packages that provide a predict function to clustering algorithms. [Here](https://stackoverflow.com/questions/20621250/simple-approach-to-assigning-clusters-for-new-data-after-k-means-clustering) is an example using the flexclust package with the kcca function. That being said, the prediction step is usually handled by a supervised classifier, so the approach would be to sit a classifier on top of your learned clusters (treating cluster assignments as "labels").
You just have to reason about your weaknesses. As stated above, the weakness in classification is the assumption that labelled data is tagged correctly, whereas the weakness in clustering is that your discovered clusters are assumed to be valid. Unsupervised approached cannot be validated the same way it is done with classification. Clustering requires a variety of cluster validity techniques along with domain experience (e.g. show campaign managers your market segments to validate customer types).
Ultimately, you are just matching an incoming vector (new data) to the cluster most similar. For example, in k-means this could be accomplished by finding the smallest distance between the incoming vector and all the centroids of your clusters. This kind of pattern matching depends on the data you are using.
This works best for clustering techniques that have well-defined cluster objects with exemplars in the center, like k-means. Using hierarchical techniques means you would need to cut the tree to obtain flat clusters, then use the "label" assignment to run a classifier on top. This comes baked with a lot of assumptions, so you need to make sure you understand your data very well, and validate any clusters with non-technical users that have deep domain experience.
POSSIBLE APPROACH
If you're bent on using hierarchical clustering, then here is the general approach. Note I am not suggesting this is the best way. Every approach comes baked with a number of assumptions. You will need to work to understand your data, attempt many models, validate with stakeholders, etc.
Readers can use the [tutorial](https://joernhees.de/blog/2015/08/26/scipy-hierarchical-clustering-and-dendrogram-tutorial/) by Jörn Hees to get started in hierarchical clustering if needed:
Create some example data:
```
from matplotlib import pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
import numpy as np
np.random.seed(42)
a = np.random.multivariate_normal([10, 0], [[3, 1], [1, 4]], size=[100,])
b = np.random.multivariate_normal([0, 20], [[3, 1], [1, 4]], size=[50,])
X = np.concatenate((a, b),)
```
Confirm clusters exist in synthetic data:
```
plt.scatter(X[:,0], X[:,1])
plt.show()
```
[](https://i.stack.imgur.com/umdkg.png)
Generate the linkage matrix using the Ward variance minimization algorithm:
(This assumes your data should be clustered to minimize the overall intra-cluster variance in euclidean space. If not, try Manhattan, cosine or hamming. You can also try different linking options).
```
Z = linkage(X, 'ward')
```
Check the Cophenetic Correlation Coefficient to assess quality of clusters:
```
from scipy.cluster.hierarchy import cophenet
from scipy.spatial.distance import pdist
c, coph_dists = cophenet(Z, pdist(X))
0.98001483875742679
```
Calculate full dendrogram:
```
plt.figure(figsize=(25, 10))
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('sample index')
plt.ylabel('distance')
dendrogram(
Z,
leaf_rotation=90., # rotates the x axis labels
leaf_font_size=8., # font size for the x axis labels
)
plt.show()
```
[](https://i.stack.imgur.com/pNIHP.png)
Determine the number of clusters (e.g. can be done manually by looking for any large jumps in the dendrogram...see Jörn's blog for plotting function):
[](https://i.stack.imgur.com/vdhRL.png)
Retrieve clusters: (using our max distance determined from reading dendrogram)
```
from scipy.cluster.hierarchy import fcluster
max_d = 50
clusters = fcluster(Z, max_d, criterion='distance')
```
Map cluster assignments back to original frame:
```
import pandas as pd
def add_clusters_to_frame(or_data, clusters):
or_frame = pd.DataFrame(data=or_data)
or_frame_labelled = pd.concat([or_frame, pd.DataFrame(clusters)], axis=1)
return(or_frame_labelled)
df = add_clusters_to_frame(X, clusters)
df.columns = ['A', 'B', 'cluster']
df.head()
```
[](https://i.stack.imgur.com/1cG0Z.png)
Build a classifier using this "labelled" data:
Here, I'll just use the original data and the assigned clusters along with a knn classifier:
```
np.random.seed(42)
indices = np.random.permutation(len(X))
X_train = X[indices[:-10]]
y_train = clusters[indices[:-10]]
X_test = X[indices[-10:]]
y_test = clusters[indices[-10:]]
# Create and fit a nearest-neighbor classifier
from sklearn.neighbors import KNeighborsClassifier
knn = KNeighborsClassifier()
knn.fit(X_train, y_train)
res = knn.predict(X_test)
print(res)
print(y_test)
```
predicted labels: [2 2 1 1 2 2 1 2 2 1]
test labels: [2 2 1 1 2 2 1 2 2 1]
As with any classifier, your incoming data needs to be in the same representation as your training data. As new data arrives you run it against the predict function provided by your classifier (here we use sci-kit learn's knn.predict). This effectively assigns new data to the cluster it belongs to.
Ongoing cluster validation would be required in the model monitoring step of the machine learning workflow. New data can change the distribution and results of your approach. BUT, this isn't unique to unsupervised as all machine learning approaches will suffer from this (all models eventually go stale). As argued by Jörn in the reference above, manual inspection typically trumps automated approaches, so regular visual/manual inspection of the flat clusters is recommended.
|
113936
|
1
|
114973
| null |
0
|
92
|
I was researching "why do we freeze layers" and I came across an answer that says "to not lose the information of the pre-trained model". But we usually freeze only the early layers (I know why). For example, suppose our data is very similar to the data the model was trained on, and we do not freeze any layer. The model will make very small mistakes and need little further convergence, so we will not be destroying any information (the weights will change very little). Am I wrong?
If I am not, then why do we freeze any layers?
|
Weird consequence of not freezing layers in Neural Network
|
CC BY-SA 4.0
| null |
2022-08-29T16:44:15.447
|
2022-10-06T21:43:37.620
|
2022-08-29T22:56:22.040
|
29169
|
133184
|
[
"machine-learning",
"deep-learning",
"keras",
"training",
"transfer-learning"
] |
If the data is already similar, it doesn't make sense to train the lower layers (backbone), as your network will already be good for extracting features. Then you freeze them to quickly train your classifier (head).
As stated in the link quoted by Adrian, new layers have large gradients in the first epochs and this can affect the model. So if your data is similar but contains new information, large gradient updates during training will destroy your pre-trained features. The same applies to fine-tuning; you can check it here:
[https://keras.io/guides/transfer_learning/](https://keras.io/guides/transfer_learning/)
If the data is different, for example you want to use a model pre-trained on ImageNet to classify brain tumors, then losing these features doesn't make much difference; it would be better to freeze only the first layers, which can already extract low-level features such as horizontal/vertical edges.
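A minimal Keras sketch of the freeze-then-fine-tune pattern described in that guide, assuming an ImageNet-pretrained backbone and a hypothetical 5-class head:
```
from tensorflow import keras

# Pretrained backbone without its original classifier
base = keras.applications.MobileNetV2(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                       # freeze the backbone

inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)             # keep BatchNorm layers in inference mode
outputs = keras.layers.Dense(5, activation="softmax")(x)  # hypothetical 5-class head
model = keras.Model(inputs, outputs)

model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="categorical_crossentropy")
# model.fit(...)  # train only the head first

# Optional fine-tuning: unfreeze and continue with a much smaller learning rate
base.trainable = True
model.compile(optimizer=keras.optimizers.Adam(1e-5), loss="categorical_crossentropy")
# model.fit(...)
```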
|
What are the consequences of not freezing layers in transfer learning?
|
I think that the main consequences are the following:
- Computation time: If you freeze all the layers but the last 5 ones, you only need to backpropagate the gradient and update the weights of the last 5 layers. In contrast to backpropagating and updating the weights all the layers of the network, this means a huge decrease in computation time. For this reason, if you unfreeze all the network, this will allow you to see the data fewer epochs than if you were to update only the last layers weights'.
- Accuracy: Of course, by not updating the weights of most of the network you are only optimizing in a subset of the feature space. If your dataset is similar to any subset of the imagenet dataset, this should not matter a lot, but, if it is very different from imagenet, then freezing will mean a decrease in accuracy. If you have enough computation time, unfreezing everything will allow you to optimize in the whole feature space, allowing you to find better optima.
To wrap up, I think that the main point is to check if your images are comparable to the ones in imagenet. In this case, I would not unfreeze many layers. Otherwise, unfreeze everything but get ready to wait for a long training time.
|
113938
|
1
|
113963
| null |
0
|
34
|
Let's assume we have data about students in grade 10. We have test scores ranging from 0-100; however, we are only provided two labels: high score if the score > 80% and low score if the score < 80%.
Suppose we train a tree-based classifier; will the model learn to interpolate as well? When a calibrated tree is 10% confident that a record (A) is in the low class vs 40% for another record (B), can we say that record B is likely to have a higher score than record A?
How can we train a model to learn this without explicitly providing the absolute score?
[Edit] - Assume you have freedom to get all the input features you want. Ex: Family income, hours studies etc. in the training set.
|
If we train a binary classifier (lets say tree based) to predict ordinal data do they learn to interpolate?
|
CC BY-SA 4.0
| null |
2022-08-30T02:00:05.377
|
2022-08-30T21:08:39.830
|
2022-08-30T19:56:29.907
|
139825
|
139825
|
[
"machine-learning",
"classification",
"statistics",
"gradient-boosting-decision-trees"
] |
Depending on data and model fit, it is possible that confidence scores could proxy (relative) predicted performance. However, you cannot guarantee the relationship you describe would occur.
Even if some relationship to this effect does occur, confidence scores would not be easily interpretable. At best you may be able to produce a rough ordering of exam scores, which may produce reasonable results on aggregate. It is unlikely to be suitable for direct comparison of two samples, or for estimating absolute exam scores.
It would be easier to comment further with more information on the desired use case. Also note that this is far less likely to be effective if your model overfits on the training set.
|
Can Boosted Trees predict below the minimum value of the training label?
|
Yes, gradient boosted trees can make predictions outside the training labels' range. Here's a quick example:
```
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingRegressor
X, y = make_classification(random_state=42)
gbm = GradientBoostingRegressor(max_depth=1,
n_estimators=10,
learning_rate=1,
random_state=42)
gbm.fit(X,y)
preds = gbm.predict(X)
print(preds.min(), preds.max())
```
outputs `-0.010418732339562916 1.134566081403055` (and `make_classification` gives outputs just 0 and 1).
Now, this is unrealistic for a number of reasons: I'm using a regression model for a classification problem, I'm using learning rate 1, depth only 1, no regularization, etc. All of these could be made more proper and we could still find an example with predictions outside the training range, but it would be harder to construct such an example. I would say that in practice, you're unlikely to get anything very far from the training range.
See the (more theoretical) example in [this comment of an xgboost github issue](https://github.com/dmlc/xgboost/issues/1581#issuecomment-249853718), found via [this cv.se post](https://stats.stackexchange.com/q/304962/232706).
---
To be clear, decision trees, random forests, and adaptive boosting all cannot make predictions outside the training range. This is specific to gradient boosted trees.
|
113953
|
1
|
113958
| null |
5
|
483
|
As stated in the title, how do you manually calculate the variance of the least squares estimator in R?
I know that the least squares estimates have the following formula:
$$\hat{\beta}=(X^TX)^{-1} X^T Y, $$
and the variance of the least squares estimator is given by
$$Var(\hat{\beta}) = \sigma^2(X^TX)^{-1}$$
My question asks how to do that "manually," so I can understand the concept comprehensively; an R example would only serve to help me understand it. I can easily compute $(X^TX)^{-1}$ in R, but what about $\sigma^2$?
|
How to manually calculate the variance of the least squares estimator in R
|
CC BY-SA 4.0
| null |
2022-08-30T15:24:47.677
|
2022-08-30T18:43:48.887
| null | null |
139846
|
[
"regression",
"r"
] |
Let's build the entire example (You can use the [Wikipedia](https://en.wikipedia.org/wiki/Least_squares) page for reference on all formula below):
First, generate the model parameters (p=4 in this case)
```
p <- 4
beta <- rnorm(p)
```
Next let's simulate some observations from a linear model:
```
n <- 100
X <- cbind(1,t(replicate(n, rnorm(p-1))))
epsilon <- rnorm(n)
y <- X%*%beta + epsilon
```
Let's also obtain our estimates for $\beta_i$:
```
beta_hat <- solve((t(X)%*%X))%*%t(X)%*%y
```
Now we can calculate model predictions:
```
pred <- X%*%beta_hat
```
And finally calculate $\hat{\sigma}^2$:
```
sigma_2 <- sum((y-pred)^2)/(n-p)
```
Just for good measure, let's also calculate $Var(\hat{\beta})$:
```
beta_hat_covariance <- solve(t(X)%*%X)*sigma_2
beta_hat_var <- diag(beta_hat_covariance)
```
To make sure all our calculations are correct, we can treat the `lm` function as the source of truth and do:
```
my_lm_summary <- summary(lm(y ~ X-1))
# our calculation:
sigma_2
# is the same as:
my_lm_summary$sigma^2
# our calculation:
beta_hat_var
# is the same as:
my_lm_summary$coefficients[,2]^2
```
|
How to estimate the variance of regressors in scikit-learn?
|
I believe it is the probabilistic nature of a model that allows you to get the variance of predictions, or, more generally, the uncertainty of predictions, like the Gaussian process you mentioned. This is not simply available in standard regressors.
I think you should be looking at probabilistic regressors like BayesianRidge if you would like to estimate the uncertainty of your model. An implementation is also available in [scikit-learn](http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.BayesianRidge.html#sklearn.linear_model.BayesianRidge), as well as [this nice Python package](https://github.com/markdregan/Bayesian-Modelling-in-Python) based on PyMC3, or directly via [PyMC3](https://docs.pymc.io/notebooks/GLM-linear.html) itself. In the latter there are examples, such as Bayesian regression in a Jupyter Notebook, with good explanations.
In principle, Bayesian models do not return a single estimate for the model parameters, but a distribution that makes it possible to make inferences about new observations as well as to examine our uncertainty in the model. You may find this [post](https://towardsdatascience.com/bayesian-linear-regression-in-python-using-machine-learning-to-predict-student-grades-part-2-b72059a8ac7e) useful.
Note: adding a normal prior on the weights, as is done in Bayesian regression, turns the least-squares problem into regularized L2 regression under the hood as well (see the full mathematical derivation [here](https://wiseodd.github.io/techblog/2017/01/05/bayesian-regression/)).
Updated Answer: I totally forgot the classical yet simple and powerful Bootstrap Sampling method to calculate confidence intervals for machine learning algorithms. A textbook definition says:
>
Bootstrapping is a nonparametric approach to statistical inference
that substitutes computation for more traditional distributional
assumptions and asymptotic results. A number of advantages:
- The bootstrap is quite general, although there are some cases in which it fails.
- Because it does not require distributional assumptions (such as normally distributed errors), the bootstrap can provide more accurate inferences when the data are not well behaved or when the sample size is small.
- It is possible to apply the bootstrap to statistics with sampling distributions that are difficult to derive, even asymptotically.
- It is relatively simple to apply the bootstrap to complex data-collection plans (such as stratified and clustered samples).
Reference: Fox, John. Applied regression analysis and generalized linear
models. Sage Publications, 2015.
Please note you do not need a model with probabilistic nature. See this [post](https://machinelearningmastery.com/calculate-bootstrap-confidence-intervals-machine-learning-results-python/), or this [answer](https://stackoverflow.com/questions/16707141/python-estimating-regression-parameter-confidence-intervals-with-scikits-boots) or this [one](https://stats.stackexchange.com/questions/183230/bootstrapping-confidence-interval-from-a-regression-prediction).
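A minimal bootstrap sketch for a prediction interval (synthetic data; any scikit-learn regressor could be substituted):
```
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, noise=10.0, random_state=0)
x_query = X[:1]                                  # point we want an uncertainty estimate for

preds = []
rng = np.random.default_rng(0)
for _ in range(1000):
    idx = rng.integers(0, len(X), len(X))        # resample rows with replacement
    model = LinearRegression().fit(X[idx], y[idx])
    preds.append(model.predict(x_query)[0])

lower, upper = np.percentile(preds, [2.5, 97.5]) # 95% bootstrap interval for the prediction
```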
|
113971
|
1
|
113986
| null |
0
|
649
|
I am building artificial neuron network (ANN) model for predicting values but facing problem:
Input:
```
def create_model(optimizer = 'rmsprop', units = 16, learning_rate = 0.001):
ann = tf.keras.Sequential() # Initialising ANN
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding First Hidden Layer
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding Second Hidden Layer
ann.add(tf.keras.layers.Dense(units = Y.shape[1], activation = 'softmax')) # Adding Output Layer
###############################################
# Add optimizer with learning rate
if optimizer == 'rmsprop':
opt = tf.keras.optimizers.RMSprop(learning_rate = learning_rate)
elif optimizer == 'adam':
opt = tf.keras.optimizers.Adam(learning_rate = learning_rate)
elif optimizer == 'SGD':
opt = tf.keras.optimizers.SGD(learning_rate = learning_rate)
else:
raise ValueError('optimizer {} unrecognized'.format(optimizer))
##############################################
ann.compile(optimizer = optimizer, loss = 'categorical_crossentropy', metrics = ['accuracy']) # Compiling ANN
return ann
ann = KerasClassifier(model = create_model,
verbose = 2,
learning_rate = 0.001,
units = 16
)
optimizers = ['rmsprop', 'adam', 'SGD']
epoch_values = [10, 25, 50, 100, 150, 200]
batches = [10, 20, 30, 40, 50, 100, 1000]
units = [16, 32, 64, 128, 256]
lr_values = [0.001, 0.01, 0.1, 0.2, 0.3]
hyperparameters = dict(optimizer = optimizers,
epochs = epoch_values,
batch_size = batches,
units = units,
learning_rate = lr_values
)
grid = GridSearchCV(estimator = ann, cv = 5, param_grid = hyperparameters)
history = grid.fit(X_train,
Y_train,
batch_size = 32,
validation_data = (X_test, Y_test),
epochs = 100
) # Fitting ANN
```
Output error:
```
File c:\Users\dis\AppData\Local\Programs\Python\Python310\lib\site-packages\sklearn\model_selection\_search.py:875, in BaseSearchCV.fit(self, X, y, groups, **fit_params)
869 results = self._format_results(
870 all_candidate_params, n_splits, all_out, all_more_results
871 )
873 return results
--> 875 self._run_search(evaluate_candidates)
877 # multimetric is determined here because in the case of a callable
878 # self.scoring the return type is only known after calling
...
self._check_model_compatibility(y)
File "c:\Users\dis\AppData\Local\Programs\Python\Python310\lib\site-packages\scikeras\wrappers.py", line 551, in _check_model_compatibility
if self.n_outputs_expected_ != len(self.model_.outputs):
TypeError: object of type 'NoneType' has no len()
```
Data:
- X.shape -> (10, 2066)
- Y.shape -> (10, 4)
- X_train.shape -> (8, 2066)
- X_test.shape -> (2, 2066)
- Y_train.shape -> (8, 4)
- Y_test.shape -> (2, 4)
|
TypeError: object of type 'NoneType' has no len() when implementing neural network
|
CC BY-SA 4.0
| null |
2022-08-31T07:35:59.037
|
2022-09-01T07:32:06.970
|
2022-08-31T07:55:31.850
|
138833
|
138833
|
[
"machine-learning",
"python",
"deep-learning",
"neural-network",
"keras"
] |
When you use a `Sequential` model in tf.keras you need to provide the `input_shape` in the first layer or add an input layer.
Modify your code as follows:
```
ann = tf.keras.Sequential() # Initialising ANN
ann.add(tf.keras.layers.Dense(units = units, input_shape=(X.shape[1],), activation = "relu")) # Adding First Hidden Layer (input_shape is the number of features)
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding Second Hidden Layer
ann.add(tf.keras.layers.Dense(units = Y.shape[1], activation = 'softmax')) # Adding Output Layer
```
or adding an input layer as follows:
```
ann = tf.keras.Sequential() # Initialising ANN
ann.add(tf.keras.layers.Input(shape=(X.shape[1],))) # Input Layer (shape is the number of features)
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding First Hidden Layer
ann.add(tf.keras.layers.Dense(units = units, activation = "relu")) # Adding Second Hidden Layer
ann.add(tf.keras.layers.Dense(units = Y.shape[1], activation = 'softmax')) # Adding Output Layer
```
|
Tensorflow neural network TypeError: Fetch argument has invalid type
|
The problem lay in using the name 'cost' on two occasions; it was solved by changing this:
```
_, cost = tf_session.run([optimizer, cost], feed_dict = {champion_data: batch_input, item_data: batch_output})
```
to this:
```
_, c = tf_session.run([optimizer, cost], feed_dict = {champion_data: batch_input, item_data: batch_output})
```
This way the name of the variable 'c' doesn't clash anymore with the [optimizer, cost] part of the code.
|
114007
|
1
|
114036
| null |
0
|
104
|
Do we need to apply text cleaning practices for the task of sentence similarity?
Most models are being used with whole sentences that even have punctuation. Here are two example sentences that we wish to compare using SentenceTransformer (all-MiniLM-L6-v2):
```
sentences = [
"Oncogenic KRAS mutations are common in cancer.",
"Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs."]
# yields that sentence 2 has a score of 0.191 when compared with sentence 1
```
Will cleaning those sentences change their semantic meaning?
```
cleaned = ['oncogenic bras mutations common cancer',
'notably c-raf recently found essential development bras driven nsclcs.']
# yields that sentence 2 now has a score of 0.327 when compared to sentence 1
```
It seems the model works better when the text is cleaned. However, nowhere does it say that the input sentences are being (or should be) cleaned. I would love to know your takes on this.
---
|
Text cleaning when applying Sentence Similarity / Semantic Search
|
CC BY-SA 4.0
| null |
2022-09-01T08:01:24.687
|
2022-09-02T08:52:57.503
|
2022-09-02T08:47:16.270
|
139922
|
139922
|
[
"nlp",
"data-cleaning",
"transformer",
"semantic-similarity"
] |
Answer: Transformer-based models used for sentence similarity have been trained on huge amounts of data, where the text-preprocessing part is handled either at the tokenization step or by the attention mechanism of the transformer.
Applying cleaning methods and then using the cleaned text as input will worsen the quality of the embeddings, because the inputs then differ from the ones the model was trained on.
The attention mechanism is what downweights tokens that are meaningless and attends to the ones that are meaningful. A comma or a number can be meaningful in one context and meaningless in another, which is why we should not clean the text ourselves but use it as it is.
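As a quick illustration (assuming a recent version of the `sentence-transformers` package), you can feed the raw, uncleaned sentences directly:
```
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

raw = ["Oncogenic KRAS mutations are common in cancer.",
       "Notably, c-Raf has recently been found essential for development of K-Ras-driven NSCLCs."]
emb = model.encode(raw, convert_to_tensor=True)
print(util.cos_sim(emb[0], emb[1]))   # give the model the same kind of text it was trained on
```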
|
NLP data cleaning and word tokenizing
|
I summarize your questions and then try to answer under each bullet point:
- How to remove punctuation marks (e.g. # for hashtags which is used in social media)
The first go-to is a regular expression, which is used very frequently in data preprocessing. But if you want all punctuation removed from the text, you can use one of these two approaches:
```
import string
sentence = "hi; how*, @are^ you? wow!!!"
sentence = sentence.translate(sentence.maketrans('', '', string.punctuation))
print(sentence)
```
output: `'hi how are you wow'`
or use a regular expression, example:
```
import re
s = "string. With. Punctuation? 3.2"
s = re.sub(r'[^\w\s]','',s)
print(s)
```
output: `'string With Punctuation 32'`
- What are the possible python library for data cleaning?
Generally, NLTK and SciPy are very handy, but for specific purposes other libraries also exist, for example `contractions` and `inflect`.
- Which libraries are efficient in data cleaning when using pandas dataframes?
You can apply any function from any library to your pandas dataframe using the `apply` method.
Here is an example:
```
import pandas as pd
s = pd.read_csv("stock.csv", squeeze = True)
# adding 5 to each value
new = s.apply(lambda num : num + 5)
```
[code source](https://www.geeksforgeeks.org/python-pandas-apply/)
|
114015
|
1
|
114051
| null |
0
|
164
|
Hi, I am new to RNNs and have come across the following implementation of PyTorch's LSTM, but I can't understand how (or why) the `'bias'` and `'weight'` strings work in `_init_weights`.
```
class LSTM_LM(nn.Module):
def __init__(
self,
pretrained_emb: torch.tensor,
lstm_dim: int,
drop_prob: float = 0.0,
lstm_layers: int = 1,
):
super(LSTM_LM, self).__init__()
self.vocab_size = pretrained_emb.shape[0]
self.model = nn.ModuleDict({
'embeddings': nn.Embedding.from_pretrained(pretrained_emb, padding_idx=pretrained_emb.shape[0] - 1),
'lstm': nn.LSTM(
pretrained_emb.shape[1],
lstm_dim,
num_layers=lstm_layers,
batch_first=True,
                dropout=drop_prob),
            'ff': nn.Linear(lstm_dim, pretrained_emb.shape[0]),
            'drop': nn.Dropout(drop_prob)
})
# Initialize the weights of the model
self._init_weights()
def _init_weights(self):
all_parameters = list(self.model['lstm'].named_parameters()) + \
list(self.model['ff'].named_parameters())
for n, p in all_parameters:
if 'weight' in n:
nn.init.xavier_normal_(p)
elif 'bias' in n:
nn.init.zeros_(p)
```
EDIT
To be more precise, what part of the code makes it possible to check whether the string 'weight' appears in n? As I understand it, n is a parameter name, but does nn.LSTM expose its weights and biases under string names so that I could access them with something like LSTM.parameter('weight')[1], for instance?
I am not sure how to understand it in relationhsip (if there is such) to the variable section of: [https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html](https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html)
Update
I am now able to print `all_parameters` of LSTM. It looks like this:
```
[('weight_ih_l0', Parameter containing:
tensor([[-0.5299, 0.0481],
[-0.3032, 0.2907],
[-0.0553, -0.4933],
[-0.2063, -0.2334],
[-0.5127, -0.1538],
[-0.4484, 0.1707],
[-0.3729, 0.3518],
[-0.3200, 0.5846]], requires_grad=True)),
('weight_hh_l0', Parameter containing:
tensor([[-0.6242, 0.5774],
[ 0.7023, -0.3028],
[-0.4403, 0.2972],
[-0.3179, 0.4870],
[ 0.2489, 0.0627],
[ 0.6007, 0.3024],
[-0.3393, 0.1481],
[ 0.1212, -0.6172]], requires_grad=True)),
('bias_ih_l0', Parameter containing:
tensor([-0.2282, -0.0345, -0.3226, -0.5983, -0.0105, 0.3180, -0.1699, -0.5312],
requires_grad=True)),
('bias_hh_l0', Parameter containing:
tensor([ 0.4270, 0.0965, -0.3981, 0.6470, 0.3207, -0.0163, -0.4651, -0.0321],
requires_grad=True)),
('weight', Parameter containing:
tensor([[ 0.2041, 0.5927],
[ 0.4556, 0.1257],
[ 0.5357, -0.1195],
[ 0.0016, -0.1114]], requires_grad=True)),
('bias', Parameter containing:
tensor([ 0.0932, -0.5147, -0.6265, 0.2009], requires_grad=True))]
```
Although I don't see how that matches the variables in the PyTorch documentation that I linked to above, such as:
>
weight_ih_l[k] – the learnable input-hidden weights of the k-th layer (W_ii|W_if|W_ig|W_io), of shape (4*hidden_size, input_size) for k = 0. Otherwise, the shape is (4*hidden_size, num_directions * hidden_size). If proj_size > 0 was specified, the shape will be (4*hidden_size, num_directions * proj_size) for k > 0.
|
pytorchs LSTMs use of 'bias' and 'weight' strings
|
CC BY-SA 4.0
| null |
2022-09-01T11:21:25.040
|
2022-09-02T18:55:00.387
|
2022-09-02T13:38:33.590
|
134964
|
134964
|
[
"lstm",
"rnn",
"pytorch",
"bias",
"weight-initialization"
] |
The function `_init_weights` simply loops over all named parameters, applying Xavier normal initialization to the weights and initializing the biases to zero; the check `'weight' in n` is just a substring test on the parameter's name. The values you see in `all_parameters` match what is listed in the PyTorch documentation under the 'Variables' header. For example, `weight_ih_l0` and `weight_hh_l0` in your code correspond to the variables `weight_ih_l[k]` and `weight_hh_l[k]` in the documentation (with k being zero in this case). The plain `weight` and `bias` entries at the end come from the `ff` linear layer, whose parameters are simply named `weight` and `bias`.
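A quick way to see where those strings come from is to loop over `named_parameters()` of a small stand-in LSTM:
```
import torch.nn as nn

lstm = nn.LSTM(input_size=2, hidden_size=2)
for name, param in lstm.named_parameters():
    print(name, param.shape)        # e.g. weight_ih_l0, weight_hh_l0, bias_ih_l0, bias_hh_l0
    if 'weight' in name:            # plain substring check on the parameter's name
        nn.init.xavier_normal_(param)
    elif 'bias' in name:
        nn.init.zeros_(param)
```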
|
Pytorch LSTM not training
|
You look at loss at every batch. You should average your loss over all batches. When you look at different batches your loss may increase simply because one batch is harder to predict than the other one. That's why it's not really interpretable. So start with that. If the problem persists it's probably exploding gradients. In that case lower your learning rate to `1e-3` or `1e-4` or even less if it continues.
|
114019
|
1
|
114037
| null |
1
|
131
|
I want to recreate `catboost.utils.select_threshold`([desc](https://catboost.ai/en/docs/concepts/python-reference_utils_get_roc_curve)) method for `CalibratedClassifierCV` model.
In Catboost I can select desired fpr value, to return the boundary at which the given FPR value is reached.
My goal is to apply the same logic after computing fpr, tpr and thresholds from `sklearn.metrics.roc_curve`.
I have the following code
```
prob_pred = model.predict_proba(X[features_list])[:, 1]
fpr, tpr, thresholds = metrics.roc_curve(X['target'], prob_pred)
optimal_idx = np.argmax(tpr - fpr) # here I need to use FPR=0.1
boundary = thresholds[optimal_idx]
binary_pred = [1 if i >= boundary else 0 for i in prob_pred]
```
I guess it should be a simple formula, but I am not sure how to use the 0.1 value here to adjust the threshold.
|
Select threshold (cut-off point )for binary classification by desired fpr persentage value
|
CC BY-SA 4.0
| null |
2022-09-01T14:05:02.967
|
2022-09-02T10:38:44.580
| null | null |
117981
|
[
"classification",
"scikit-learn",
"metric",
"binary-classification",
"catboost"
] |
I've done my research and testing and it's that simple:
```
from sklearn.metrics import roc_curve

def select_threshold(proba, target, fpr_max=0.1):
    # calculate the roc curve
    fpr, tpr, thresholds = roc_curve(target, proba)
    # pick the largest threshold whose fpr is still <= fpr_max
    best_threshold = thresholds[fpr <= fpr_max][-1]
    return best_threshold
```
|
How to calculate TPR and FPR for different threshold values for classification model?
|
To calculate TPR and FPR for different threshold values, you can follow the following steps:
- First calculate prediction probability for each class instead of class prediction.
- Sort the test cases based on the predicted probability of the positive class (assuming the two classes are positive and negative).
- Then set different cutoff/threshold values on the probability scores and calculate $TPR= {TP \over (TP \ + \ FN)}$ and $FPR = {FP \over (FP \ + \ TN)}$ for each threshold value.
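A small sketch of these steps with NumPy (the labels and probabilities are synthetic):
```
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1])
proba  = np.array([0.1, 0.4, 0.35, 0.8, 0.55, 0.7])   # predicted probability of the positive class

for threshold in [0.3, 0.5, 0.7]:
    y_pred = (proba >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    print(threshold, tpr, fpr)
```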
|
114021
|
1
|
114023
| null |
0
|
41
|
I have train/test data for my text classification problem. I have used them to create and test several ML models (LogisticRegression, RandomForest, and LinearSVC).
The train and test data consist of many documents classified into several categories. The text is cleaned of dates and numbers; everything is lowercase with no punctuation. Where dates were removed, I substituted the word 'date'. I applied the same approach to invoice numbers, which were replaced with the word 'invoice'. This greatly helped my models because these specific words were given higher weight, and it improved classification.
Now that I have chosen the best model I plan to use it for the new data that will be coming. As for this new data, should I clean it before it goes to the trained model (as I clean my train/test data), or am I supposed to leave it as is?
|
Are you supposed to clean new data before it is fed to a machine learning model?
|
CC BY-SA 4.0
| null |
2022-09-01T14:28:06.910
|
2022-09-01T14:40:58.590
| null | null |
85604
|
[
"machine-learning",
"data-cleaning",
"text-classification"
] |
Yes, it makes perfect sense to clean/preprocess the new data much like the train/test dataset.
For reference:
[https://stackoverflow.com/questions/66301306/do-you-have-to-clean-your-test-data-before-feeding-into-an-nlp-model](https://stackoverflow.com/questions/66301306/do-you-have-to-clean-your-test-data-before-feeding-into-an-nlp-model)
|
When to clean data?
|
Data cleaning, or data munging as it is often called, is the process of transforming data from the raw form in which it exists after collection into another format, with the intent of making it more appropriate for further processing, e.g. training models.
This process takes place at the beginning of the whole procedure, before training and validating the models. In text mining problems, you also have to handle punctuation marks, remove stopwords (this depends on the data representation you choose: for unigrams it is fine, but for bigrams it is not recommended at all) and apply stemming or lemmatization.
|
114046
|
1
|
114047
| null |
0
|
96
|
I did a binary classification using "Random Forest".
The code block is
```
clf = RandomForestClassifier()
clf.fit(X_train, y_train)
R_y_pred = clf.predict(X_test)
print(classification_report(y_test, R_y_pred))
```
The result is
```
precision recall f1-score support
0 0.91 0.98 0.94 1023
1 *0.79 0.48* 0.60 185
accuracy 0.90 1208
macro avg 0.85 0.73 0.77 1208
weighted avg 0.89 0.90 0.89 1208
```
When I apply `clf.get_params()` command to see the default parameters, I got
```
{'bootstrap': True,
'ccp_alpha': 0.0,
'class_weight': None,
'criterion': 'gini',
'max_depth': None,
'max_features': 'sqrt',
'max_leaf_nodes': None,
'max_samples': None,
'min_impurity_decrease': 0.0,
'min_samples_leaf': 1,
'min_samples_split': 2,
'min_weight_fraction_leaf': 0.0,
'n_estimators': 100,
'n_jobs': None,
'oob_score': False,
'random_state': None,
'verbose': 0,
'warm_start': False}
```
Now in another code, I defined the `criterion` for RandomForestClassifier
The code block is
```
cri_clf = RandomForestClassifier(criterion = 'gini')
cri_clf.fit(X_train, y_train)
cri_y_pred = cri_clf.predict(X_test)
print(classification_report(y_test, cri_y_pred))
```
The result is
```
precision recall f1-score support
0 0.91 0.98 0.94 1023
1 *0.80 0.46* 0.59 185
accuracy 0.90 1208
macro avg 0.86 0.72 0.77 1208
weighted avg 0.89 0.90 0.89 1208
```
So, you can see that there is a slight difference in precision and recall when I define the criterion explicitly compared with not defining it.
If all the parameters are the same for the two code blocks, why do I get different results?
Thank you.
|
Different result of classification with same classifier and same input parameters
|
CC BY-SA 4.0
| null |
2022-09-02T16:24:38.987
|
2022-09-02T16:49:15.690
| null | null |
63745
|
[
"machine-learning",
"scikit-learn",
"pandas",
"random-forest",
"binary-classification"
] |
From [sklearns random forest documentation](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestClassifier.html):
>
random_state int, RandomState instance or None, default=None
Controls both the randomness of the bootstrapping of the samples used when building trees (if bootstrap=True) and the sampling of the features to consider when looking for the best split at each node (if max_features < n_features). See Glossary for details.
Each time you re-run this with `random_state = None` it runs different models.
Set random_state to `0` (or any number) and see consistent results.
|
What circumstances causes two different classifiers to classify data exactly like one another
|
Your results are reasonable. Your data brings several ideas to mind:
1) It is quite reasonable that as you change the available features, this will change the relative performance of machine learning methods. This happens quite a lot. Which machine learning method performs best often depends on the features, so as you change the features the best method changes.
2) It is reasonable that in some cases, disparate models will reach the exact same results. This is most likely in the case where the number of data points is low enough or the data is separable enough that both models reach the exact same conclusions for all test points.
|
114062
|
1
|
114073
| null |
3
|
116
|
I am training a keras model that utilizes `early_stopping` in order to prevent overfitting. This requires that I set aside a validation dataset.
My task requires that I keep my training and validation split by time, so that all samples in my validation set occur after the point in time of those in my training set.
My challenge is that the examples in my validation (by definition the most recent examples in time) are very important for my prediction task and I would like to use them to train a final model. From all I can see, it seems that in general it is recommended to train a final model (to be released to production) on all data available, after model configuration has been decided upon in a traditional train/test period (see [here](https://machinelearningmastery.com/train-final-machine-learning-model/)).
However, if I use all of my data to train a final model, I no longer can utilize `early_stopping`, since I will not have any validation set (it will be being used for training).
I could randomly sample a subset of my training data to use for validation (instead of using the most recent data as I was during training/testing), but then I worry that due to the time series dynamic of the problem I am running the risk of data leakage.
My question really boils down to:
>
What is the preferred way to train a final, production model when in Keras (or another framework)?
Thanks!
|
Should I use validation data and val_loss when training final model?
|
CC BY-SA 4.0
| null |
2022-09-03T12:43:23.023
|
2022-09-03T17:24:02.897
| null | null |
90341
|
[
"machine-learning",
"neural-network",
"keras",
"machine-learning-model",
"cross-validation"
] |
- Especially for time series work, yes, use your full dataset for training your final model.
- Keep your number of epochs the same as the best performance on your val_loss.
- If you want, you can remove the same period of time from the start of the training data, to ensure the model is given a consistent number of samples to learn over.
This is a big challenge when shipping models to production: you now have no validation set, so how do you know how well it is performing?
- This is where you need to get creative, and use different time-series k-fold splitting strategies.
When working on TS problems that you want to ship to production, I like to use a training, validation and holdout set, so you can test your model's true performance.
Otherwise you will be overfitting on your validation set through early stopping (you are leaking information from your validation set into the Keras training process).
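For the time-series splitting strategies mentioned above, scikit-learn's `TimeSeriesSplit` gives time-ordered folds you can use for this kind of evaluation (the array below is just a placeholder for your time-ordered samples):
```
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

X = np.arange(100).reshape(-1, 1)          # samples already sorted by time
tscv = TimeSeriesSplit(n_splits=5)
for train_idx, val_idx in tscv.split(X):
    # every validation fold occurs strictly after its training fold
    print(train_idx[-1], "<", val_idx[0])
```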
Hope this helps.
|
cross validation on whole data set or training data?
|
Yes, it's called overfitting. Your model is beginning to memorize the training set, but not performing well on any validation or test set. If your question is why is this happening, I'd like to refer you to [another answer](https://stats.stackexchange.com/a/365806/119015) I wrote explaining this phenomenon in more detail.
One interesting question that could be made is why is the performance on the cross-validation folds worse than on the test set?
This is a bit more tough to answer, because I don't have all the details. Some possible explanations could be that the since the training set is larger than each fold, the model was trained better, or that simply the test set examples happened to be easier.
|
114071
|
1
|
114139
| null |
1
|
74
|
What is best practice for applying traditional NLP extraction techniques a pre-processing for ML models?
Given a pipeline:
- Collect raw data.
- Parse full data set with a variety of traditional NLP techniques, to create model-compatible features (e.g. one-hot encoded matrix of entity extraction).
- Train a ML model on the data.
My intuition says you must split the data between steps 1 and 2, for example, only fitting TF-IDF or NMF on your training set.
But I have often seen, in papers and in production, that non-deep-learning NLP techniques are applied before the data split.
|
Avoid leakage in NLP extraction
|
CC BY-SA 4.0
| null |
2022-09-03T17:09:21.783
|
2022-09-06T12:36:54.100
| null | null |
82468
|
[
"nlp",
"training",
"model-evaluations",
"data-leakage"
] |
It is best practice to split the data into train and test datasets. Make modeling choices only on the train data set. Evaluate the usefulness of those choices on the test dataset.
Traditional NLP extraction techniques follow the same logic because they often have modeling choices. One example is the number of topics in non-negative Matrix Factorization (NMF). It is best practice to choose the number of topics on the training dataset, and then evaluate the quality of those topics on the test dataset.
The same logic holds true when estimating a statistic and then making modeling choices on that statistic. Tf–idf (term frequency-inverse document frequency) is a common example. It is best practice to estimate tf-idf on the training set only because later modeling choices are made (or not made) based on tf-idf statistics.
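A minimal sketch of keeping the extraction inside the training data (the documents and labels below are placeholders):
```
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

docs = ["first document", "second document", "another text", "more text here"]
labels = [0, 1, 0, 1]
docs_train, docs_test, y_train, y_test = train_test_split(
    docs, labels, test_size=0.5, random_state=0, stratify=labels)

vec = TfidfVectorizer()
X_train = vec.fit_transform(docs_train)   # tf-idf statistics estimated on the training set only
X_test = vec.transform(docs_test)         # test set is only transformed, never fitted on

clf = LogisticRegression().fit(X_train, y_train)
print(clf.score(X_test, y_test))
```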
|
what can be done using NLP for a small sentence samples?
|
First I think it's worth mentioning that in the context of an exploratory study with a small dataset, manual analysis is certainly as useful as applying NLP methods (if not more) since:
- Small size is an advantage for manual study and a disadvantage for automatic methods.
- There's no particular goal other than uncovering general patterns or insights, so it's unlikely that the results of an automatic unsupervised method would exhibit anything not directly observable.
That being said one can always apply automatic methods indeed, if only for the sake of observing what they can capture or not.
- Observing frequency (point 1) can always be useful. You may consider variants with/without stop words and using document frequency (number of documents containing a term) instead of term frequency.
- points 3 and 5 are closely related: LDA essentially clusters the sentences by their similarity using conditional words probabilities as hidden variable. But the small size makes things difficult for any probabilistic method, and there could be many sentences which have little in common with any other.
- Syntactic analysis with dependency parsing can perfectly be applied to any sentence, but the question is what for? As far as I know this kind of advanced analysis is not used for exploratory study, it's used for specific applications where one needs to obtain a detailed representation of the full sentence. Traditionally this was used for higher-level tasks involving semantics, often together with semantic role labeling and/or relation extraction. I'm not even sure that this kind of symbolic representation is still in use now that end-to-end neural methods have become state of the art in most applications.
- I agree that summarizing a short sentence is pointless. You could try to summarize the whole set of sentences though, if that makes sense.
In the logic of playing with any possible NLP method, you could add a few things to your list:
- Lemmatizing the words, this can actually be useful as preprocessing.
- Using embeddings or not: on the one hand this can help finding semantic similarities through the embedding space, on the other hand the small size makes it questionable to project the data in a high dimension space.
- Finding collocations (words which tend to appear together in the same sentence) with association measures such as Pointwise Mutual Information.
- Spelling correction and/or matching similar words with string similarity measures.
- It's unlikely that there's any interest in it but there are also stylometry methods, i.e. studying the style of the text instead of the content. These range from general style like detecting the level of formality or readability to trying to predict whether two texts were authored by the same person.
|
114074
|
1
|
114075
| null |
2
|
106
|
I found a breast cancer dataset on Kaggle. Here is the link - [https://www.kaggle.com/datasets/reihanenamdari/breast-cancer](https://www.kaggle.com/datasets/reihanenamdari/breast-cancer)
I would like to know how I can find out which research papers use this dataset for binary classification.
So far, after searching on Google Scholar, I have found only one paper, "Breast Cancer Survival Prediction from Imbalanced Dataset with Machine Learning Algorithms", that uses this dataset.
If there is any technique for finding research papers for a particular dataset, please let me know.
Thank you.
|
Finding research papers for a dataset
|
CC BY-SA 4.0
| null |
2022-09-03T17:31:13.713
|
2022-09-03T17:44:10.293
| null | null |
63745
|
[
"machine-learning",
"binary-classification"
] |
Super important question.
The reason is that this is not the original source.
If you go to the data -> meta data -> sources, you can see the source is:
`JING TENG, January 18, 2019, "SEER Breast Cancer Data", IEEE Dataport, doi: https://dx.doi.org/10.21227/a9qy-ph35. https://ieee-dataport.org/open-access/seer-breast-cancer-data`
Then searching google datasets for the DOI number, we can click through onto the google scholar link to get the following:
[https://scholar.google.com/scholar?q=%22ieee%20dataport%20org%20open%20access%20seer%20breast%20cancer%20data%22](https://scholar.google.com/scholar?q=%22ieee%20dataport%20org%20open%20access%20seer%20breast%20cancer%20data%22)
|
Where can I find the applied data science research papers?
|
If you're looking for conferences that focus on applied data science and have a high ranking, there are several options you can consider. While it's true that some conferences may have a more theoretical emphasis, there are also reputable conferences that highlight practical and applied aspects of data science. Here are a few suggestions:
- ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD): KDD is one of the premier conferences in data mining and knowledge discovery. It covers a wide range of topics including applied data science, machine learning, data mining, and big data analytics.
- IEEE International Conference on Data Mining (ICDM): ICDM is another top conference in the field of data mining. It brings together researchers and practitioners to discuss the latest advancements in data mining and its applications.
- International Conference on Machine Learning (ICML): While ICML does have a theoretical focus, it also accepts and features applied data science papers. It is a leading conference in the machine learning community and covers a broad range of topics.
- International Joint Conference on Artificial Intelligence (IJCAI): IJCAI is a prestigious conference in the field of artificial intelligence. While it does include theoretical research, it also accepts and showcases applied data science papers.
- International Conference on Data Science and Advanced Analytics (DSAA): DSAA focuses specifically on data science and advanced analytics. It welcomes submissions related to practical applications, data-driven solutions, and real-world case studies.
These conferences are known for their rigorous review process and attract top researchers and practitioners in the field. Keep in mind that acceptance rates for these conferences can be highly competitive, so ensure that your work aligns well with the conference's scope and requirements.
Additionally, you can also explore domain-specific conferences in areas such as healthcare, finance, or industry-specific data science conferences. These conferences often highlight applied research and real-world applications within their respective domains.
Remember to check the websites of these conferences for the most up-to-date information on submission deadlines, conference dates, and paper requirements.
|
114091
|
1
|
114112
| null |
1
|
98
|
I have videos from a computer game. In this computer game, during the rounds, there is a chat box where players can write messages. I want to read the content of this chatbox.
[](https://i.stack.imgur.com/c9nX8.png)
Difficulties are here:
- The chatbox is always different in size, depending on how much has been written.
- Sometimes there is no chatbox at all, because nobody writes anything.
- Sometimes the chatbox is covered by other HUD windows.
- Parts of the video are in the menu or on the desktop. Not all are in the game.
At first I thought I would break the problem down into individual steps.
- Split video into frames
- use an image classifier to see if it is a gamescreen at all.
- cut out approximately where the chatbox could be.
- detect with an object detection in which area the chat is and cut out the picture like this. So that no other HUD elements are in the image.
- use Tesseract for the actual text detection.
But I think this is very complicated. Would it be better to do 2,3,4 directly with object detection? So something like this:
- Split videos into frames
- detect if there is a chat at all and if so where
- crop
- text recognition
Before I label 10,000 images, I wanted to ask what the right approach is.
Thanks a lot!
|
Recognize chatbox on game screenshots
|
CC BY-SA 4.0
| null |
2022-09-04T15:04:04.050
|
2022-09-05T10:15:37.950
|
2022-09-04T15:47:42.077
|
140032
|
140032
|
[
"neural-network",
"image-classification",
"convolutional-neural-network",
"object-detection"
] |
Yes, it seems to be the right general approach; however, I recommend cutting the problem down into smaller pieces in order to be efficient.
First of all, you should ensure that the most important function, text recognition, works well: if you have done good area detection and good screen recognition but don't have good text recognition, you will have wasted time. Furthermore, there are always tricks to get the right area and the right frame, but reliable text recognition is more complex.
That's why I would start by training and testing the text recognition function on already-cropped text areas. The aim is to reach a very good result. Tesseract is probably the best library to do that. Nevertheless, the background is an image, so it is important to check that the text recognition is correct on a hundred pictures or so.
In addition, I would recommend starting with 200 random pictures instead of 10,000. Very often you can find many mistakes with a first small sample and correct them without processing the whole dataset. Then you progressively increase the amount, because there are always unexpected special cases (e.g. a life bar with text in the background of the chat text).
It could be interesting to apply object detection to get the bottom menu coordinates, as the text position may differ with screen resolution. But it could be even simpler to use the window size and express the bottom menu positions as percentages of it. Also consider all kinds of user settings (e.g. removal of the bottom menu, changes to the background transparency, etc.).
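A rough sketch of testing the OCR step first (the frame path, crop box, and the OpenCV/pytesseract dependencies are assumptions for illustration):
```
import cv2
import pytesseract

frame = cv2.imread("frame_0001.png")              # one extracted video frame (hypothetical path)
x, y, w, h = 20, 700, 500, 200                    # hypothetical chatbox region in pixels
chat_crop = frame[y:y+h, x:x+w]

# binarize to separate text from the game background before OCR
gray = cv2.cvtColor(chat_crop, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

text = pytesseract.image_to_string(binary)        # verify quality on a small sample first
print(text)
```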
|
Which approach for user classification on chat text (classifier, representation, features)?
|
You're asking what ML representation you should use for user-classification of chat text. User-classification is not the usual text-processing task.
It's not strictly necessary to semantically understand what the user is saying, only how they're saying it; so we look for telltale features indicative of a specific user. And we don't necessarily need to use, or solely rely on, the usual text-processing representations like bag-of-words, word-counts, TFIDF and word-vector.
Here are some features which are predictive of the user:
- character length, word length, sentence length of each comment
- typing speed (esp. if you have timestamps in seconds)
- ratio of punctuation (e.g. 17 punctuation symbols in 80 chars = 17/80)
- ratio of capitalization
- ratio of numerals
- ratio of whitespace
- character n-grams (and notice these can pick up e.g. l0ser, f##k, :-) )
- use of Unicode (emojis, symbols e.g. stars)
- ratio of specific punctuation (e.g. how many '.', '!', '?', '*', '#' )
- word-counts, esp. anything statistically anomalous, foreign, slang, insults
- anything else you can think of that seems predictive for these two users, e.g. number of misspelled words per sentence (may be actual typos, or come from predictive swiping on a cellphone)
|
114103
|
1
|
114104
| null |
0
|
33
|
I am working on face emotion detection using FER2013 dataset using tensorflow and vgg16 model.
I am applying t-sne to my training dataset for dimensionality reduction.
My question is that "is dimensionality reduction required for the tensorflow ????
|
Hello guys, is dimension reduction required for tensorflow?
|
CC BY-SA 4.0
| null |
2022-09-05T06:18:53.463
|
2022-09-05T07:12:40.720
| null | null |
139687
|
[
"python",
"deep-learning",
"tensorflow",
"transfer-learning",
"vgg16"
] |
Dimensionality reduction is not related to TensorFlow's CNN training:
- Dimensionality reduction (such as t-SNE) is mainly used for unsupervised clustering and visualization. It is not certain that you will cluster expressions cleanly, because they are not the most distinctive features of faces.
- CNN training is supervised: you indicate the expressions to recognize. For instance, all pictures flagged with "happiness" will teach the network the visual cues of happiness (around the eyes, the smile, etc.).
I would recommend applying dimensionality reduction to the CNN's output, i.e. the softmax activations.
[https://fr.mathworks.com/help/deeplearning/ug/view-network-behavior-using-tsne.html](https://fr.mathworks.com/help/deeplearning/ug/view-network-behavior-using-tsne.html)
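A hedged sketch of that idea, reducing the network's pooled activations rather than the raw pixels (the feature extractor and input shapes are illustrative, not tied to your exact FER2013 pipeline):
```
import numpy as np
import tensorflow as tf
from sklearn.manifold import TSNE

base = tf.keras.applications.VGG16(include_top=False, pooling="avg",
                                   input_shape=(48, 48, 3), weights="imagenet")
images = np.random.rand(100, 48, 48, 3).astype("float32")   # stand-in for a batch of face images

features = base.predict(images)                          # pooled activations, shape (100, 512)
embedded = TSNE(n_components=2).fit_transform(features)  # 2-D points for visualisation
```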
|
Are dimensionality reduction techniques useful in deep learning
|
It highly depends on your task, your data and your network. Basically, `PCA` is a linear transformation of the current features. Suppose your data are images or another kind of data in which locality is important: if you use `PCA` you are throwing away that locality information. Consequently, it is clear why people usually do not use it with convolutional networks. For sequential tasks, it again depends highly on whether your agent is online or not. If it is online, you don't have the entire signal from the beginning; and even when you do have it, for offline tasks, such transformations again throw away sequential information, so I have not seen them used there. I guess their main use is in tasks that can be solved using simple `MLPs`, where you don't keep sequential or local information. In those tasks, because `PCA` reduces highly correlated features, the number of parameters of your model can be reduced significantly.
|
114134
|
1
|
114141
| null |
1
|
47
|
For example, I have layers that are pretrained, but when predicting, the loss is very high. This is not because of the pretrained layers but because of the layers that are not pretrained. Will every layer be affected by backpropagation in the same way?
|
Is backpropagation applied every layer the same?
|
CC BY-SA 4.0
| null |
2022-09-05T21:15:03.687
|
2022-09-06T06:22:20.400
| null | null |
133184
|
[
"deep-learning",
"nlp",
"convolutional-neural-network",
"training",
"backpropagation"
] |
This depends on how you configure the training process:
You can, for instance, freeze the pretrained layers; this implies that only the non-pretrained layers will be updated.
You can also set different learning rates to different layers, so that the pretrained layers are assigned a very small learning rate that allows them to be updated but not too fast.
Therefore, backpropagation is the same for all layers but the weight update strategy can be different.
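A small PyTorch sketch of both options (the two-layer model is a toy stand-in; the same ideas apply in Keras):
```
import torch
import torch.nn as nn

pretrained_part = nn.Linear(10, 10)   # stands in for the pretrained layers
new_head = nn.Linear(10, 2)           # stands in for the newly added layers

# Option 1: freeze the pretrained layers entirely (only the head would be updated)
# for p in pretrained_part.parameters():
#     p.requires_grad = False

# Option 2: give the pretrained layers a much smaller learning rate than the head
optimizer = torch.optim.SGD([
    {"params": pretrained_part.parameters(), "lr": 1e-5},
    {"params": new_head.parameters(), "lr": 1e-2},
])
```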
|
Backpropagation with multiple different activation functions
|
In short, all the activation functions in the backpropagation algorithm are evaluated independently through the chain rule, thus, you can mix and match to your hearts content.
---
# What are we optimizing in backpropagation?
Backpropagation allows you to update your weights as a gradient function of the resulting loss. This will tend towards the optimal loss (the highest accuracy). After each forward pass of your training stage, you get an output at the last layer. You then calculate the resulting loss $E$.
The consequence of each of your weights on your final loss is computed using its partial derivative. In other words, this is how much loss is attributed to each weight. How much error can be attributed to that value. The larger this value is the more the weight will change to correct itself (training).
$\frac{\partial E}{\partial w^k_{i, j}}$
How can we compute such a random partial derivative? Using the chain rule of derivatives, and putting together everything that led to our output during the forward pass. Let's look at what led to our output before getting into the backpropagation.
# The forward pass
In the final layer of a 3-layer neural network ($k = 3$), the output ($o$), is a function ($\phi$) of the outputs of the previous layer ($o^2$) and the weights connecting the two layers ($w^2$).
$y_1 = o^3_1 = \phi(a^3_1) = \phi(\sum_{l=1}^n w^2_{l,1}o^2_l)$
The function $\phi$ is the activation function for the current layer. Typically chosen to be something with an easy to calculate derivative.
You can then see that the previous layers' outputs are calculated in the same way.
$o^2_1 = \phi(a^2_1) = \phi(\sum_{l=1}^n w^1_{l,1}o^1_l)$
So the outputs of the third layer can also be written as a function of the outputs of layer 1 by substituting the outputs of layer 2. This point becomes important for how the backpropagation propagates the error along the network.
# Backpropagation
The partial derivative of the error in terms of the weights is broken down using the chain rule into
$\frac{\partial E}{\partial w^k_{i, j}}$ = $\frac{\partial E}{\partial o^k_{j}} \frac{\partial o^k_{j}}{\partial a^k_{j}} \frac{\partial a^k_{j}}{\partial w^k_{i,j}}$.
Let us look at each of these terms separately.
# 1. $\frac{\partial E}{\partial o^k_{j}}$
is the error attributed to the output of that layer. For the last layer, using the squared (L2) loss, the error of the first output node is
$\frac{\partial E}{\partial o^3_{1}} = \frac{\partial E}{\partial y_{1}} = \frac{\partial }{\partial y_{1}} 1/2(\hat{y}_1-y_1)^2 = y_1 - \hat{y}_1$
In words, this is how far our result, $y_1$, from the actual target $\hat{y}_1$.
This is the same for all previous layers, where we need to substitute in the errors propagating through the network, this is written as
$\frac{\partial E}{\partial o^k_{j}} = \sum_{l \in L} (\frac{\partial E}{\partial o^{k+1}_{l}} \frac{\partial o^{k+1}_{l}}{\partial a^{k+1}_{l}} w^{k}_{j,l}) $
where L is the set of all neurons in the next layer $k+1$.
# 2. $\frac{\partial o^k_{j}}{\partial a^k_{j}}$
This is where the current layer's activation function will make a difference. Because we are taking the derivative of the output as a function of its input. And the output is related to the input through the activation function, $\phi$.
$\frac{\partial o^k_{j}}{\partial a^k_{j}} = \frac{\partial \phi(a^k_{j})}{\partial a^k_{j}}$
So just take the derivative of the activation function. For logistic function this is easy and its
$\frac{\partial o^k_{j}}{\partial a^k_{j}} = \frac{\partial \phi(a^k_{j})}{\partial a^k_{j}} = \phi(a_j)(1-\phi(a_j))$
# 3. $\frac{\partial a^k_{j}}{\partial w^k_{i,j}}$
$a$ is simply a linear combination of the weights and the previous layer's outputs. Thus,
$\frac{\partial a^k_{j}}{\partial w^k_{i,j}} = o_i$
# Finally
You can see that the activation functions of your layers are evaluated separately in the backpropagation algorithm. They will just be added onto your ever growing back-chain as independent terms within your chain rule.
|
114140
|
1
|
114144
| null |
1
|
61
|
I am trying to perform clustering on the Market-1501 dataset. The approach that I am using is as follows:
- I train a Person-Reid Model (using this repository: Reid-Strong-Baseline)
- Use a version of depth first search for clustering data (not part of the training set) into individual classes.
Although the Rank-1, Rank-5 metrics of the ReID model are very good, the overall effect of clustering is rather disappointing. I am also struggling to find relevant literature that could help me.
Does anyone have any pointers on where I could at least find relevant literature (i.e Person-Reid followed by clustering). Thanks in advance.
PS: I have posted the same question on Stackoverflow. Thought that this would be a more apt place for this discussion.
|
Clustering on Market-1501 dataset
|
CC BY-SA 4.0
| null |
2022-09-06T06:08:37.437
|
2022-09-06T07:37:34.070
| null | null |
140086
|
[
"clustering",
"machine-learning-model",
"pytorch"
] |
Using the ReID output seems to be the right approach because you tell the network what to learn, but you have to choose the right output: it should be something like a softmax activation giving scores for the different possible identities.
Then, you can use that output to train dimensionality reduction algorithms like UMAP or t-SNE: they give good results because they are non-linear, i.e. they are able to capture complex correlations between features.
Here is a playground:
[https://projector.tensorflow.org/](https://projector.tensorflow.org/)
Here is an interesting code with fashion images:
[https://github.com/zalandoresearch/fashion-mnist](https://github.com/zalandoresearch/fashion-mnist)
They have also a reproducibility function:
[https://umap-learn.readthedocs.io/en/latest/reproducibility.html](https://umap-learn.readthedocs.io/en/latest/reproducibility.html)
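A minimal sketch of that reduction step (the embedding matrix is a placeholder for your ReID model's per-image feature vectors; assumes the `umap-learn` package):
```
import numpy as np
import umap

embeddings = np.random.rand(500, 2048)         # e.g. one feature vector per gallery image
reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, random_state=42)
points_2d = reducer.fit_transform(embeddings)  # 2-D layout you can cluster or inspect visually
```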
|
Clustering Customer Data
|
The answer could be anything according to your data! As you can not post your data here, I propose to spend some time on EDA to visualize your data from various POVs and see how it looks like. My suggestions:
- Use only price and quantity for a 2-d scatter plot of your customers. In this task you may need feature scaling if the scale of prices and quantities are much different.
- In the plot above, you may use different markers and/or colors to mark category or customer (as one customer can have several entries)
- Convert "date" feature to 3 features, namely, year, month and day. (Using Python modules you may also get the weekday which might be meaningful). Then apply dimensionality reduction methods and visualize your data to get some insight about it.
- Convert date to an ordinal feature (earliest date becomes 0 or 1 and it increases by 1 for each day) and plot total sale for each customer as a time-series and see it. You may do the same for categories. These can also be plotted as cumulative time-series. This can also be done according to year and month.
All above are just supposed to give you insight about the data (sometimes this insight can give you a proper hint for the number of clusters). This insight sometimes determines the analysis approach as well.
If your time-series become very sparse then time-series analysis might not be the best option (you can make it more dense by increasing time-stamp e.g. weekly, monthly, yearly, etc.)
The idea in your comment is pretty nice. You can use this cumulative features and apply dimensionality reduction methods to (again) see the nature of your data. Do not limit to [linear](http://scikit-learn.org/stable/modules/generated/sklearn.decomposition.PCA.html) ones. Try [nonlinear](http://scikit-learn.org/stable/modules/generated/sklearn.manifold.LocallyLinearEmbedding.html) ones as well.
You may create a [graph](https://en.wikipedia.org/wiki/Graph_theory) out of your data and try graph analysis as well. Each customer is a node, so is each product when each edge shows a purchase ([directed](https://en.wikipedia.org/wiki/Directed_graph) from customer to product) and the [weight](https://en.wikipedia.org/wiki/Glossary_of_graph_theory_terms#weighted_graph) of that edge is the price and/or quantity. Then you end up with a [bipartite graph](https://en.wikipedia.org/wiki/Bipartite_graph). [Try some analysis](http://snap.stanford.edu/class/cs224w-2016/projects/cs224w-83-final.pdf) on this graph and see if it helps.
Hope it helps and good luck!
|
114176
|
1
|
114191
| null |
0
|
42
|
I have a multi-class classification problem that is imbalanced. The task is about animal classification.
Since it's imbalanced, I am using macro-F1 metric and the current result that I have is: `51.59`.
The issue that I am facing is that, this task will be considered as a recommending task, where the accuracy of TOP-N is needed. When I compute the TOP-N accuracy, I get the following: `Top-1: 88.58 Top-2: 94.86 Top-3: 96.48`.
As you can see, the accuracy for the TOP-N is totally biased to the majority class, where the gap between the macro-F1 and top-1 is big.
My question is, how can I consider the class imbalance when I calculate the Top-N accuracy?
|
Top N accuracy for an imbalanced multiclass classification problem
|
CC BY-SA 4.0
| null |
2022-09-06T22:45:59.557
|
2022-09-07T05:48:41.603
| null | null |
49456
|
[
"machine-learning",
"classification",
"class-imbalance",
"accuracy"
] |
Sounds like your minority class is being poorly predicted and affecting your macro F1 score (see [this answer for more info](https://datascience.stackexchange.com/questions/40900/whats-the-difference-between-sklearn-f1-score-micro-and-weighted-for-a-mult)).
From the [sklearns top k accuracy score documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.top_k_accuracy_score.html) you can pass a list of weights to 'rebalance' the score.
>
sample_weight, array-like of shape (n_samples,), default=None
Sample weights. If None, all samples are given the same weight.
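A minimal sketch of how that could look, assuming `y_true` holds the integer class labels and `y_score` holds the per-class probabilities from your model (both names are placeholders for your own arrays):
```
from sklearn.metrics import top_k_accuracy_score
from sklearn.utils.class_weight import compute_sample_weight

# y_true: shape (n_samples,), y_score: shape (n_samples, n_classes)
# weight each sample inversely proportional to its class frequency
weights = compute_sample_weight(class_weight="balanced", y=y_true)

top2_balanced = top_k_accuracy_score(y_true, y_score, k=2, sample_weight=weights)
```
This way the minority classes contribute as much to the top-k score as the majority class.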
|
Imbalanced data causing mis-classification on multiclass dataset
|
Nice question!
## Some Remarks
For imbalanced data you have different approaches. The most well-established one is [resampling](https://datascience.stackexchange.com/questions/27671/how-do-you-apply-smote-on-text-classification/27758#27758) (oversampling small classes / undersampling large classes). The other one is to make your classification hierarchical, i.e. classify the large classes against all others and then classify the small classes in a second step (the classifiers are not supposed to be the same; try model selection strategies to find the best).
## Practical Answer
I have got acceptable results without resampling the data! So try it but later improve it using resampling methods (statistically they are kind of A MUST).
TFIDF is good for such a problem. Classifiers should be selected through model selection but my experience shows that Logistic Regression and Random Forest work well on this specific problem (however it's just a practical experience).
You may follow the code below as it worked reasonably well; then you may try modifying it to improve your results:
```
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier

train = pd.read_csv(...)
test = pd.read_csv(...)

# TFIDF Bag Of Words Model For Text Corpus. Up to 4-grams and 50k Features
# (pass the raw text column of your dataframes to the vectorizer)
vec = TfidfVectorizer(ngram_range=(1,4), max_features=50000)
TrainX = vec.fit_transform(train)
TestX = vec.transform(test)

# Initializing Base Estimators
clf1 = LogisticRegression()
clf2 = RandomForestClassifier(n_estimators=100, max_depth=20, max_features=5000, n_jobs=-1)

# Soft Voting Classifier For Each Column
clf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2)], voting='soft', n_jobs=-1)
clf = clf.fit(TrainX, TrainY)
preds = clf.predict_proba(TestX)[:,1]
```
Please note that the code is abstract, so TrainX, TrainY, TestX, etc. should be properly defined by you.
## Hints
Be careful about what counts as a stop word. In practice, many people (including myself!) make the mistake of removing stop words according to pre-defined lists. That is not right!
Stop words are corpus-sensitive, so you need to remove stop words according to information-theoretic concepts (to keep it simple, TFIDF kind of ignores your corpus-specific stop words. If you need more explanation please let me know and I will update my answer).
VotingClassifier is a meta-learning strategy in the family of [Ensemble Methods](https://en.wikipedia.org/wiki/Ensemble_learning). They take benefit from different classifiers. Try them as they work pretty well in practice.
The voting scheme simply takes the results of different classifiers and returns the output of the one which has the highest probability of being right. So, kind of a democratic approach against dictatorship ;)
Hope it helps!
|
114185
|
1
|
114188
| null |
0
|
233
|
I have an object detection model with my labels and images. I am trying to use the tensorflow ranking metric for MAP, [https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/metrics/MeanAveragePrecisionMetric](https://www.tensorflow.org/ranking/api_docs/python/tfr/keras/metrics/MeanAveragePrecisionMetric). The metric is used when I compile the model but this is the result I get:
```
Epoch 2/220
92/92 [==============================] - 22s 243ms/step - loss: 0.0027 - mean_average_precision_metric: 0.0000e+00 - val_loss: 0.0019 - val_mean_average_precision_metric: 0.0000e+00
Epoch 3/220
92/92 [==============================] - 22s 245ms/step - loss: 0.0014 - mean_average_precision_metric: 0.0000e+00 - val_loss: 7.5579e-04 - val_mean_average_precision_metric: 0.0000e+00
Epoch 4/220
92/92 [==============================] - 23s 247ms/step - loss: 8.7288e-04 - mean_average_precision_metric: 0.0000e+00 - val_loss: 6.7357e-04 - val_mean_average_precision_metric: 0.0000e+00
Epoch 5/220
92/92 [==============================] - 23s 248ms/step - loss: 7.3901e-04 - mean_average_precision_metric: 0.0000e+00 - val_loss: 5.3464e-04 - val_mean_average_precision_metric: 0.0000e+00
```
My labels and images are all normalized as well according to my image dimensions.
```
train_images /= 255
val_images /= 255
test_images /= 255
train_targets /= TARGET_SIZE
val_targets /= TARGET_SIZE
test_targets /= TARGET_SIZE
```
```
model.compile(loss='mse', optimizer='adam', metrics=[tfr.keras.metrics.MeanAveragePrecisionMetric()])
```
Could I be using the metric in the wrong way, or is it perhaps not meant for my data?
|
Why does my mean average precision metric show as 0.000e+00?
|
CC BY-SA 4.0
| null |
2022-09-06T23:22:45.097
|
2022-09-07T05:19:17.710
| null | null |
138954
|
[
"python",
"tensorflow",
"machine-learning-model"
] |
I would look into whether your loss function is correct. Mean squared error is a regression loss (and precision is a classification metric). Something like [categorical cross entropy](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/CategoricalCrossentropy) is probably more suited.
Either way, as a sanity check, you can always run the model for, say, 10 epochs, then run predictions and calculate the precision manually (or with [sklearn's built-in method](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html)).
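For instance, a minimal sanity-check sketch for a binary setting (here `y_true` are 0/1 ground-truth labels and `y_scores` are the model's confidence scores; both names are placeholders, and for a full object-detection mAP you would still need to match boxes by IoU first):
```
from sklearn.metrics import average_precision_score

ap = average_precision_score(y_true, y_scores)
print(f"average precision: {ap:.3f}")
```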
|
research papers on mean average precision
|
Here is a paper which answers your question: [Beinan Wang - A PARALLEL IMPLEMENTATION OF COMPUTING MEAN AVERAGE PRECISION](https://arxiv.org/pdf/2206.09504v1.pdf).
It has the implementation in pseudocode.
|
114190
|
1
|
114472
| null |
0
|
303
|
I have a csv file that is used as a pandas dataframe, now I only need to insert a dummy row before the dataframe starts like in the screenshot denoted as "Label". How can I do that?
[](https://i.stack.imgur.com/6UB60.png)
|
How to Insert a Row before Dataframe in Pandas
|
CC BY-SA 4.0
| null |
2022-09-07T05:44:29.813
|
2022-09-16T18:30:21.403
| null | null |
54586
|
[
"pandas",
"dataframe",
"csv"
] |
I found an answer by someone on SO, but I can't find it now.
```
df.columns = pd.MultiIndex.from_tuples(
zip([' ']*256,
df))
```
They posted something like this, but doing it directly will result in an empty row below the column-names row, so I did the following and it worked for me:
```
df.index.name = 'time'
df.index +=1
df.reset_index(inplace=True)
df.columns = pd.MultiIndex.from_tuples(
zip([' ']*256,
df))
df.to_csv(values["-OUTPUT_PATH-"]+'/converted.csv', index=False)
```
|
How do I add a column to a Pandas dataframe based on other rows and columns in the dataframe?
|
One can create a new dataframe having only first entries of new ID, copying num to new column y and merging this with original dataframe:
```
newdf = df.drop_duplicates('id')
newdf['y'] = newdf['num']
newdf = df.merge(newdf, how='outer')
```
However, it will put NaN for non-first id rows:
```
print(newdf)
id num time y
0 A 10 1 10.0
1 A 11 2 NaN
2 A 12 3 NaN
3 B 20 1 20.0
4 B 21 2 NaN
5 B 22 3 NaN
```
One can change these NaN to the previous values with the following simple loop:
```
tempval = 0 # a variable to store value temporarily
newy=[]
for x in newdf['y']:
if not pd.isnull(x): tempval = x
newy.append(tempval)
newdf['y'] = newy
```
The desired dataframe is obtained:
```
print(newdf)
id num time y
0 A 10 1 10.0
1 A 11 2 10.0
2 A 12 3 10.0
3 B 20 1 20.0
4 B 21 2 20.0
5 B 22 3 20.0
```
Actually, this question belongs to [https://stackoverflow.com/](https://stackoverflow.com/)
|
114234
|
1
|
114239
| null |
1
|
213
|
As part of a data preprocessing step, I'm trying to create a "master pipeline" from two separate pipelines, one for numerical features and one for datetime features. The numerical pipeline removes outlier rows based on an IQR filter, whereas the datetime pipeline doesn't remove any rows, only feature engineers day of week.
The issue arrives when I try to combine these into a master pipeline that performs both of these steps. I've tried using both `ColumnTransformer` and `FeatureUnion`, but both output the same error (7991 is the output size after removing numerical outliers, 13400 is the output size of the datetime pipeline):
```
ValueError: all the input array dimensions for the concatenation axis must match exactly, but along dimension 0, the array at index 0 has size 7991 and the array at index 1 has size 13400
```
These are my pipeline objects:
```
class FeatureSelector(BaseEstimator, TransformerMixin):
def __init__(self, feature_names):
self.feature_names = feature_names
def fit(self, X, y=None):
return self
def transform(self, X):
return X[self.feature_names]
class IQRFilter(BaseEstimator,TransformerMixin):
def __init__(self,factor=2):
self.factor = factor
def outlier_detector(self,X,y=None):
X = pd.Series(X).copy()
q1 = X.quantile(0.25)
q3 = X.quantile(0.75)
iqr = q3 - q1
self.lower_bound.append(q1 - (self.factor * iqr))
self.upper_bound.append(q3 + (self.factor * iqr))
def fit(self,X,y=None):
self.lower_bound = []
self.upper_bound = []
X.apply(self.outlier_detector)
return self
def transform(self,X,y=None):
X = pd.DataFrame(X).copy()
for i in range(X.shape[1]):
x = X.iloc[:, i].copy()
x[(x < self.lower_bound[i]) | (x > self.upper_bound[i])] = 'OUTLIER'
X.iloc[:, i] = x
return X
class RemoveIQROutliers(BaseEstimator, TransformerMixin):
def __init__(self):
pass
def fit(self, X, y=None):
return self
def transform(self, X):
for col in X.columns:
X = X[X[col] != 'OUTLIER']
return X
class ExtractDay(BaseEstimator, TransformerMixin):
def __init__(self):
pass
def is_business_day(self, date):
return bool(len(pd.bdate_range(date, date)))
def fit(self, X, y=None):
return self
def transform(self, X):
X['day_of_week_wdd'] = X['wanted_delivery_date'].dt.dayofweek
return X
```
And these are my two pipelines:
```
numerical_pipeline = Pipeline([
('FeatureSelector', FeatureSelector(num_cols)),
('iqr_filter', IQRFilter()),
('remove_outliers', RemoveIQROutliers()),
('imputer', SimpleImputer(strategy='median')),
('std_scaler', StandardScaler())
])
date_pipeline = Pipeline([
('FeatureSelector', FeatureSelector(date_cols)),
('Extract_day', ExtractDay()),
])
```
Trying to combine them like this causes the mentioned error message:
```
full_pipeline = Pipeline([
('features', FeatureUnion(transformer_list=[
('numerical_pipeline', numerical_pipeline),
('date_pipeline', date_pipeline)
]))
])
full_pipeline.fit_transform(X_train)
```
What is the correct way to go about this?
|
Combining sklearn pipelines with different output shape
|
CC BY-SA 4.0
| null |
2022-09-08T08:54:01.047
|
2022-09-08T13:46:44.893
| null | null |
140063
|
[
"scikit-learn",
"preprocessing",
"pipelines"
] |
`sklearn` doesn't yet really provide a good way to remove rows in pipelines. [SLEP001 proposes it](https://scikit-learn-enhancement-proposals.readthedocs.io/en/latest/slep001/proposal.html#examples-of-usecases-targetted). `imblearn` has some ways to make this work, but it's semantically specific to resampling data. If you don't need to modify the target (if you'll only use this transformer on `X`, and not in a pipeline with a supervised model), you can make this work. One more caveat: you probably won't want to throw away outliers in production, so consider how you'll rework this transformer after training.
The point is that you should wait to remove the rows with `OUTLIER` entries until after you've joined the datetime features back on. (One alternative is to try to pass the information about which rows were removed to the datetime processor, but that would then require a custom alternative to `FunctionUnion` or `ColumnTransformer`.) Unfortunately, despite all of your custom transformers returning dataframes, the ways to recombine them (`ColumnTransformer` and `FeatureUnion`) won't preserve that yet (but see [pandas-out PR](https://github.com/scikit-learn/scikit-learn/pull/23734) and some linked issues/PRs). Until that's remedied, your best bet might be to modify your transformers to accept an `__init__` parameter `columns` on which to operate, removing the `FeatureSelector` step.
```
outlier_prune = Pipeline([
('iqr_filter', IQRFilter(columns=num_cols)),
('remove_outliers', RemoveIQROutliers()),
]) # important: the output of this is a frame
numerical_pipeline = Pipeline([
('imputer', SimpleImputer(strategy='median')),
('std_scaler', StandardScaler())
])
preproc_pipeline = ColumnTransformer([
('numerical_pipeline', numerical_pipeline, num_cols),
('date_eng', ExtractDay(), date_cols),
])
full_pipeline = Pipeline([
('outliers', outlier_prune),
('preproc', preproc_pipeline),
])
```
[](https://i.stack.imgur.com/ml8mX.png)
|
sklearn - How to create a sequential pipeline
|
When you want to do sequential transformations, you should use `Pipeline`.
```
imp_std = Pipeline(
steps=[
('impute', SimpleImputer(strategy='median')),
('scale', StandardScaler()),
]
)
ColumnTransformer(
remainder='passthrough',
transformers=[
('imp_std', imp_std, ['feat_1', 'feat_2']),
('std', StandardScaler(), ['feat_3']),
]
)
```
or
```
imp = ColumnTransformer(
remainder='passthrough',
transformers=[
('imp', SimpleImputer(strategy='median'), ['feat_1', 'feat_2']),
]
)
Pipeline(
steps=[
('imp', imp),
('std', StandardScaler()),
]
)
```
|
114240
|
1
|
114242
| null |
2
|
107
|
I am looking for a suggestion. Is it possible to implement the data preprocessing steps like missing value imputation, outlier detection, normalization, label encoding in parallel? Can I implement cuda/openmp/mpi programming for data preprocessing?
Thank you.
|
Parallel Data preprocessing
|
CC BY-SA 4.0
| null |
2022-09-08T14:07:15.880
|
2022-09-08T15:14:29.047
| null | null |
63745
|
[
"machine-learning",
"parallel",
"cuda"
] |
Yes - there are a lot of approaches, depending on the language and packages you are using.
Assuming Python:
- Multiprocessing: Dask, pool.map, modin, pandarallel, spark
- GPU: CuDF from RAPIDS
- Multi-GPU: Cudf-Dask
If you have an Nvidia GPU, I would highly recommend the RAPIDS framework; it has plotting, machine learning, dataframes, etc.
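As one hedged example of the multiprocessing route, here is a sketch using only pandas and the standard library (the columns and the preprocessing steps are made up; note that chunk-wise statistics differ from global ones, so for a globally consistent scaler you would compute the statistics once and pass them to the workers):
```
import numpy as np
import pandas as pd
from multiprocessing import Pool

def preprocess_chunk(chunk):
    # missing-value imputation + simple min-max scaling, per chunk
    chunk = chunk.fillna(chunk.median(numeric_only=True))
    num_cols = chunk.select_dtypes("number").columns
    chunk[num_cols] = (chunk[num_cols] - chunk[num_cols].min()) / (chunk[num_cols].max() - chunk[num_cols].min())
    return chunk

if __name__ == "__main__":
    df = pd.DataFrame({"a": np.random.rand(1_000_000), "b": np.random.rand(1_000_000)})
    df.loc[df.sample(frac=0.01).index, "a"] = np.nan     # inject some missing values
    chunks = np.array_split(df, 8)                       # one chunk per worker
    with Pool(processes=8) as pool:
        df_clean = pd.concat(pool.map(preprocess_chunk, chunks))
```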
|
How to preprocess data?
|
You have a dataset with both continuous and categorical data.
1. Centre the data
For numerical variables, centering is usually done by subtracting the mean of the column, and sometimes the minimum value of the column.
2. Scaling
Scaling the data means converting the range of the data to between 0 and 1. It is done by different methods: some divide by the range, others divide by the variance (unit variance).
3. Skewness
Check the skewness of the variables in the given data. If the skewness is not around zero, try data transformations such as exponential, Box-Cox or logarithmic transformations.
4. One-of-n encoding
This is also called one-hot encoding; it is used to encode categorical (nominal) variables. Alternatively you can use equilateral encoding, etc. Ordinal variables can be encoded in increasing or decreasing order.
5. Feature selection and importance
If you want to remove unwanted variables from your data, use a feature selection algorithm such as recursive feature elimination or tree-based feature importance.
6. Dimensionality reduction
To reduce the number of dimensions in your data, use algorithms like Principal Component Analysis (unsupervised) or Partial Least Squares (supervised), and select the number of dimensions that describes most of the variance of your data.
7. Removal of outliers
Outliers are the portion of your data you may not want to explore in some situations. To remove outliers you can use techniques like the spatial sign, among others.
8. Missing values
Missing values are the most common problem in data science. To overcome them you can impute values using different approaches, such as kNN imputation, or build a model to predict the missing data from the other variables.
9. Binning data
This converts continuous data into categorical or interval data; it sounds interesting but often leads to loss of valuable information.
These are some of the important and basic preprocessing steps for the majority of algorithms, but some algorithms do not need all of them; for example, Random Forest accepts factor values (so no need for one-of-n encoding) and XGBoost accepts missing values.
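Many of these steps can be expressed compactly with scikit-learn; a minimal sketch (the column names are hypothetical):
```
from sklearn.compose import ColumnTransformer
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler, OneHotEncoder

numeric_cols = ["age", "income"]          # hypothetical numerical columns
categorical_cols = ["city", "segment"]    # hypothetical nominal columns

numeric_pipe = Pipeline([
    ("impute", SimpleImputer(strategy="median")),   # step 8: missing values
    ("scale", StandardScaler()),                    # steps 1-2: centre and scale
])

preprocess = ColumnTransformer([
    ("num", numeric_pipe, numeric_cols),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),   # step 4: one-hot encoding
])

# X_processed = preprocess.fit_transform(X)   # X is your raw feature DataFrame
```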
|
114241
|
1
|
114277
| null |
0
|
837
|
I am trying to upgrade code for a custom environment written in gym==0.18.0 to the latest version of gym. My current action space and observation space are defined as:
```
self.observation_space = np.ndarray(shape=(24,))
self.action_space = [0, 1]
```
I understand that in the new version the spaces have to be inherited from gym.spaces class. Can someone help me on how to rewrite my spaces (observation/action) to implement the gym.spaces?
Thanks
|
How to create custom action space in openai.gym
|
CC BY-SA 4.0
| null |
2022-09-08T14:35:12.237
|
2022-09-09T18:11:54.463
| null | null |
99761
|
[
"machine-learning",
"reinforcement-learning",
"openai-gym"
] |
In the case of a 1D observation space, it could be something like:
```
import numpy as np
from gym import spaces

self.observation_shape = (24, 1, 3)
self.observation_space = spaces.Box(low=np.zeros(self.observation_shape), high=np.ones(self.observation_shape), dtype=np.float16)
self.action_space = spaces.Discrete(3)
```
See also: [https://blog.paperspace.com/creating-custom-environments-openai-gym/](https://blog.paperspace.com/creating-custom-environments-openai-gym/)
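For the specific case in the question (a 24-dimensional observation and two discrete actions), a hedged sketch could look like the following; the infinite bounds are placeholders you should tighten to your data's actual range:
```
import gym
import numpy as np
from gym import spaces

class MyEnv(gym.Env):
    def __init__(self):
        super().__init__()
        # 24 real-valued observations
        self.observation_space = spaces.Box(low=-np.inf, high=np.inf,
                                            shape=(24,), dtype=np.float32)
        # two discrete actions: 0 and 1
        self.action_space = spaces.Discrete(2)
```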
|
openai gym - what is an agent I can use with a multi-discrete action space?
|
OpenAI Baselines - or for me even better, Stable Baselines - has many model options which can handle MultiDicrete Action and/or Observation spaces. Building a custom gym environment is also quite straightforward.
|
114269
|
1
|
114278
| null |
1
|
52
|
I'm working with genomics data; I have a multi-class label with a matrix of numeric values (rows are the samples). Each sample may have different metadata which are not being used for training nor testing. For example, each sample may be treated with a dosage value of 50 or 100, etc. The classification model works well using LDA or random forest (RF); I am open to using any model.
I have about four (dosage, tissue, etc) of these metadata and would like to know which of them are influencing the model and by how much.
|
How do I find out which metadata is affecting/influencing the classification model?
|
CC BY-SA 4.0
| null |
2022-09-09T14:38:23.833
|
2022-09-09T18:45:52.830
|
2022-09-09T15:37:19.127
|
139792
|
139792
|
[
"machine-learning",
"classification"
] |
3 options come to mind that address your problem directly, in priority order:
- Add the meta data as features to your dataset, if the feature importance is high for those features - then you are proving some relationship.
- Treat the meta data as targets, "can you use your current features to predict this?"
- Plot your results by metadata feature. I would start with a parallel coordinates plot: https://plotly.com/python/parallel-coordinates-plot/ where each label is a metadata feature + one of the labels being your target (see the dummy code below).
- In addition to this, you can run a large number of statistical tests to quantify this relationship.
Dummy code for plot:
```
import plotly.express as px
df = px.data.iris()
fig = px.parallel_coordinates(df, color="species_id", labels={"species_id": "Target",
"sepal_width": "dosage", "sepal_length": "tissue",
"petal_width": "day_of_week", "petal_length": "time", },
color_continuous_scale=px.colors.diverging.Tealrose,
color_continuous_midpoint=2)
fig.update_layout(font=dict(size=22))
fig.show()
```
|
How to add incorporate meta data into text classification?
|
Some models cannot really handle this, while others lend themselves for it easily. I'll explain two approaches that you could use:
Naive Bayes
With Naive Bayes you can use other categorical values as well as your normal n-grams or sparse bag of words vectors. Just add them one-hot encoded to your features and it is also incorporated. With numerical features you would need to use something like Gaussian Naive Bayes, to fit a distribution to your features per target class, then you can use the likelihoods of these features per class to compute the probabilities.
Neural network
If you use a neural network approach like CNNs or RNNs, you can add any type of feature representation network and concatenate it somewhere in your original network. In your case you would have a softmax at the end of your RNN. Before this, concatenate the output of your 'normal-feature' neural network, add some dense layers and feed this to your softmax output layer. This way you can train your model end-to-end and it will learn important interactions as well.
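A minimal sketch of that neural-network variant, assuming a Keras LSTM text branch and a 4-dimensional metadata vector (all dimensions, vocabulary size and class count are made up):
```
import tensorflow as tf

text_in = tf.keras.Input(shape=(None,), dtype="int32", name="tokens")   # token ids
meta_in = tf.keras.Input(shape=(4,), name="metadata")                   # numeric / one-hot metadata

x = tf.keras.layers.Embedding(input_dim=20000, output_dim=64)(text_in)
x = tf.keras.layers.LSTM(64)(x)                        # text representation
m = tf.keras.layers.Dense(16, activation="relu")(meta_in)

h = tf.keras.layers.Concatenate()([x, m])              # merge text + metadata
h = tf.keras.layers.Dense(32, activation="relu")(h)
out = tf.keras.layers.Dense(3, activation="softmax")(h)

model = tf.keras.Model(inputs=[text_in, meta_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```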
|
114292
|
1
|
114293
| null |
1
|
371
|
I have a CSV file (such as test1.csv). There are tabular values like the following.
```
S1 S2 S3
4.6 3.2 2.1
3.2 4.3 5.4
1.4 3.4 6.1
```
I want to do mathematical operations, such as `R1=(S1+S2)/1.5` and `R2=(S2+S3)/2.5`. Then I want to save the results `R1` and `R2` in a new CSV file (such as test2.csv). I tried with the following code. That does not work.
```
import pandas as pd
df = pd.read_csv('test1.csv')
df2['R1'] = (df['S1'] + df['S2'])/1.5
df2['R2'] = (df['S2'] + df['S3'])/2.5
df2.to_csv('test2.csv')
```
|
How can I do mathematical operations to two columns of a CSV file and save the result in a new CSV file?
|
CC BY-SA 4.0
| null |
2022-09-10T14:11:05.750
|
2022-09-30T05:29:38.090
|
2022-09-10T14:18:37.327
|
140237
|
140237
|
[
"python",
"pandas"
] |
You did not define `df2` before attempting to use it.
Try this:
```
import pandas as pd
df = pd.read_csv('test1.csv')
df2 = pd.DataFrame({})
df2['R1'] = (df['S1'] + df['S2'])/1.5
df2['R2'] = (df['S2'] + df['S3'])/2.5
df2.to_csv('test2.csv')
```
|
How to create column for my csv file in python
|
You can use Pandas for this; your file format isn't exactly a comma-separated values file, but you can still use the pandas [read_csv()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.read_csv.html) method.
Suppose your file name is test_file
```
import pandas as pd
df = pd.read_csv('test_file', sep=':', header=None)
>>> df
0 1
0 I 30n
1 J 0n
2 J 0n
3 U 1000n
4 C 0n
5 I 12n
6 I 10n
7 I 10n
8 I 10n
9 I 10n
```
Then you can use the [pivot()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.pivot.html#pandas-dataframe-pivot) function:
```
>>> df.pivot(columns=0)
\ 1
0 C I J U
0 NaN 30n NaN NaN
1 NaN NaN 0n NaN
2 NaN NaN 0n NaN
3 NaN NaN NaN 1000n
4 0n NaN NaN NaN
5 NaN 12n NaN NaN
6 NaN 10n NaN NaN
7 NaN 10n NaN NaN
8 NaN 10n NaN NaN
9 NaN 10n NaN NaN
```
If your intention is to write it back to a file you can use the [to_csv()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_csv.html) method.
```
# this row eliminates the level headers of the columns at level 0
>>> df.columns=df.columns.get_level_values(1)
>>> df
0 C I J U
0 30n
1 0n
2 0n
3 1000n
4 0n
5 12n
6 10n
7 10n
8 10n
9 10n
>>> df.to_csv('new_test_file', index=False)
```
OR
If you wish to make it less sparse, you can first turn it into a dict and then back to DataFrame:
```
>>> _dict = df.groupby(0)[1].apply(list).to_dict()
>>> _dict
{'C': ['0n'], 'I': ['30n', '12n', '10n', '10n', '10n', '10n'], 'J': ['0n', '0n'], 'U': ['1000n']}
>>> pd.DataFrame.from_dict(_dict, orient='index')
0 1 2 3 4 5
C 0n None None None None None
I 30n 12n 10n 10n 10n 10n
J 0n 0n None None None None
U 1000n None None None None None
>>> pd.DataFrame.from_dict(_dict, orient='index').T
C I J U
0 0n 30n 0n 1000n
1 None 12n 0n None
2 None 10n None None
3 None 10n None None
4 None 10n None None
5 None 10n None None
```
[pd.Series.to_dict()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.to_dict.html#pandas-series-to-dict)
[pd.DataFrame.from_dict()](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.from_dict.html#pandas-dataframe-from-dict)
[pd.DataFrame.T](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.transpose.html#pandas-dataframe-transpose)
|
114294
|
1
|
114301
| null |
0
|
354
|
I read [here](https://datascience.stackexchange.com/questions/48693/perform-k-means-clustering-over-multiple-columns) how to show the number of clusters over $n$ columns.
I would like to know how to get in a table, the values of the clusters centers. Could someone help me with this?
|
Perform k-means clustering over multiple columns and get the cluster center values?
|
CC BY-SA 4.0
| null |
2022-09-10T15:07:59.933
|
2022-09-18T23:03:23.733
|
2022-09-18T23:03:23.733
|
29169
|
140272
|
[
"python",
"clustering",
"k-means"
] |
`sklearn.clusters.KMeans` has an attribute `cluster_centers_`, which stores the array of cluster centers.
You can add them to the dataframe as new columns this way:
```
clusters = KMeans(n_clusters = n)
predict = clusters.fit_predict(data)
centers = pd.DataFrame(clusters.cluster_centers_[predict, :])
centers.index = data.index
data = pd.concat([data, centers], axis=1)
```
|
Kmeans clustering with multiple columns containing strings
|
After some more research we found this library: [https://github.com/nicodv/kmodes](https://github.com/nicodv/kmodes).
The library k-modes is used for clustering categorical variables. It defines clusters based on the number of matching categories between data points. (This is in contrast to the more well-known k-means algorithm, which clusters numerical data based on Euclidean distance.) The k-prototypes algorithm combines k-modes and k-means and is able to cluster mixed numerical / categorical data.
Because the dataframe contains categorical data we can't visualize it in a scatterplot. So I added the number representing the cluster the row was assigned to, for every row to get some form of visualization.
Normally you can only cluster ordinal data, because clustering happens based on distance. So I don't know to what extent this is reliable.
|
114297
|
1
|
114299
| null |
2
|
596
|
I like to understand what is the accuracy of an imbalanced dataset.
Let's suppose we have a medical dataset and we want to predict the disease among the patients. Say, in an existing dataset 95% of patients do not have the disease, and 5% of patients have the disease. So clearly, it is an imbalanced dataset. Now, assume our model predicts that all 100 out of 100 patients have no disease.
Accuracy means = (TP+TN)/(TP+TN+FP+FN)
If the model predicts 100 patients do not have a disease and we are predicting disease among the patient then True positive refers to the disease among the patient and True negative refers to no disease among the patient.
In that case accuracy should be (0+100)/(0+100+0+0) = 1.
We are going to predict how many patients have a disease so if we get accuracy 1, does that mean 100% of patients have the disease?
I am taking the example from [5 Techniques to Handle Imbalanced Data For a Classification Problem](https://www.analyticsvidhya.com/blog/2021/06/5-techniques-to-handle-imbalanced-data-for-a-classification-problem/#:%7E:text=Imbalanced%20data%20refers%20to%20those,very%20low%20number%20of%20observations.) . I am not sure at the time of accuracy calculation why they calculate it as (0+95)/(0+95+0+5) = 0.95, if they have already described that their model predicts `all 100 out of 100 patients have no disease.`
I hope I clarified my question. Thank you.
|
How to calculate accuracy of an imbalanced dataset
|
CC BY-SA 4.0
| null |
2022-09-10T18:05:12.993
|
2022-09-10T18:19:39.090
| null | null |
63745
|
[
"machine-learning",
"classification",
"class-imbalance",
"imbalanced-learn"
] |
Accuracy is the number of correct predictions out of the number of possible predictions. In many regards, it is like an exam score: you had an opportunity to get $100\%$ of the points and got $97\%$ or $79\%$ or whatever. The class ratio is not a factor.
In your example, you had $95$ negative patients and $5$ positive. You predicted $100$ negative patients, meaning that you got $95$ correct and $5$ incorrect for an accuracy of $95\%$.
Note that accuracy is a surprisingly problematic measure of performance, and this is true [even when the classes are naturally balanced](https://stats.stackexchange.com/a/312787/247274).
With imbalance, however, accuracy has the potential to mislead in a way that is not present in many other measures of performance, and your example is a good demonstration of that. All your model does is predict the majority class; it does nothing clever. However, your model achieves an accuracy of $95\%$, which sounds like a high $\text{A}$ in school that indicates strong performance.
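A quick sketch reproducing the example with scikit-learn, contrasting plain accuracy with balanced accuracy:
```
import numpy as np
from sklearn.metrics import accuracy_score, balanced_accuracy_score

y_true = np.array([0] * 95 + [1] * 5)   # 95 healthy, 5 diseased
y_pred = np.zeros(100, dtype=int)       # model predicts "no disease" for everyone

print(accuracy_score(y_true, y_pred))            # 0.95 -- looks impressive
print(balanced_accuracy_score(y_true, y_pred))   # 0.5  -- no better than chance
```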
|
Compare model accuracy when training with imbalanced and balanced data
|
Accuracy is the worst metric you could use for an imbalanced dataset. If you choose accuracy as a metric when you have class imbalance, you will get very high accuracy. This is because the majority class has a higher frequency (or has more number of records) and hence the model will predict the majority class as the prediction majority of the time.
The metric you choose depends on what kind of dataset you have. If your data has class imbalance, you can go for F1 score, AUC score, True positive/True negative rate. They will give a more realistic score rather than accuracy.
Another point to remember is that if you want to balance your dataset, never use downsampling as it results in data loss which is a BIG NO NO. Always use oversampling.
A word of caution though. Some experts believe that undersampling or oversampling is not the way to go when dealing with imbalance. Rather choosing the right metric is enough to deal with it. But other experts say that SMOTE is the way to go. It depends on you on what you think is right although comparing models like you are doing is probably a safe bet.
Other than that you are correct in your procedure to compare both the models.
|
114300
|
1
|
114305
| null |
3
|
210
|
I have a test dataset. The dataset is an imbalanced dataset. The total training instances for the dataset is 543 among them minority class(yes) is 75 and the majority class(No) is 468. The class of interest is minority class(yes). I used the Naive Bayes classifier for prediction. The confusion matrix I got
```
TP TN FP FN
33 391 77 42
```
The total instances for the No class are 468. The classifier truly predicted 391 instances as negative. However, the total negative class that the classifier predicts is 391+42 = 433. Those 42 false negatives are actually the positive class but the classifier predicted them as negative. Am I right with this explanation?
Secondly, the classifier predicted 33 instances as true positives. However, the total predicted positive class is TP+FP = 33+77 = 110. Now these false positives are actually the negative class.
So, if I calculate TP+FN I will get 33+42 = 75 which is the total number of positive instances in the test set.
If I calculate TN+FP I will get 391+77 = 468, which is the total number of negative instances in the test set.
Now, the precision is True positive/(True positive + False positive). As I have mentioned earlier, false positives are nothing but negative instances, so my question is: what does precision actually mean?
For recall it is True positive/(True positive + False negative). As I have mentioned earlier, false negatives are positive instances, and (True positive + False negative) is the total number of positive instances. Now, what does it mean to divide True positive by the total number of positive instances?
Lastly, in the class imbalance problem if the majority class is our class of interest which metric (precision and recall) should we consider?
Thank you.
|
Precision, recall and importance of them in the imbalance problem
|
CC BY-SA 4.0
| null |
2022-09-10T19:44:11.750
|
2022-09-11T02:23:44.553
| null | null |
63745
|
[
"machine-learning",
"classification",
"class-imbalance",
"metric",
"confusion-matrix"
] |
>
The total instances for No class are 468, The classifier truly predicted 391 instances as negative. However the total negative class that the classifier predict is 391+42 = 433, Those, 42 false negatives are actually positive class but the classifier predict them as negative. Am I right with this explanation?
Yes, this is correct.
>
Now, the precision is True positive/(True positive + False positive), As I have mentioned earlier False positive is noting but some negative instances, So, my question is what does precision actually mean?
Precision represents the proportion of correct instances among the instances predicted as positive. In other words, this is the probability that a case predicted positive is truly positive.
>
For recall is True positive/(True positive + False negative), As I have mentioned earlier False negative means positive instances. (True positive+Flase negative ) total number of positive instances. Now, what does it mean by True positive/ Total number of positive instances?
Recall is the proportion of instances predicted positive by the system among all the truly positive instances. In other words, it represents the probability that the system correctly "finds" that an instance is positive.
>
Lastly, in the class imbalance problem if the majority class is our class of interest which metric (precision and recall) should we consider?
It's pretty rare that the majority class is of interest; usually the minority class is chosen as the positive class. But anyway this wouldn't change the answer: one should use precision and recall (or F1-score if a single value is needed), though in this case one should report the values with higher numerical precision (more digits after the decimal point).
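For the confusion matrix in the question, the numbers work out as follows (a quick sketch):
```
TP, TN, FP, FN = 33, 391, 77, 42

precision = TP / (TP + FP)   # 33 / 110 = 0.30: of the cases predicted "yes", 30% really are
recall    = TP / (TP + FN)   # 33 / 75  = 0.44: the classifier finds 44% of the true "yes" cases
f1        = 2 * precision * recall / (precision + recall)   # ~0.36
```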
|
Usage of Precision Recall on an unbalanced dataset
|
When evaluating your algorithms, especially when your dataset is unbalanced, you should use more metrics than just accuracy. The accuracy is how many examples you have correctly identified in total. As you have seen if you have an unbalanced dataset where 0.5% of your instances are 1's then this will result in 99.5% accuracy if you blindly set all your outputs as zeroes. This is obviously wrong albeit the high accuracy. The accuracy is calculated as
$Accuracy = \frac{\sum{TP} + \sum{TN}}{\sum{TP} + \sum{TN} + \sum{FP} + \sum{FN}}$
where TP is true positive, TN is true negatives, FP is false positives and FN is false negatives.
If you want to capture the performance of your unbalanced dataset you should look into the percentage of FP and FN you are calculating. You can do this using the sensitivity and the specificity. Calculate the sensitivity as
$Sensitivity = \frac{\sum{TP} }{\sum{TP} + \sum{FN} }$
and the specificity as
$Specificity = \frac{\sum{TN} }{\sum{TN} + \sum{FP} }$.
An ideal classifier should have the accuracy, specificity and sensitivity all be 1. This would mean every sample is correctly classified. In your case where you are getting very high false negatives, you will see that your sensitivity will be very low. This is a measure with which you can state that your algorithm is performing poorly. It is good form to always include these metrics in any statistical study you are doing. Accuracy alone is not sufficient to prove that you are obtaining good results.
Moreover, there is the receiver-operator curve (ROC). This will tell you your false positive rate for any true positive rate. You can then calculate the area under this curve (AUC) to get a comparable metric of performance.
All of these should be used together when reporting the performance of your algorithm. The ROC and AUC can be omitted, however leaving out the sensitivity and specificity of your algorithm is unwise.
|
114315
|
1
|
114327
| null |
1
|
37
|
Suppose we need to predict a real number in a fixed range, for example [0 .. 5], and our Y can be 3.14, 2.4654, etc.
What is the name of this kind of task (so that I can search further), and what are the approaches to solving this problem?
|
Predicting a real number in a fixed range
|
CC BY-SA 4.0
| null |
2022-09-11T11:45:58.180
|
2022-09-11T19:14:26.877
| null | null |
140292
|
[
"regression"
] |
This is a regression problem with a ["limited dependent variable"](https://en.wikipedia.org/wiki/Limited_dependent_variable).
One very common approach is to use a [sigmoid transformation](https://en.wikipedia.org/wiki/Sigmoid_function) as the final step in your model. For example, the logistic transform $f(x) = \frac{5}{1 + e^{-x}}$ is constrained to the interval [0, 5].
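A tiny sketch of that scaled logistic transform (plain NumPy; in a neural network you would typically use it as the final activation):
```
import numpy as np

def scaled_sigmoid(x, upper=5.0):
    # maps any real-valued model output into the interval (0, upper)
    return upper / (1.0 + np.exp(-x))

raw_outputs = np.array([-3.0, 0.0, 2.5])
print(scaled_sigmoid(raw_outputs))   # all values lie strictly between 0 and 5
```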
|
Why my neural network does not predict decimal values in range [-1,1]? When it is able to predict the integer values
|
First of all, during the 10000 integer test, did you use all the integers from 0 to 9999 in training? If yes, then you have fully covered the whole input range. This means that while testing, you actually feed the network with data that are identical to the training data, therefore the accuracy is very high. What is the result of the network if you train it with 10000 randomly sampled integers in range of 10,000 and then test it with ALL integers in range of 10000? This test will reveal if your encoder generalizes well enough.
Also keep in mind that the smaller the number, the more difficult it is for the network to train due to vanishing gradient. Therefore, decimal numbers can saturate the learning process and lead to lower accuracy. Try changing the batch size (use smaller batches) and see if you decrease your output error.
|
114348
|
1
|
114349
| null |
0
|
58
|
I am a software engineer (currently CTO) specialized on web and mobile applications picking up data science skills. I do this mainly for future projects within my startup that works in digital healthcare.
For this I have started to learn via Coursera, specifically with this [John Hopkin's Data Science specialization](https://www.coursera.org/specializations/jhu-data-science) which uses R programming as a base.
Over the years I've seen how Python is normally the programming language most associated with Data Science and ML, and I am now hesitating whether to continue with this specialization or pick another one that uses Python.
Is R better than Python for being future-ready? I'd like to avoid having to pick up another skill later on because the first one was not sufficient.
What do you guys think?
Thank you in advance.
|
Is R programming a good way to start with Data Science?
|
CC BY-SA 4.0
| null |
2022-09-12T13:31:30.773
|
2022-09-12T14:10:47.900
| null | null |
140335
|
[
"python",
"r",
"learning",
"coursera"
] |
Your question carries the risk to attract opinion- rather than fact-based answers. However, here are a couple of hard facts:
Going by popularity, the [State of Data Science 2021 report](https://www.anaconda.com/state-of-data-science-2021) provides a relatively clear answer:
[](https://i.stack.imgur.com/LfIdF.png)
According to their survey, Python is by far the most popular language in Data Science. Moreover, when compared to R specifically, Python has the advantage of being a general programming language.
Another advantage may be its general popularity as it is currently the most popular programming language [according to this source](https://www.northeastern.edu/graduate/blog/most-popular-programming-languages/), i.e. you may benefit from learning Python beyond Data Science. In contrast, [the Stackoverflow developer survey](https://survey.stackoverflow.co/2022/#most-popular-technologies-language-prof) (not Data Science specific) ranks it below JavaScript, HTML and SQL - but still well ahead of R.
In summary, Python appears to be by far the most popular language for Data Science and is also generally one of the most popular languages overall. Therefore, going by popularity, Python is the better choice.
|
Pros and Cons of Python and R for Data Science
|
## Interaction - Random Facts
- Both are good stable languages with interesting complementary qualities. You can get much better packages in one and then stitch them with some data from the other. An example is using time series forecasting and decision trees in R and doing data munging in Python.
- Both languages borrow from each other. Even seasoned package developers like Hadley Wickham (Rstudio) borrows from Beautiful Soup (python) to make rvest for web scraping. In addition to that, Yhat borrows from sqldf to make pandasql and many other.
- Rather than reinvent the wheel in the other language developers can focus on innovation because, in the end, the customer does not care which language the code was written, the customer cares for insights.
## Mixing Them Up
I am mentioning a few approaches to mix them together:
- Use the Python package rpy2 to use R within Python. [Demo]
- Use Python from within R using the rPython package. [Demo]
- Use Jupyter with the IR kernel, which supports Python and R and makes the interactivity of IPython available to other languages.
- Use Beaker Notebook. It allows you to switch from one language in one code block to another language in another code block in a streamlined way to pass shared objects.
## Python vs R
Python vs R - This section will answer:
- Which will be better?
- How to choose one over other?
- Specialization
See as I said earlier both are stable and you can choose any or work with both. But when it comes to master one I'll suggest keep these 3-4 guidelines in mind-
### Personal Preference
Choose the language to begin with based on your personal preference, on which comes more naturally to you, which is easier to grasp from the get-go. To give you a sense of what to expect, mathematicians and statisticians tend to prefer R, whereas computer scientists and software engineers tend to favor Python.
### Project selection
You can also make the Python vs. R call based on a project you know you’ll be working on in your data studies. If you’re working with data that’s been gathered and cleaned for you, and your main focus is the analysis of that data, go with R. If you have to work with dirty or jumbled data, or to scrape data from websites, files, or other data sources, you should start learning, or advancing your studies in, Python.
### Collaboration
Once you have the basics of data analysis under your belt, another criterion for evaluating which language to further your skills in is what language your teammates are using. If you’re all literally speaking the same language, it’ll make collaboration—as well as learning from each other—much easier.
### Job market
Jobs calling for skill in Python compared to R have increased similarly over the last few years.
>
Note: Have a look at this infographic by DataCamp. For a better view on it.
## My Rationale
In my case I am doing both, using them interactively and customizing them as per my use. You can find something really interesting in one (as I mentioned above) which will hardly be available in the other, so it's better to use both together. This is the best way to bridge the gap between these two.
But in the end, it's your call: keep the guidelines, your interests, and your scenarios in mind and form a clear view.
## Strength & Weaknesses
### R
Strength
>
R is great for prototyping and for statistical analysis.
It has a huge set of libraries available for different statistical type analysis. Check The Comprehensive R Archive.
The RStudio IDE is definitely a big plus. It eases most of the tedious tasks and speeds up your workflow.
Weaknesses
>
The syntax could be obscure sometimes.
It is harder to integrate to a production workflow.
In my opinion, it is better suited for consultancy-type tasks.
The libraries documentation isn't always user friendly.
### Python
Strength
>
Python is great for scripting and automating your different data mining pipelines. It is the de facto scripting language nowadays.
It also integrates easily in a production workflow. Besides, it can be used across different parts of your software engineering team
(back-end, cloud architecture etc.).
The scikit-learn library is awesome for machine-learning tasks.
Ipython (and its notebook) is also a powerful tool for exploratory analysis and presentations.
Weaknesses
>
It isn't as thorough for statistical analysis as R, but it has come a long way these recent years
In my opinion, the learning curve is steeper than R, since you can do much more with Python.
## The Conclusion
Use R and Python. Learn how they inter-operate together. Start with one and then add the other to your workflow. As I like to remind myself- "choosing the tools should never be the primary problem". When in doubt, use the one that is available and that gets the work done quickly.
Hope it helps!
Ref- Udacity, Quora, Letustweak, kD, DataCamp
|
114353
|
1
|
114356
| null |
1
|
400
|
I have a dataframe with IDs and booking refs, looking like the simplified example below.
|ID |BookingRef |
|--|----------|
|001 |2019/32323 |
|002 |2011/23232 |
|002 |2017/7u4922 |
In the above example, 001 has one booking and 002 has two bookings in total so the average number of bookings for customers is 1.5.
How could I calculate this for millions of records using python and pandas?
|
Average number of records by ID
|
CC BY-SA 4.0
| null |
2022-09-12T15:33:17.913
|
2022-09-30T05:16:49.837
|
2022-09-14T04:41:38.933
|
135267
|
140341
|
[
"pandas",
"python-3.x"
] |
You can use the [groupby](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html) method to group the dataframe by ID, then [size()](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.size.html) to count the number of rows for each ID. Then use the [mean](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.mean.html) function to get the average:
```
df.groupby('ID').size().mean()
```
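With the sample data from the question this gives the expected 1.5:
```
import pandas as pd

df = pd.DataFrame({"ID": ["001", "002", "002"],
                   "BookingRef": ["2019/32323", "2011/23232", "2017/7u4922"]})

print(df.groupby("ID").size().mean())   # 1.5
```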
|
Find average sequence from a set of sequences
|
One way would be not to approach this as a calculation per session. Most data science solutions like to end up with a number, probability or classification. I suggest you structure your data differently so that you try to answer the question - what next action is likely given the last action.
In order to do this you would have to restructure your session data and use information from across all your sessions. For example, if you compare in how many sessions a player 'buys a gun', and if so record over all those sessions what their next action is, e.g. in 60% they 'play a mission' next. You will then have a probability of their next action based on the number of choices players made in all those sessions.
Once you have those probabilities, you will be able to answer the question, 'What comes next?'. This will in turn enable you to build that most average session that you are after by stepping through a session and building it by the most probable next step.
|
114370
|
1
|
114399
| null |
0
|
41
|
Good evening. I wanted to ask whether accents on words or special characters can affect machine learning algorithms. I'm looking to do some work on this, and I would like a recommendation for an article or book that I can use as a reference.
|
Accent on special words or characters can affect machine learning algorithms?
|
CC BY-SA 4.0
| null |
2022-09-12T23:57:11.967
|
2022-09-14T18:58:59.320
| null | null | null |
[
"machine-learning",
"machine-learning-model"
] |
The short answer is yes.
The long answer is the following. Machine learning (ML) algorithms are designed to build models from data, but there is a general motto among ML practitioners:
>
"garbage in, garbage out."
That said, if you have trained your model in an intelligent way to distinguish between strange accents on words, then testing the resulting model on sentences with such accents is good practice; in particular, I'm thinking of Spanish and Italian words, for instance, where a missing accent could give a completely different meaning to such sentences. Nevertheless, a more practical approach in the field of natural language processing (NLP) is to perform text normalization (e.g., lemmatization) of your text before learning, since specific words are less frequent than more general lemmas. For your specific task, there is a [post](https://stackoverflow.com/questions/517923/what-is-the-best-way-to-remove-accents-normalize-in-a-python-unicode-string) that has Python code that can remove accents from words, if you are willing to take this journey.
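For reference, one common standard-library approach (similar in spirit to the linked post) looks like this sketch:
```
import unicodedata

def strip_accents(text):
    # decompose characters, then drop the combining marks (the accents)
    decomposed = unicodedata.normalize("NFD", text)
    return "".join(ch for ch in decomposed if unicodedata.category(ch) != "Mn")

print(strip_accents("él está aquí"))   # "el esta aqui"
```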
I hope that this gives you more insight into the complexity of NLP tasks.
EDIT
There are papers on accent-related research, such as [1](https://aclanthology.org/2020.acl-main.345.pdf), [2](https://www.cse.iitb.ac.in/%7Epjyothi/files/IS18b.pdf), and [3](https://aclanthology.org/N06-1029.pdf), among many others.
|
Should i remove french special characters and apostrophes
|
It depends on the data volume you have.
As far as I know, there are 2 cases to have good NLP models:
- Either you have plenty of data (>10 GB as a raw order of magnitude) so that you can build accurate models, even if there are special characters.
- Either you don't have a lot of data (~1GB or less) and you have to simplify it as much as possible, and even improve it (for instance, replace ; by ,). In other words, you compensate the quantity with quality.
Keep in mind that data complexity is correlated with data quantity. The more the data is complex, the more data you need.
In conclusion, if you have a lot of data, you should keep the accents, as they are necessary to distinguish between words; some words in French differ only by their accents (e.g. tâche vs. tache), although any model would differentiate them according to their context (cf. the attention mechanism).
If you don't have a lot of data, removing accents would be better, because it would reduce the vocabulary corpus, and hence improve the learning.
Note: There are very good NLP spell checkers available to recover the correct spelling with accents.
|
114379
|
1
|
114386
| null |
1
|
848
|
Given a query sentence, we search and find similar sentences in our corpus using transformer-based models for semantic textual similarity.
- For one query sentence, we might get 200 similar sentences with scores ranging from 0.95 to 0.55.
- For a second query sentence, we might get 200 similar sentences with scores ranging from 0.44 to 0.27.
- For a third query sentence, we might only get 100 similar sentences with scores ranging from 0.71 to 0.11.
In all those cases, is there a way to predict where our threshold should be without losing too many relevant sentences? Having a similarity score of `1.0` does not mean that two documents are 2X more similar than if the score was `0.5`. Is there a way to determine the `topk` (how many of the top scoring sentences we should return) parameter?
|
Threshold determination / prediction for cosine similarity scores
|
CC BY-SA 4.0
| null |
2022-09-13T07:41:14.133
|
2022-09-13T11:49:21.813
|
2022-09-13T11:26:14.360
|
139922
|
139922
|
[
"nlp",
"transformer",
"semantic-similarity"
] |
As far as I know there is no satisfactory answer:
- One uses a threshold in order to avoid having to choose a specific K in a top K approach. The threshold is often selected manually to eliminate the sentences which are really not relevant. This makes this method more suitable for favouring recall, if you ask me.
- Conversely, one uses a "top K" approach in order not to select a threshold. I think K is often selected quite low in order to keep mostly relevant sentences, i.e. it's an approach more suitable for high precision tasks.
The choice depends on the task:
- First, the approach could be chosen based on what is done with the selected sentences: if it's something like question answering, one wants high precision usually. If it's information retrieval, one wants high recall. If it's a search engine, just rank the sentences by decreasing similarity.
- Then for the value itself (K or threshold), the ideal case is to do some hyper-parameter tuning, i.e. testing multiple values and evaluating the results. If this is not convenient or doable for the task, then look at a few examples and manually select a value which looks reasonable.
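As a small illustration of both selection strategies, assuming `scores` is the array of cosine similarities between the query and all candidate sentences (the name and values are placeholders):
```
import numpy as np

scores = np.array([0.95, 0.81, 0.63, 0.55, 0.41, 0.27])   # toy similarity scores

# threshold approach: keep everything above a manually chosen cut-off
threshold = 0.6
kept_by_threshold = np.where(scores >= threshold)[0]

# top-K approach: keep the K highest-scoring sentences regardless of absolute score
K = 3
kept_by_topk = np.argsort(scores)[::-1][:K]
```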
|
Cosine similarity versus dot product as distance metrics
|
Think geometrically. Cosine similarity only cares about angle difference, while dot product cares about angle and magnitude. If you normalize your data to have the same magnitude, the two are indistinguishable. Sometimes it is desirable to ignore the magnitude, hence cosine similarity is nice, but if magnitude plays a role, dot product would be better as a similarity measure. Note that neither of them is a "distance metric".
|
114381
|
1
|
114405
| null |
12
|
1960
|
I'm watching a NLP video on Coursera. It's discussing how to calculate the similarity of two vectors. First it discusses calculating the Euclidean distance, then it discusses the cosine similarity. It says that cosine similarity makes more sense when the size of the corpora are different. That's effectively the same explanation as [given here](https://datascience.stackexchange.com/questions/27726/when-to-use-cosine-simlarity-over-euclidean-similarity).
I don't see why we can't scale the vectors depending on the size of the corpora, however. For example in the example from the linked question:
>
User 1 bought 1x eggs, 1x flour and 1x sugar.
User 2 bought 100x eggs, 100x flour and 100x sugar
User 3 bought 1x eggs, 1x Vodka and 1x Red Bull
Vectors 1 and 2 clearly have different norms. We could normalize both of them to have length 1. Then the two vectors turn out to be identical and the Euclidean distance becomes 0, achieving results just as good as cosine similarity.
Why is this not done?
|
Why use cosine similarity instead of scaling the vectors when calculating the similarity of vectors?
|
CC BY-SA 4.0
| null |
2022-09-13T09:31:42.990
|
2022-09-14T21:27:55.557
| null | null |
43711
|
[
"machine-learning",
"nlp",
"clustering",
"similarity"
] |
Let $u, v$ be vectors. The "cosine distance" between them is given by
$$d_{\cos}(u, v) = 1 - \frac{u}{\|u\|} \cdot \frac{v}{\|v\|} = 1 - \cos \theta_{u,v},$$
and the proposed "normalized Euclidean distance" is given by
$$d_{NE}(u, v) = \left\| \frac{u}{\|u\|} - \frac{v}{\|v\|} \right\| = d_E(\frac{u}{\|u\|}, \frac{v}{\|v\|}).$$
By various symmetries, both distance measures may be written as a univariate function of the angle $\theta_{u,v}$ between $u$ and $v$. [1] Let's then compare the distances as a function of radian angle deviation $\theta_{u,v}$.
[](https://i.stack.imgur.com/3ZdUv.png)
Evidently, they both have the same fundamental properties that we desire -- strictly increasing monotonicity for $\theta_{u,v} \in [0, \pi]$ and appropriate symmetry and periodicity across $\theta_{u,v}$.
Their shapes are different, however. Euclidean distance disproportionately punishes small deviations in the angles larger than is arguably necessary. Why is this important? Consider that the training algorithm is attempting to reduce the total error across the dataset. With Euclidean distance, law-abiding vectors are unfairly punished ($\frac{1}{2} d_{NE}(\theta_{u,v} = \pi/12) = 0.125$), making it easier for the training algorithm to get away with much more serious crimes ($\frac{1}{2} d_{NE}(\theta_{u,v} = \pi) = 1.000$). That is, under Euclidean distance, 8 law-abiding vectors are just as bad as maximally opposite-facing vectors.
Under cosine distance, justice is meted out with more proportionate fairness so that society (the sum of error across the dataset) as a whole can get better.
---
[1] In fact, $d_{\cos}(u, v) = \frac{1}{2} (d_{NE}(u, v))^2$.
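A quick numerical check of the footnote identity (a NumPy sketch):
```
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=5), rng.normal(size=5)

u_hat, v_hat = u / np.linalg.norm(u), v / np.linalg.norm(v)
d_cos = 1.0 - u_hat @ v_hat
d_ne = np.linalg.norm(u_hat - v_hat)

print(np.isclose(d_cos, 0.5 * d_ne ** 2))   # True
```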
|
Cosine similarity versus dot product as distance metrics
|
Think geometrically. Cosine similarity only cares about angle difference, while dot product cares about angle and magnitude. If you normalize your data to have the same magnitude, the two are indistinguishable. Sometimes it is desirable to ignore the magnitude, hence cosine similarity is nice, but if magnitude plays a role, dot product would be better as a similarity measure. Note that neither of them is a "distance metric".
|