3448, we make some dummy variables for Title and add them to the dataframe
28220, making feature matrices for modelling
29402, CHECK FLOOR AND MAX FLOOR
4319, Ticket holders with a zero fare
16242, compare our models through visualization
25823, Truncated SVD on continuous variables
28605, YearRemodAdd
1387, Evaluating the model
15914, Surname
22081, Prepare our Data
37507, add Learning Rate annealing
8335, This is a normal distribution. As is customary, let's have a look at the correlation matrix
4425, Way better. Now check the skewness of the data
19902, Top 10 Sales by Shop and item Combination
10259, Converting categorical values to numeric values
38514, Ngram exploration
19923, Mass Histograms
8826, Feature Engineering Cabin Deck
6914, PIPELINE
15145, One Hot Encoding
34682, Looking at the test set
8871, Removing Irrelevant or High Correlated Columns
13551, Crossing Embarked by PClass and Survived
24834, use KNN
7410, Neighborhood: besides the zoning classification, neighborhood also makes a difference. Houses located at Northridge Heights (NridgHt) generally have higher sale prices than those in other areas, but the variance is large. The difference between the median price of the MeadowV neighborhood, with the lowest house prices, and that of NridgHt is over
37029, let s analyze the price
38086, For one hot encoding we use the onehotencoder from sklearn preprocessing library
20237, Cabin
8934, Total Square Footage
4802, We are dropping the Id feature since it would not add any useful information to the model
31045, Duplicate Sentence
11483, BsmtFinType2 BsmtExposure BsmtFinType1 BsmtCond BsmtQual BsmtFullBath BsmtHalfBath TotalBsmtSF BsmtFinSF1 BsmtFinSF2 BsmtUnfSF
29334, with the PCA variables
20723, BldgType column
14779, SibSp
25020, Beautiful
32685, Now we have two folders, one containing the train images and one the test images
36056, Count Monthly Mean
42821, Matcher and Bipartite Matching Loss
21404, it s time to separate into train and test database
31661, Observation
34092, The most interesting question regarding manager id is how to derive a manager skill feature
15873, Number of estimators and max depth
1909, Univariate Analysis
1938, Kitchen Quality
42573, Kaggle returns the score rounded to 5 digits meaning that the contribution of a single value to the log loss lies in the range of
26850, Words Counts Insights
16988, Gradient Boosting classifier
16393, Practicing Random Stuff
876, Data wrangling
18134, StackingCVRegressor extends the standard stacking algorithm (implemented as StackingRegressor), using out-of-fold predictions to prepare the input data for the level-2 regressor
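A minimal sketch of how such a stacker can be wired up with mlxtend; the base models, meta-regressor, and hyperparameters below are illustrative placeholders, not the kernel's actual configuration:

```python
# Hypothetical StackingCVRegressor setup (mlxtend); models and params are placeholders
from mlxtend.regressor import StackingCVRegressor
from sklearn.linear_model import Lasso, Ridge
from xgboost import XGBRegressor

stack = StackingCVRegressor(
    regressors=(Ridge(alpha=10.0), Lasso(alpha=0.0005)),
    meta_regressor=XGBRegressor(n_estimators=500, learning_rate=0.05),
    use_features_in_secondary=True,  # the level-2 model also sees the original features
)
# stack.fit(X_train.values, y_train.values)  # mlxtend works best with numpy arrays
```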
37459, Remove Punctuation
10755, We can sort these people by Fare in descending order
28822, Misc
6813, Ensembling is the science of combining classifiers to improve the accuracy of a model. Moreover, it diminishes the variance of our model, making it more reliable. You can start learning about ensembling here: is-better-than-one-ensembling-models-611ee4fa9bd8
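As a small hedged illustration of the idea (not this kernel's exact ensemble), a soft-voting combination of three classifiers averages their predicted probabilities, which typically reduces variance:

```python
# Illustrative soft-voting ensemble; the estimators are placeholders
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

ensemble = VotingClassifier(
    estimators=[('lr', LogisticRegression(max_iter=1000)),
                ('rf', RandomForestClassifier(n_estimators=300)),
                ('svc', SVC(probability=True))],
    voting='soft',  # average class probabilities instead of taking a majority vote
)
```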
7634, Correlation of predictions
37041, Certain parameters were chosen for the Kaggle kernel
5566, Clean our data of outliers
14915, Features Generation
41417, Feature scaling
36390, Example: the 3, E, and G values for cat92 will be replaced by NaN during encoding
2322, Fitting a Logistic Regression
8825, From both figures I can assume that if a passenger has family onboard, the survival rate increases to approximately 50%
19866, There we go with remaining dataset after eliminating the outliers
37160, SUBMISSION
32949, Almost zero
12284, Pie Chart
38765, The train accuracy is 83
3712, Handle Missing Values
36920, Observations
14470, Converting the object data types to categorical is necessary to reduce memory usage and decrease computation time
24166, Augmentation
29322, Model 3
20440, bureau
24466, Fake Images
26929, let s train the models
21854, To recap: FNNs from my previous notebook had an accuracy of 80%, CNNs had an accuracy of almost 90%, while RNNs reached 97%. Lastly, LSTMs were the best performing ones, with 99% accuracy
3340, Now check whether any NaNs are left in the Age feature
37213, Choose Embedding
9810, Heatmap
16439, Age
28878, We will use a sliding window of 90 days
20828, If you are assuming that all records are complete and match on the field you desire, an inner join does the same thing as an outer join. However, in the event you are wrong or a mistake is made, an outer join followed by a null check will catch it. Comparing row counts before and after an inner join is equivalent, but it requires keeping track of those counts; an outer join is easier
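A small pandas sketch of this pattern: the outer join plus the merge indicator surfaces exactly the rows an inner join would silently drop (the frames and key below are made up):

```python
import pandas as pd

left = pd.DataFrame({'key': [1, 2, 3], 'a': ['x', 'y', 'z']})
right = pd.DataFrame({'key': [1, 2, 4], 'b': [10, 20, 40]})

# Outer join keeps unmatched rows; indicator=True adds a _merge column
merged = left.merge(right, on='key', how='outer', indicator=True)
print(merged[merged['_merge'] != 'both'])  # the null check: rows that failed to match
```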
1350, Quick completing and converting a numeric feature
3429, Embarked is a categorical variable that can take one of three values S Q or C
24035, Visualizing the data that will be used for training
7267, Missing Value
13532, For optimal 3 features
24519, Drop all the other missing values
22658, Loss Function
9040, Information Gain of Categorical Norminal Variables
32864, How much each item s price changed from its lowest highest historical price
27456, Some question are written only in lowercase
41992, Locating loc To read the item in a certain row and certain column
23630, Making predictions
20630, Stopwords Punctuations present in real vs fake tweets
31404, The train.csv file contains a Label column along with the pixel values. If we think of it as (filename, label) pairs, all we need is a filename for each of our data points
37674, Checking out how the data looks
2100, Here it looks like it only makes a difference whether the lot shape is regular or irregular
16995, Tuning model
21774, Convert the products feature columns into integer values
25887, Histogram plots of number of punctuations in train and test sets
20497, External Image Names
26176, We first need a few things imported
28502, Creating Features
24427, Preprocessing the data
29981, Fit the model
34105, Age group distribution of covid 19 affected population
8159, we ll map the correlation of independent variables so called collinearity
41199, we can feed this data to our model
10347, Label Encoding
18726, use a list comprehension to extract the ids
32382, Getting Data Ready For ML Algorithms
35520, In this part numerical features have been analyzed
27208, In more than half of the patients in our dataset the cancer is found on the torso
38824, Define PyTorch dataset
29928, Plots of Hyperparameters vs Score
35750, alternate data source using top n features from Sequential Feature Selector
6762, Checking Skewness for feature MiscVal
20906, Create CNN model
27890, The event window is located on the x axis where zero stands for the given event day
1605, Fare
4254, Nulls in training set
3691, Submission File
41786, Early stopping terminates the training of the neural network model at an epoch before it becomes overfit. We set the patience to 5 epochs
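A minimal Keras version of this setup; that the monitored metric is validation loss is an assumption:

```python
# Early stopping with a patience of 5 epochs, as described above
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=100, callbacks=[early_stop])
```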
41259, Create A New Model
25862, Classification Report of tune sgd model
9883, Correlation Between Embarked Sex Pclass Survived
14903, Pclass Age Sex vs Survived
35634, Nearest Neighbors for our eight image
23846, Bringing the total feature count from 378 down to 185
38043, Checking for Skewness of the data
27499, Shallow CNN 2 with data augumentation and regularization
14434, Plot Overall Survival
473, Categorical Features
40445, LotArea
23616, Check if there are any missing values
8307, Using Single Classifier
32322, I need only the following three features from the dataframe
1616, Random Forest
39312, Save model and test set
2944, Feature Correlation
33088, Time to rebuild the train and test set now
8533, Target Variable
27828, Extract xtrain ytrain
22752, Plot Infectious Population and Total Infected Population for Multiple R0
13855, Creating new feature extracting from existing
27331, Shape of files
17949, Encode Name
5038, The upper bound is 466,075 USD; let's filter out samples beyond that cut-off
23087, 82% accuracy under cross-validation is a decent score for a first shot at a binary classification
10673, Final Adjustments
24047, Target log transformation
32300, Displays location of a country
42882, Tree plot
17338, XGB
305, XGBoost
33295, Pclass sorter
21117, Scoring
27541, Display the distribution of a multiple continous variable
15408, Indeed there is quite a big difference between the average age of married and unmarried women in all passenger classes
10079, Bivariate Analysis
4562, Checking Skewness
22532, Parch vs Survived
35932, Binning Age
21516, Splitting The Data into train and validation set
32224, This is just a sample of what the different numbers that we re trying to classify look like
36978, Submission Files
30318, For test data set default start end positions to dummy integer
35689, CatBoost Hyperparameter Tuning
37346, Add the remaining layers
3174, Create a test function just to compare the performance
20172, To find the optimal combination of parameters for maximum accuracy, we use GridSearchCV from the sklearn library; GridSearchCV (scikit-learn.org/stable/modules/generated/sklearn.model_selection.GridSearchCV.html) does an exhaustive search over specified parameter values for an estimator
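A hedged sketch of such a search; the estimator and parameter grid are placeholders rather than the notebook's actual values:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {'n_estimators': [100, 300], 'max_depth': [4, 6, 8]}
grid = GridSearchCV(RandomForestClassifier(random_state=0),
                    param_grid, cv=5, scoring='accuracy')
# grid.fit(X_train, y_train); grid.best_params_ holds the winning combination
```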
15295, Logistic Regression Model
37231, Lets convert heights to a longer format
13842, Correlating categorical features
31907, Makes predictions
38745, Based on these visualizations we can conclude the following
1936, Bathrooms in house
13756, Obvious that females had a much higher chance of survival as compared to males
22333, Stemming
33581, Inference
8881, We can create another feature where we can monitor the age of house from its selling date to the last time it was remodelled
10309, Quick and Dirty Look at Validation Set
12663, Our test set does not specify which passenger survived and which passenger did not
31433, Checking Being Loss
1095, Categorical variables need to be transformed to numeric variables
16933, Data treatment Feature engineering
26990, Submit
4122, Data Preprocssing and Machine Learning
29789, Sample few noisy and original images
36347, Implement a Neural Network Multiclass Classification
13542, Summary of df train
34020, Heatmap
34512, First we can establish an arbitrary date and then convert the time offset in months into a Pandas timedelta object
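One way this can look in pandas (the anchor date and offset are illustrative; note that a true Timedelta has no month unit, so months are approximated as 30 days here, with DateOffset as the calendar-exact alternative):

```python
import pandas as pd

anchor = pd.Timestamp('2016-01-01')   # arbitrary reference date
months_offset = 7

approx = anchor + pd.to_timedelta(months_offset * 30, unit='D')  # timedelta approximation
exact = anchor + pd.DateOffset(months=months_offset)             # calendar-aware offset
```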
1254, Fix skewed features
24349, As we know, the SWISH activation function was recently published by a team at Google
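For reference, SWISH is simply f(x) = x * sigmoid(x); a minimal TensorFlow definition might look like:

```python
import tensorflow as tf

def swish(x):
    # SWISH activation: the input scaled by its own sigmoid
    return x * tf.sigmoid(x)

# usage sketch: tf.keras.layers.Dense(64, activation=swish)
```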
26254, Evaluation Functions
16716, Decision Tree Classifiers
16904, Create a dataframe with Name Fare Pcalss FamilySize
12033, Correlation
4110, It is positively skewed
6788, Fare
32745, From my kernel
8075, For the rest we just use a loop to impute the value None
13622, Categorical variables are ones which are not numerical
21589, Filter in pandas only the largest categories
3017, After imputing features with missing values is there any remaining missing values
12603, find correlation between Numeric Variable
4993, Predicting
939, Clean Data
34064, SibSp Survived
25941, XGBoost
2330, Sklearn metrics good ones to know
38211, Reshape the flattened images into 28x28x1-pixel images and normalize them by dividing by the highest value, i.e. 255
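A sketch of that reshape-and-scale step; the random array below stands in for the real flattened pixel data:

```python
import numpy as np

X_flat = np.random.randint(0, 256, size=(10, 784))           # stand-in for the real pixels
X = X_flat.reshape(-1, 28, 28, 1).astype('float32') / 255.0  # 28x28x1 images in [0, 1]
```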
3731, Drop not so important features
36351, Define the Network Parameters
33703, Color based on Value font
3578, Data Cleaning Missing values
4451, Train the selected model
41471, Determine the Age typical for each passenger class by Sex Val
16256, Misc
36192, Seniority requires converting DT$fecha_dato to POSIXct via strptime(x = DT$fecha_dato, format = "%Y-%m-%d"). Sometimes it works; sometimes it takes more than the Kaggle kernel allows
40449, LotConfig
18165, Getting common words from question1 and question2 in dataset
24325, use robustscaler since maybe there are other outliers
27957, For large datasets with many rows one hot encoding can greatly expand the size of the dataset
7726, TotalBsmtSF Total Basement Square Feet
18953, Display distribution of a continous variable for two or more groups
28595, GrLivArea
15849, Family Size
5916, SVR
28410, Generate Predicitons
23888, Finished SquareFeet 12
20454, Application data
20621, Survival Prediction on Test Data
15821, We have deleted all the nan values
25294, Using this, let us de-mean the contour data
4723, To start with, I import the necessary libraries and load the dataset with the pandas read_csv method
41870, Parameters to be investigated
3276, Updating FireplaceQu LotFrontage MasVnrType and MasVnrArea PoolQC MiscFeature Alley Fence Electrical Functional SaleType Exterior1st KitchenQual Exterior2nd
19578, Items Analysis
18336, Moving forward i am going to check standard correlation coefficient between every pair of attributes using the corr method and try to decipher some relationships between variables
26636, We create two indices correct and incorrect for the images in the validation set with class predicted correctly and incorrectly respectively
31854, Trend Features
32150, How to find all the local maxima or peaks in a 1d array
21578, Named aggregations avoids multiindex
4007, We write our own implementation of the algorithm
38668, Decision Tree
32639, URL
28189, Accuracy refers to the percentage of the total predictions our model makes that are completely correct
22826, Shops Analysis
28139, Predicting the given test dataset
38779, Select investment sales from test set predictions
23567, A fresh beginning
30988, The following code repeats this plot for all of the numeric hyperparameters
25456, Building the top model
13279, We can use Logistic Regression to validate our assumptions and decisions for feature creating and completing goals
1923, Garages
32410, Determining number of iteration
16031, Embarked
1900, DecisionTree Model
7452, Examining Missing Values
3666, Checking that we got rid of all the NaN s
12115, Machine Learning
102, Correlation Matrix and Heatmap
15652, Linear Discriminant Analysis
15927, SibSp
24153, we ll get the names of each product in the morning and afternoon groups in order to recreate the product list in the original chart
13675, Exploring the data
26282, As a model this time we use GradientBoostingRegressor Let s train it and check the Mean Absolute Error
26951, Model Evaluation
36380, Training Testing
30367, Test PIL Image
897, for Random Forest classifier
14560, Highest number of Siblings Spouses were 8 in number boarded from Southampton font
8668, Trying to use embeddings for encoding categorical features
27462, Combos
17871, Classes of some categorical variables
25847, Cleaning Text
4886, You need to make the same changes in the test dataset as well, so let's merge test and train
30682, Submisson
54, Gradient Boosting
12103, Making Training Validation Test Dataset
6928, Four features have very few values drop them for the first analysis
38670, Gaussian Naive Bayes
16263, SibSp ParCh
38501, Analysis of the Sentiment Column
9474, setup: this function initializes the environment in pycaret and creates the transformation pipeline to prepare the data for modeling and deployment. setup must be called before executing any other function in pycaret. It takes two mandatory parameters: a dataframe (array-like or sparse matrix) and the name of the target column. All other parameters are optional
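A minimal call, assuming a classification experiment; the dataframe name df and the target column 'Survived' are illustrative:

```python
from pycaret.classification import setup

# The two mandatory arguments: the data and the target column name
# (df is assumed to be an already-loaded pandas DataFrame)
exp = setup(data=df, target='Survived')
```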
6380, Find out standard deviation
13477, Assessing the model s performance based on Cross Validation ROC AUC
38658, Age Range
18962, Display distribution of a continuous variable for multiple categories with hist curve instead of bar
20751, SsnPorch column
29883, Visualization of model outputs for all training data
38968, Instead of using all 400,000 word vectors, let's use only the vectors for words present in the train and test data. The code gives each unique word an index number and stores it in a word2idx dictionary, and also creates a new embedding matrix that maps those numbers to coefficients from the GloVe embeddings. If a word does not exist in the GloVe embeddings, we give it random coefficients of the same dimension
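A condensed sketch of that indexing scheme; vocab (the unique train/test words), glove (a word-to-vector dict), and the vector dimension are assumed to exist already:

```python
import numpy as np

dim = 100  # GloVe vector size (assumed)
word2idx = {word: i + 1 for i, word in enumerate(vocab)}  # index 0 reserved for padding

# Random coefficients by default; words found in GloVe get their real vector
embedding_matrix = np.random.normal(size=(len(word2idx) + 1, dim))
for word, i in word2idx.items():
    if word in glove:
        embedding_matrix[i] = glove[word]
```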
35128, Working With TIME
21844, Understanding the Model
1048, We gather all the outliers index positions and drop them from the target dataset
4940, let s check if there are any missing values left
21655, Check the equality of 2 series
27216, Submission
40714, Visualize Digits dataset
8094, Pclass vs Survival
42919, Quantitative
25938, Correlation
24739, DBSCAN Density Based Spatial Clustering of Applications with Noise
20589, Random Forest Classifier
34222, We ll make a get items that simply returns our good images
17610, Ensemble votring
9612, Histogram
32143, How to find the maximum value in each row of a numpy array 2d
27762, Lemmatizing and stemming the text
18162, Fields in both input files are
33691, Calendar
12216, And the best parameters are
11101, Drop features where more than 20 records are null
6141, Features engeneering
26944, AMT INCOME TOTAL
15032, Distribution of Survived passengers on Embarked in Train Set
42565, it gets interesting
26865, The gaussian blur works the sme way except it uses a ponderation in the computation of the local mean around a pixel to give more weights to closer pixels
37500, numeric values related to SalePrice
28600, The different categories exhibit a range of average SalePrice s
30382, Avoid leakage take only non overlapping values for training
33813, Improved Model Random Forest
35346, In this case the average values do not vary a lot
37951, The Confirmed Cases are on the left Y axis and the Fatalities on the right Y axis
29718, Whole pipeline with estimator included
21573, Creating a time series dataset for testing
8528, Basement Features
10977, Top influencers
12953, Basic data analysis
33676, Least Last font
11015, Lets map sex first
37655, Save the model and model weights These files going to output folder as expected You can download them
7710, Skewness
35530, In this part a blended model was created with regression models
20256, Even when I use PyTorch for neural networks, I feel better if I use numpy
12371, Applying the replacements
8078, Features Simplication
294, Pclass
3596, ElasticNetCV
18948, Relationship between variables with respective to time with range slider
37138, Softmax Activation Function
32570, Objective Function
34044, We proceed to parsing using the pandas to_datetime function
26476, For Submission
33287, Age Filler
11659, Naive Bayes
13233, Train data is approximately twice as large as test data
20060, Prepare the submission
34271, Predict
12918, Missing values
9937, I am going to replace the missing values in the Fare column with the average fare according to Sex
24672, DATA PIPELINE
18297, preview our function
1171, impute all incongruencies with the most likely value
42059, Using python and math to display max min mean
28489, WOW reduced from 35MB to 24MB
15648, Extra Trees
17255, Load Data
6499, A lot of difference between the selected features
8017, It's categorical data
1161, use test data
6011, As I explained in the intro section, I will use the Random Forest Regressor algorithm, then use Randomized Search for hyperparameter tuning
17942, Preprocessing
24752, Box Cox Transformation of skewed features
34661, Replotting in log10 scale
245, Model and Accuracy
1884, There are 3 values for Embarked S C and Q
22680, Load embeddings
40377, Defining the paths
30884, Improve the model
16701, Create new feature combining existing features
13454, 20 of entries for passenger age are missing
1674, Great now let s have a look at our Survival predictions
32845, Training
28087, Run on training data
6588, The same can be done as follows
24991, As a sanity check checking that the number of features in train X numerical match the total number of numerical features
13277, Creation of training and validation sets
32137, How to convert an array of arrays into a flat 1d array
32869, Train validation split
8293, Decision Tree
2257, Embarked Pclass and Sex
11689, The accuracy of logistic regression model as reported by Kaggle is 77
10160, Stackoverflow developing survey
31643, SCORE
5456, Work through a single point sample and calculate a 95% confidence interval
37118, category name
14277, Bagged Desition Tree
15540, Try Other Models SVM and Random Forest
30180, LSTM Model
2935, Data Processing
164, This is good enough but there is a neater way
8479, At this point we do not cut any additional outliers, but we now make use of the sale price transformation via log1p and thus avoid the linear pattern in the residuals
16498, KNN
23023, Sell Price Analysis
37017, Top 20 categories by average price
8782, One hot encoding for title
13711, COMPLETENESS ISSUES
36875, features reshaping 1d vector to 2d images
43247, Usually a variance of is sufficient to explain the variation in the data, so we first train on the top n principal components, which can explain the variance of
10276, Random forest
15842, SibSp and Parch
38704, After encoding
7307, Observation
29419, We have to stem our text using SnowballStemmer, as it is quite good for the job; let's just get to the code
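For example, with NLTK:

```python
from nltk.stem import SnowballStemmer

stemmer = SnowballStemmer('english')
print([stemmer.stem(w) for w in ['running', 'flies', 'easily']])
# -> ['run', 'fli', 'easili']
```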
39946, Ridge regression
11048, Over the course of data preprocessing many functions are used that help in extracting features p
42403, Sarima Predictions
20755, MiscFeature column
5806, Solving the problem using XGBRegressor
30941, Visualizing Interest Level Vs Bedrooms
15629, Missing Data
4561, Adding one more important feature
11973, We fill BsmtQual BsmtCond BsmtExposure BsmtFinType1 BsmtFinType2 GarageType GarageFinish GarageQual FireplaceQu GarageCond with None Take a look in the data description
21588, Combine the small categories into a single category named Others using where
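A small pandas illustration of the where-based lumping (the series and the cut-off of two categories are made up):

```python
import pandas as pd

s = pd.Series(['a', 'a', 'a', 'b', 'b', 'c', 'd'])
top = s.value_counts().nlargest(2).index          # keep the two largest categories
s_lumped = s.where(s.isin(top), other='Others')   # everything else becomes 'Others'
```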
20830, we ll extract features CompetitionOpenSince and CompetitionDaysOpen
31685, use the autoencoder to produce denoised images from noisy ones present in x val
18310, Mean Encoding for item id
149, XGBClassifier
12036, A few categorical variables are ordinal variables so let s fix them
9428, swarmplot
2675, MI for Regression
21198, L Model Backward
15611, Name Length
22927, Just one last thing
10036, Extract features from Name
14795, SGD Classifier
5430, Fireplace
5468, From the RF object we can pull feature importance and plot
11046, let us first import out libraries
24310, take a loot to the first prediction
18324, Update parameters
32515, Pre processing the features
34013, No outliers
31686, Separate models for noise reduction and classification are not very practical, hence we are going to combine them into a single unit using the Model class
12512, All set Moving on to incorporate this data
665, Decision Tree
9745, Parch
2936, Check the distribusion of Prices
16463, Handling categorical variables
24061, Finished trial with value and parameters
11951, Our first goal is to create the models and find their best hyperparameters by running the model individually by gridsearch cross validation
3220, 2D Histogram
8139, The features with many missing values have been taken care of; move on to the features with fewer missing values
14844, Since some of them are first or second class passengers I decided to remove zero Fares that might confuse my model
15350, Validation
27836, CNN
15876, Or just the best parameters
9007, Deal with Null Values that contain information content
15874, We have set out a total of 4 × 4 = 16 models over which to search
4891, Whoa, that's a lot of titles
40151, Interestingly enough, there are open stores with no sales on working days
32507, Compiling the Model 2
23815, take the variables with high correlation values and then do some analysis on them
11313, Correlation Matrix
32980, Encoding categorical features
1282, Fare 3 2 6
34906, Check some Null s
23425, we analyze tweets with class 1
1862, Random Forest
956, Output of the First level Predictions
27098, The architecture of VGG16 is kept mostly the same except the Dense layers are removed
20523, Looking for Correlations
2451, Regression
37783, Install LOFO and get the feature importances
31712, Here s my function for splitting up hdf5 model files
32768, Label encoding
25420, Uniform day of week distribution
36672, Each vector has as many dimensions as there are unique words in the SMS corpus. We first use scikit-learn's CountVectorizer. This model converts a collection of text documents to a matrix of token counts
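A toy example of the vectorizer in action (the two documents are invented):

```python
from sklearn.feature_extraction.text import CountVectorizer

docs = ['free entry in a prize draw', 'call me when you are free']
vec = CountVectorizer()
X = vec.fit_transform(docs)          # sparse matrix of token counts
print(vec.get_feature_names_out())   # one dimension per unique word
```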
21071, Function for preprocessing
19045, Load in the csv file
34418, we analyze tweets with class 1
2793, To analyze the performance of models is to use the evaluate model function which displays a user interface for all of the available plots for a given model It internally uses the plot model function
35608, True label is 9 but predicted is 8
15954, Reading the Dataset
8514, Feature Engineering Creating New Features
27209, A nevus is basically a visible circumscribed chronic lesion of the skin
32498, Train the Model
627, Based on this plot we define a new feature called Bad ticket, under which we collect all the ticket numbers starting with digits that suggest less than 25% survival
37195, Performing Label Encoding
1753, Median imputation Comparing the KDE plot for the age of those who perished before imputation against the KDE plot for the age of those who perished after imputation
30397, Fitting
4293, Based only on the total square footage we get an R-squared of 0
2918, Split into features and class
36998, Hours of Order in a Day
28133, Splitting training and test set from our training dataset to train our model
40269, We want to over fit a simple model on the dataset
41058, Plotting a few groups with at least two data points, I can't really tell if group 1 was created by clustering the characteristics
2136, While the model is not performing very well it is also very easy to tune
24813, Low correlation
20795, Filling Numerical Missing Values
14339, Lets to select only some of the features
36855, Normalization
36290, Logistic Regression
20525, Log Transformation of the target varibale
9124, Set up Categorical Ordinal Variables
17362, Kfold for cross validation
5146, Missing values
27337, Reshaping data as images
42985, Creating Word Cloud of Duplicates and Non Duplicates Question pairs
37707, what s next
6139, Checking the full dataset
5135, Discrete Variables
10959, Normality and skewness
3463, we make some dummy variables for the Deck variable and add them to the dataset
6113, By the way, let's fill all the missing years with the date the houses were built
4715, let s fill in the missing values of the age column
24696, let s define a trainer and add some practical handlers
29463, Number of distinct questions
23251, We keep Passenger Id separate and use it for Submission
1534, Ticket Feature
7430, Compared to grid search randomized search is less time consuming so we start from a wider range of parameters with randomized search
2215, Confusion Matrices for 4 models
14424, Use function cabin fillN to assign Cabin letter based on mean Fare per Cabin
31580, Clusters of binary properties exhibit some kind of correlation
3725, Train Xgboost Regressor
16676, Analyze about fare
35252, Difference variable would be difference between length of selected text and length of whole text
18157, Split the dataset raw features
34232, make our true get bbox and get lbl
2875, On submitting this file on Kaggle we are getting a rmse score of 0
1074, SalePrice is the variable we need to predict let s do some analysis on this variable first
31913, Reshaping Data
11757, The random search for the XGBoost took a long time so I put it in here and changed some things
4147, Mean Median Mode Imputation on Titanic dataset
20448, bureau balance
34273, KNN performance on longitude and latitude data
5852, CORRELATION
20719, LotConfig column
8024, Cabin
28190, Natural Language Processing using NLTK
4701, Modeling
12396, Plotting the residual plot for the model
9807, Checking Missing value is present or not in our dataset
8434, Pool Quality Fill Nulls
30824, Mean Average Precision
19882, Power Transformer Scaler
41535, But what of 3d data
4038, Categorical columns decorations
19324, Data Normalization
16405, On this contingency matrix we can do some statistical tests, like the Chi-Square test
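For instance, a chi-square test of independence on such a table might look like this (the column names assume a Titanic-style dataframe df):

```python
import pandas as pd
from scipy.stats import chi2_contingency

table = pd.crosstab(df['Sex'], df['Survived'])        # the contingency matrix
chi2, p_value, dof, expected = chi2_contingency(table)
```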
9905, Simple Logistic Regression
3861, Splitting our train data into 70 train dataset and 30 test dataset
26297, Training
5353, Display the surface relationship between multiple values
18575, Most passengers embarked at Southampton
8370, CLASSIFICATION
26412, Analyzing the distributions for different Pclass values reveals that, for instance, some 3rd-class tickets are much more expensive than the average 1st-class ticket
24400, Final output the predictions to a competition file format
15232, Here we are deleting the Survived column because it is the target value to be predicted
31008, Model Design and Achitecture
38101, We need to understand that our dataset contains 60,000 training images and 10,000 testing images
552, RandomizedSearchCV and GridSearchCV apply k fold cross validation on a chosen set of parameters
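A hedged sketch contrasting the two: RandomizedSearchCV samples n_iter candidates from distributions instead of enumerating a full grid (the estimator and ranges below are placeholders):

```python
from scipy.stats import randint
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={'n_estimators': randint(100, 1000),
                         'max_depth': randint(3, 12)},
    n_iter=20, cv=5, random_state=0,  # 20 sampled candidates, each 5-fold validated
)
```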
8368, Convert Pclass into categorical variable
15076, Fare Group
15595, Missing Data
9871, I am going to concatenate train and test values in order to find missing values
11784, Cabin
40001, Prepare to start
39024, Split off part of the train dataset to test the algorithms
15271, Decision Tree Algorithm
20157, Extracting label from data
34287, Prediction
9423, Calander Plot
42628, Line plot with Date and ConfirmedCases
31116, Correlation between target and log price
14619, It s your turn
23040, Relationship of Lag Variables
11032, Perceptron
27064, Count Locations
43246, define normalize function for normalizing the data PrincipalComponents function to return top n principal components
22124, Blending
23513, There are 4 elements in the class
31681, Proceeding to train the classifier
15730, Evaluate the Random Search for Hyperparameter Tuning
15328, Let's create a new column Fam using SibSp (the number of siblings or spouses) and Parch (the number of parents or children); later we will drop SibSp and Parch from our dataset, since these values are already captured in Fam
35073, Making predictions using Solution 5
18661, Fit Model
41928, Interestingly, we have 12 features which only have a single value in them; these are pretty useless for supervised algorithms and should probably be dropped
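A one-pass way to find and drop such constant columns (df is the assumed feature frame):

```python
# Columns with a single unique value carry no signal for supervised learning
constant_cols = [c for c in df.columns if df[c].nunique() == 1]
df = df.drop(columns=constant_cols)
```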
36801, And lastly we actually parse the example sentence and display its parse tree
4960, Final Training and Prediction
27377, RMSE 1
38936, By the mean of both Random Forest and XGBoost
42616, We'll specify how the model is optimized by choosing the optimization algorithm and the cost (or loss) function. The Adam optimization algorithm works well across a wide range of neural network architectures; Adam essentially combines two other successful algorithms, gradient descent with momentum and RMSProp. For the loss function, softmax cross-entropy with logits is a good choice for multi-class classification
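In Keras terms the described choices might be wired up like this (model is assumed to be an already-built network emitting raw logits):

```python
import tensorflow as tf

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # momentum + RMSProp ideas combined
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),  # softmax cross-entropy
    metrics=['accuracy'],
)
```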
22477, Area chart Unstacked
34703, Shop active months
10861, Getting the scatterplot for the top correlated features
21611, Calculate the difference between each row and the previous diff
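For example:

```python
import pandas as pd

s = pd.Series([10, 12, 15, 11])
print(s.diff())  # NaN, 2.0, 3.0, -4.0 (each row minus the previous one)
```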
30000, Seperation of Features and Labels as well as reshapig for CNN input
37176, Create the predictions
40058, Running models with StratifiedKFold
18129, Random Forest Regressor
27032, Combine the two prob together
29925, It should be gbdt. Notice that random search tried gbdt about the same number of times as the other two, while Bayesian optimization tried gbdt much more often
16533, let s plot and analyze the age and survival correlation
43136, CatdogNet 16
16650, Missing Values
5154, go ahead and select a subset of the most predictive features
23883, Almost all are float variables with few object variables
12893, The medians are a little different here although not by much
34753, Vocabulary and Coverage functions
8548, Basement Features
10108, predict a Testing data with our XGB Model
28325, Exmaine the previous application Dataset
12130, Splitting the data back into the original train sub form 1
15011, Passenger Class
27280, Visualize many predictions
19304, Data Preparation
30265, Scikit Learn won t let us set threshold directly but it give us access to decision scores it uses to make predictions
20362, Pearson Correlation Plot
24749, Imputation
35557, Parameters
15984, In this part we scale the numeric features and convert the categorical features into numeric features
7453, Age Column
13893, Data Balance
5904, When to use only fit versus fit_transform; only fit is used below
40412, the latitude values are primarily between and look at the longitude values
16378, categorizing starting from 1
23036, Quarter Analysis
20396, Multinomial Naive Bayes Model
16367, Try Groupby
19582, Concat test into train
34015, January 0
31544, MasVnrType
5193, We can now compare our models and to choose the best one for our problem
30532, Exploration of Bureau Data
19614, Indicator features
14100, Model Comparison
21525, And after the max pooling
39776, Create a first simple model that will be my baseline model
22273, I chose to fill the missing Cabin column with 0 instead of dropping it, because Cabin may be associated with passenger class. We will have a look at a correlation matrix that includes the categorical columns once we have used one-hot encoding
5163, Bagging boosting
30972, To get a sense of how grid search works we can look at the progression of hyperparameters that were evaluated
32030, We put the predicted age data into the cells with missing values in the Age column
22110, Scaling numeric data
26013, Last but not least: in order to proceed further with data cleaning and transformations, it is always of prime importance to check the distribution of all the numeric variables involved in the study, most importantly the target variable SalePrice
32820, Dealing with missing variables
8090, Loading Data
33608, Building Classifier
19529, Applying Function as Filter
28130, MIN DF and MAX DF parameter
9235, Neural Network with Tensorflow
29570, let s try ML now
25675, Feature Analysis
26804, StratifyGroupKFold
1879, Class
11755, Models
6631, Most of the embarkations were from port S
7087, Feature Engeneering
24018, And examples of the wrong classified images
6257, Therefore Fare lends itself to being a good candidate for binning into categories as well
23620, Ensembling
29814, SkipGram Model
6052, Similar distributions but different ranges
41635, Remove Extra Whitespaces
13469, Exploration of Traveling Alone vs With Family
43365, for each prediction there is a vector of 39 probabilities
37093, Categories Display Target Density and Target Probability
37891, Prediction from Linear Model
38479, Class distribution font
17752, Tuning RF parameters somewhat methodically
27369, Dropping the category name and shop name
5975, GridSearch
7957, Test with higher beta
8393, Creating a new entity Id inside the created EntitySet
27147, Category 4 Location and Style
13087, Decision Surface
16437, Looks like Pclass can help to fill the missing value
7414, We know that Alley, FireplaceQu, PoolQC, Fence, and MiscFeature are all categorical variables, and their missing values make up over 50% of the total
18426, Also check whether the CV score improved after stacking, compared to the single models
30770, Ensemble learning
27137, Scatterplots
7362, SPLIT THE DATA FOR LINEAR MODEL AND BOOSTS NN
9217, Overall Bivariate Relation
29103, Bidirectional LSTM
13974, Embarked vs Sex
31740, With Gaussian Blur
43039, we are ready to train the model
23424, First we analyze tweets with class 0
40841, max is 3 mean is only 0
41668, The focus of this section be on tuning the following Random Forest hyperparameters in order to prevent overfitting
4343, Dataset summary
22974, Mean sales per week
15385, fill the remaining missing Ages with the mean values
24907, Confirmed COVID 19 cases per day in China
720, One thing to note about this dataset is the lack of data and with it the curse of dimensionality
20956, Evaluating model performance with evaluate method
10965, Adjusting the type of variable
38103, Reshaping and Normalizing the Images
29727, Distribution of target variable
10720, I know what you want next
30091, C O N F I G U R A T I O N
7122, Pclass vs survived
33690, Check if the dates are in US holidays
4852, LightGBM
16766, Look at the prepared data
35354, We have total three csv files in this dataset
19836, Exponential transformation
42313, Probabilities Testing Set
10351, Normal distribution doesn t fit so SalePrice need to be transformed before creating the model
30901, After filling in regionidcity, let's check what columns are left
22053, Some quick findings
1718, Peeking at Datasets
2659, We have our training data validation data
38029, Logistic Regression
17696, MODELS
29466, Checking for missing values
21146, We have cleaned and scaled data with defined linear correlation
32210, Add lag values for item cnt month for every month shop combination
33889, POS CASH balance loading converting to numeric dropping
5536, Drop Unneeded Columns
9672, Transform the dataset and get it ready for tuning
42006, isin filtering by conditions multi conditions in multi columns
29140, Mutual Information plots
15057, Submit
25453, How d We Do
18007, Indeed the gender is the feature that is most highly correlated with survival among all with correlation coefficient of 0
15724, Decision Tree Classification
18475, It is realistically better to impute the median value for the three NaN stores than the mean, since the mean is biased by those outliers
28222, Final model s accuracy
23600, I am splitting into training and testing set for now
2951, Fit these best estimators into the model
33032, Using prcomp on the original dataset throws an error
28696, Clearly we have a variety of positively and negatively skewed features. I transform the skewed features to follow the normal distribution more closely
43000, Here I check number of rows for each ID
27642, But the table of data is not enough, as we have to split the label (what we are predicting) from the training data (the pixels)
36364, Creating pipeline
27647, Creating the Submission
16385, Combining Sex Titles Pclass
42540, Transform questions by TF IDF
32818, Correlation Table of price doc t by methods pearson kendall spearman
13664, Modeling
42053, get group
4919, Looking at Skewed Features
16005, New features Relatives and Age Pclass
38310, Logistic Regression
2445, Correlations
19899, Bottom 10 Sales by Item
9767, Feature Selection
28166, spaCy s Processing Pipeline
20508, The worst toxic train questions
22646, The names are transformed into title
26661, This file contains descriptions for the columns in the various data files
12463, Train and Test sets
14249, Embarked Categorical Feature
42836, We compute the 10 fold cross validation score by using
10821, it is time to predict missing values of Age
22650, Sigmoid
32082, We have 14 continuous variables in our dataset
23937, Verifying if products actually have description
16606, Outliers
21393, Readiness for Submission File
32865, Rolling window based features window 3 months
25728, Image Augumentation
15781, Perceptron
1516, Trying to plot all the numerical features in a seaborn pairplot would take us too much time and be hard to interpret
41300, Feature importance
11404, Selecting Multiple Columns
23745, Logistic Regression
17935, Embarked
29422, First we store the target data into a variable
22328, Removing URLs
37916, Evaluation
8303, The best model is XGB in these runs
41227, start applying different algorithm on the train dataset
24694, let s define a single iteration function update fn
15530, Fare
10635, first finish some routine tasks before diving deeper
37833, SVM
41970, Model Building
23528, Below the encoding is applied to every sentence in train
37662, Data loading
41846, XG BOOST
38957, Creating Dataset
43350, The first image in our test dataset is 2
7640, remove outliers
40720, Training
1684, Relationship of a numerical feature with another numerical feature
42325, Converting label into categorical form
3162, The onehot function converts a categorical variable into a set of dummies
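The pandas built-in equivalent of such a helper (the column name is invented):

```python
import pandas as pd

df = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})
dummies = pd.get_dummies(df['Embarked'], prefix='Embarked')
# columns: Embarked_C, Embarked_Q, Embarked_S (one indicator per category)
```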
14284, Parameter tuning gridSearchCV
5885, Lasso
6042, Create stacked model and make a new submission
23295, Categorical Features
20771, Having obtained our tf idf matrix a sparse matrix object we now apply the TruncatedSVD method to first reduce the dimensionality of the Tf idf matrix to a decomposed feature space referred to in the community as the LSA method
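A sketch of that LSA step, assuming tfidf_matrix is the sparse tf-idf output and that 100 components suffice:

```python
from sklearn.decomposition import TruncatedSVD

svd = TruncatedSVD(n_components=100, random_state=0)
lsa_features = svd.fit_transform(tfidf_matrix)  # dense, low-rank representation
```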
7354, Some features have the wrong format
19196, we have compact small table easy to work with
20081, Worst Sales Item
29112, The data is ready Time to feed it to a Convolutional Neural Network
7709, Target Analysis
13500, Title
20643, Word Embeddings
32139, How to create row numbers grouped by a categorical variable
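For example, with groupby plus cumcount:

```python
import pandas as pd

df = pd.DataFrame({'cat': ['a', 'a', 'b', 'a', 'b']})
df['row_num'] = df.groupby('cat').cumcount() + 1  # 1-based row number within each category
```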
32239, Always Always Always remember to close the session after you are done with the computations
16226, we drop the columns which we don t require
28078, Unlike the train data, there are no missing values in the Embarked column, but there is one missing value for Fare
2983, Residual plot
42781, Creating the model
37019, Can we split those categories by level
43397, First of all we need some fooling targets
23606, Loss Function
29331, With the PCA values
11116, Split into Train and validation set
43323, Reshape
15754, Modeling
7557, Voting Classifier
12090, Update the model with the second part
35321, Plot loss and accuracy
30284, Active Count 50
22828, Great, so all shop ids in the test set are also present in the training set
14825, Embarked
40743, Choosing final Model
9087, MSSubClass and HouseStyle
24912, Time evaluation
31096, GarageCars font
20125, Device model
42417, Outlier Analysis
18914, Age Feature
37746, we can use the dictionary along with a few parameters for the date to read in the data with the correct types in a few lines
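A sketch of that pattern; the file name, column names, and dtypes here are purely illustrative:

```python
import pandas as pd

dtypes = {'store': 'int16', 'item': 'int16', 'sales': 'float32'}
df = pd.read_csv('train.csv', dtype=dtypes, parse_dates=['date'])
```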
30860, In this example we will be using the MNIST dataset, which is a set of 70,000 small images of handwritten digits
15681, Train first layer
25479, In order to make the optimizer converge faster and closer to the global minimum of the loss function, I used an annealing method for the learning rate
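One common Keras realization of such annealing; the monitored metric, factor, and patience below are assumptions:

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever validation accuracy stalls for 3 epochs
lr_anneal = ReduceLROnPlateau(monitor='val_accuracy', factor=0.5,
                              patience=3, min_lr=1e-5)
# model.fit(..., callbacks=[lr_anneal])
```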
22046, Since our metric is RMSLE let us use log of the target variable for model building rather than using the actual target variable
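The standard trick, sketched (y_train, model, and X_test are assumed): train on log1p of the target so that optimizing RMSE is equivalent to optimizing RMSLE, then invert with expm1:

```python
import numpy as np

y_log = np.log1p(y_train)                  # log(1 + y)
# model.fit(X_train, y_log)
# preds = np.expm1(model.predict(X_test))  # back to the original scale
```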
38953, Configuration
22407, Getting closer
229, Library and Data
27152, Masonry Veneer Types: most of the properties do not have masonry veneer walls and have a low sale price
43124, Import ML methods for training the models
28110, Measure the model fit against multiple models
43063, check now the distribution of the mean value per row in the train dataset grouped by value of target
32691, Ok let s now train the model
35502, Fit the Model
29362, Building Model
5045, Not surprisingly most positively correlated features are e
43022, Feature selection Matrix extra need bit more understanding and usage
42455, Checking missing values
35463, Visualiza the skin cancer at Palms soles
41909, Optional only keep images of type 0 and 2 2 being the second most present class in this sample
30520, Target Variable with respect to Organization and Occupation Type
31680, Constructing a very simple neural network to classify our images
4608, Things to note
29780, Visualise Training data
31618, F1 score is precision and recall combined into single metric
22606, Final preparations
34022, Count Atemp
41162, FEATURE 8 DEBT OVER CREDIT RATIO
27458, Handle Capitalized Words
15155, converting Categorical data to Numerical
5398, I infer the missing Fare from Pclass, Parch, and SibSp
8080, Overfitting prevention
36939, Fare
38293, Fit the model
16913, Encode categorical
34240, Load Libraries
29930, for the next four hyperparameters versus the score
20408, Number of unique questions
816, log transform
33894, bureau agg previous application agg application train test
35769, Additional testing
2530, Hyper Parameter Tuning for AdaBoost
1632, Log transformation
12778, Start modeling
27832, Normalization
2321, Fitting a RandomForest Regressor
2421, let s take a look at all the categorical features in the data that need to be transformed
39423, map Sex to 0 for male and 1 for female
5254, Drop Column Importance
42642, Mislabeled Samples After Cleaning
20035, Since this is a multiclass classification problem we One Hot Encode the labels
13229, Pseudo-Labeling Technique (explanation of semi-supervised learning and pseudo-labeling: c2218e8c769b)
37911, Evaluation
31349, Take a look at your submission object now by calling
8088, Deal with predictions close to outer range
13503, Correlation
22276, We fill with the mode of the data column
29017, Age
18433, Creation of the histograms
14713, K NEAREST NEIGHBORS
3033, Model Predictions
40688, NOW WE CAN DO SOME FEATURE ENGINEERING AND GET SOME NEW FEATURES AND DROP SOME USELESS OR LESS RELEVANT FEATURES
9245, Correlation matrix of some selected features
15720, Train dataset
530, Boxplot
13607, When handling missing values with an indicator, an indicator column will be added
42413, First Few Rows Of Dataset
32033, GridSearchCV returns test scores. There will be 5 of them, because we use 5-fold CV splits for each parameter combination in param_grid
32039, Since we want a high True Positive Rate and a low False Positive Rate, we can set the point closest to (0, 1) on the ROC curve as the optimal operating point
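A small sketch of locating that point, assuming the true labels y_true and predicted probabilities y_scores are available:

```python
import numpy as np
from sklearn.metrics import roc_curve

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
best = np.argmin(fpr**2 + (1 - tpr)**2)   # squared distance to the corner (0, 1)
optimal_threshold = thresholds[best]
```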
40853, Transformation of Distributions
430, Fence data description says NA means no fence
42237, Univariate analysis box plots for numerical attributes
33446, LGBMClassifier
938, The 4 C's of Data Cleaning: Correcting, Completing, Creating, and Converting
30202, we ve included method anova since it is not a classification but a regression problem
3516, examine numerical features in the train dataset
37888, Elastic Net Linear Regression
21501, plot images from the training set of different conditions
27310, Data Conversion
39439, Model Submission
2300, Changing the column names starting with numbers, since later functions sometimes have issues with them
8382, I will deal with the NaNs later, but for now I fill them with 'miss'
28367, There are 221 unique words present in training set
25815, it s time to combine them
34048, Deal with the cyclic characteristic of Months and Days of Week
28463, Column unitcnt
3671, Observe the correction
33251, Missing Values
41361, According to this chart we can t say there is a clear correlation between garage quality and price
19300, Data Interaction
10685, $MSE = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$
24255, Irrespective of the class passengers embarked in 0 and 2 have lower chance of survival
307, our scores
19896, Grouping training by shop id and Item id
7108, We use logistic regression with the parameter we just tuned to apply bagging
18560, Most passengers don t have cabin numbers
27323, Model building
14531, Cabin
27157, KitchenQual Kitchen quality
35369, Initialize Augmentations
20826, The following extracts particular date fields from a complete datetime for the purpose of constructing categoricals
12357, TotalBsmtSF Total square feet of basement area
21322, From the correlation plot we can see that several groups of variables are strongly correlated with each other
40669, Bi gram Plots
23676, We can take a look at a few random images
78, Cabin Feature
38013, Aggregations over department
7942, A Deep Learning approach Multilayer Perceptron MLP for regression
16042, Yipee 75
37639, We ll also plot the distribution of predictions
2560, Using the Leader board option to arrive at best model
39958, We stack all the previous models including the votingregressor with XGBoost as the meta regressor
2917, Preparing the datasets for Machine Learning
2908, Fill the Age Feature with median
17652, Voting Boosting models
21581, Show fewer rows in a df
39788, Lets look at distribution of logerrors with top 15 frequent regionidzips
21487, Code for Loading Embeddings
15463, Feature importance
18204, Building the Model with Attention font
24883, Creating An Additional Family Feature
26012, One thing that kept nagging at me was the price movement
36818, for training data
19569, Please note that some lines with coco evaluator are commented out
15723, Gaussian Na ve Bayes
27151, Roof Styles: most of the houses have Gable and Hip roof styles, with an average sale price of 1
14803, Categorical Variable
40969, Creating new column Daily Revenue
31366, Fold Cross Validation
16870, Residues of train dataset view
25186, Reading the data
28504, We build our model using the high-level Keras API, which uses TensorFlow on the backend
22468, Timeseries
37300, N Grams
9788, Since there are many features it can be hard to understand correlations from the heatmap
42774, Fill Age
20304, We are dealing with 143 types of product
4463, Sex Mapping and Encoding
7598, Boxplot SalePrice for Neighborhood
547, Optimization of Classifier parameters Boosting Voting and Stacking
34488, Write the functions for model optimizer activation and loss
40770, Function used to plot 9 images in a 3x3 grid and writing the true and predicted classes below each image
31236, Checking for correlation between features
2120, Tuning Lasso
23293, Outlier Detection
26181, Catagorical Variables
21771, REVIEW: the values are assigned as float
7393, Splitting in training and testing datasets
25673, Improve the model
16345, Create Submission File for Kaggle Competition
15528, Embarked
6451, Prediction on test data of different model
9858, When we add count variable indexes start from
29713, Boxplots allow us to get a better idea about the presence of outliers and how badly they may affect our predictions later
24447, Remove stopwords, convert to lowercase, add a delimiter, and more
38132, Those that survived had paid in the fare range of
24982, Filling NaNs in categorical columns using most frequent occurrence for each column
38944, Effect of Competition Distance on stores performance
3930, Checking Models Accuracy
747, Running the CV for the ensemble regressor crashes, so we get an indication from one regressor instead
17460, We are going to drop columns that we not use in Machine learning process
1640, My final ensemble model is an average of Gradient Boosting and Elastic Net predictions
43024, It's time to turn everything into numbers
32568, Save data with duplicates
40676, let s try doing it for K 9
26568, Change the numbers in the image name to look at a few other cats and get an overview of what we are working with
43306, And after a few hours of trial and error, I have chosen what appears to be the optimal way to handle nulls in the numerical columns
32526, Train the Model
22286, Submission To CSV
30867, We can get a better sense for one of these examples by visualising the image and looking at the label
32730, item_id, date_block_num, and month have quite a lot of predictive power
15456, Family Survived
11760, I took some suggestions from the documentation of scikit learn and some other helpful kernels here on Kaggle to tune our Gradient Boosting Regressor and Kernel Ridge Regressor
40480, Perceptron
21127, In this case all dispersion measures are obviously low, because the difference of 4 to 6 years in comparison to 2000 is small
1103, Feature importance
40241, Or as I like to call it smell the data before dinner
31830, Under sampling Cluster Centroids
41400, NAME INCOME TYPE
17468, Mlle: the term Mademoiselle is a French familiar title, abbreviated Mlle, traditionally given to an unmarried woman
41525, A boxplot is a standardized way of displaying the distribution of data based on a five-number summary: minimum, first quartile, median, third quartile, and maximum
40170, The Core Data Science team at Facebook recently published a new procedure for forecasting time series data called
16660, Analyzing Features
14826, Ticket
443, Lets check for any missing values
27517, Modelling
3160, Copy NeighborhoodBin into a temporary DataFrame because we want to use the unscaled version later on to one hot encode it
26450, A negative value means that our model might work better if we do not consider the respective feature
2007, Looks good
8100, SibSp vs Survival
28212, Calculate class weights
13124, features analysis
975, let s create a Light GBM Model
16004, Floors
36589, Use all training data learning rate
42072, Each of the models are tuned using Random Search
41442, Below are the 10 tokens with the lowest tf-idf scores, which are unsurprisingly very generic words that we could not use to distinguish one description from another
17999, Processing the training and test set seperately
7581, Scatterplots SalePrice vs Area features
29835, Load order data
3443, Remember that ultimately we d like to use the Title information to impute missing values of Age
30964, Learning Rate Domain
12789, We extract the relevant feature from the test data as we have done before
20350, I pick 0
23527, Load the Multilingual Encoder module
17449, i like to split the data in a training and a test dataset just to be sure my AIs work
24666, Write prediction
12979, Survival probability of small families is almost three times higher than big families
40775, Run the training and prediction code
30425, Mixup 2
41322, The general metric for MNIST is a simple accuracy score
13283, In machine learning, Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes' theorem with strong (naive) independence assumptions between the features. Naive Bayes classifiers are highly scalable, requiring a number of parameters linear in the number of variables (features) in a learning problem. (Reference: Wikipedia)
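A minimal usage sketch with scikit-learn's Gaussian variant (the data splits are assumed):

```python
from sklearn.naive_bayes import GaussianNB

nb = GaussianNB()
nb.fit(X_train, y_train)
print(nb.score(X_val, y_val))  # mean accuracy on held-out data
```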
41063, Basline Model Multilayer Perceptron
34484, There is a fairly even distribution of each digit in our dataset, which is actually good, since there will be no bias toward a particular digit when our CNN is trained
11875, Dealing with Skewed Data
15001, We have several object columns that we need to deal with: Name, Sex, Ticket, Cabin, and Embarked
14813, Pclass Survived
15238, Missing values
17802, And let s plot the Fare clusters grouped by the survived and not survived
43150, Preparing Files to be given for training
20446, Go to top font
38427, Data augmentation
22096, Load the Model with the Lowest Validation Loss
7518, Permuation importance of features
41949, Visualization Code
17359, Model Evaluation and Comparison
1734, Gender vs Embarked
26080, You can use GPU to accelerate model training
38719, Here is where we finally train the GAN
1428, Logistic Regression
20122, Feature Importance
27250, Prepare the test set with the date information required for prediction
8932, Remaining Features
9182, YearRemodAdd
6272, Embarked
36820, Preparing the Data
2805, get num feats Returns the numerical features in a data set
9162, GrLivArea
17547, Get the data for passengers with Pclass 3, 0 Parch, and 0 SibSp, similar to the passenger whose Fare value is null
6829, Numerical Features
4881, You can skip arguments other than x cmap is styling the heatmap
40931, Callbacks Functions
21937, Python
540, New Feature Title
42722, At first find the int features which have high correlation with target
15869, Test model
8843, For columns with no ordinal relationship we ll do some special processing later
26558, Evaluation
29044, Read image
8778, Embarked And Sex
13721, NOTE: Found non-matching titles like Don, Rev, Mme, Ms, Major, Lady, Sir, Mlle, Col, Capt, Countess, Jonkheer
17358, The next model Random Forests is one of the most popular
28995, Highly Correlated variables
5142, Number of labels cardinality
27093, To easily test the blended model, I first saved all the already-fitted models so I could retrieve them without running the whole process again, and also to save the modifications made to the models
14885, Numeric Variables
32877, XGBoost
15015, Passengers from Cherbourg have the higher probability of surviving
3214, Checking the learning process
18207, Calculate OOF AUC font
27329, Make a submission file
10592, Gradient Boosting with params
663, Naive Bayes
40945, now let us prepare five learning models as our first level classification
32356, Predictions
9678, L1 and L2 regularization
18192, expanding the aggregate
8314, Imputing the missing categorical value with the most frequent value
13577, Seting Cabin into Groups
20083, Worst Sales Shop
21811, Interest Level
25204, Submission
3593, Ridge
41023, We submit the predictions to the leaderboard; at the time I first tried this, the old leaderboard with only 50% of the test data was still in use
28155, Load Dataset
22791, Import Libraries
4386, GarageArea feature look uniform distribution and linearly correaltion with target SalesPrice
27303, Now without the most popular product ind cco fin ult1
40309, LSTM models
3667, Splitting the data into categorial and numerical features
9688, Coorelation matrix and removing multicollinearity
28158, Train Model
26240, Pseudo Labels for Model 2
18326, Using best lgbm model
41263, The administrative regions and municipalities are distributed as follows
32095, How to stack two arrays vertically
24719, Show Model Accuracy as function of num PCA components
4388, Most of the homes don't have a pool
39179, apply it to some predictions
39193, Dropping columns that we will not use
6018, The result is indeed overfit, but remember this is only a base model; besides, the difference is only 1-2 points of accuracy between the validation and test data
1080, Modeling
17888, Getting title from name
18194, Similarly there are products with 0 sales and only returns
1894, SVC Model
2762, Usually categorical variables are imputed with the mode, but that won't make sense in all cases; to make the imputation localized, we impute the data based on Neighborhood
39738, Ticket
12215, The grid search looks like this
39877, stripplot: easy to understand the overall appearance when there is not much data in the dataset
22464, Waffle chart
5115, Missing Value Counts
13978, Preprocess Name
41339, Categorical Features
10628, Split Train and test Data
3442, It s reasonable to put both of these in the Mrs category
29715, Caution Correlation matrix won t help us detect non linear or multi variate feature relations though
16575, Creating New Features
22962, Examples of hair augmentation with TensorFlow
21353, Specify Model
23318, Add previous item sales as feature Lag feature
39228, Lets remove correlated features from Santander database
2455, We then use the transform function to reduce the training and testing sets
29906, Convolutional Neural Network
33250, Categorical Features
24118, Final prediction
21792, Logistic Regression
16922, Random Forest Best Performing on LB
5895, Label Encoding these
10335, Imputation of Missing Values
14672, Hyperparameter Tuning
15299, Support Vector Machine Model
19661, DFS with Selected Aggregation Primitives
20605, Age
10127, Building Training and Validating our Models
22355, Using xgboost XGBRegressor to train the data and predict loss values on the test subset
14311, Writing the Prediction
6151, CatBoost
10785, I want to set index as PassengerId
26905, Create Submission File for approach 10
33693, Moving Average
14127, Here, in both the training and test sets, the average fare closest to 80 is for Embarked value C
26511, When all operations for every variable are defined in the TensorFlow graph, all computations will be performed outside the Python environment
40447, Summary
21897, Parameters
205, Libraries and data
23451, From this we can conclude that the registered users are mostly people commuting to their jobs, which explains the peaks at the start and end of office hours. Clearly these people would have a more definite and predictable schedule and are therefore more likely to be registered. In order to test this hypothesis we plot some more graphs
16950, And voila We trained the model its time to save the predictions
21262, Check Item Similarity
567, Ada Boost
41491, Use the new training data to fit the model predict and get the accuracy score
10729, Basic logistic regression
21328, Garage
20136, Values in grayscale range from 0 to 255
25672, use the mean absolute error function to calculate the mean absolute error corresponding to the predictions for the validation set
30663, Cleaning and lemmatization
3412, look at FamilySize in more detail
40958, We need to check the data types, as we can only feed ints and floats to our model
9287, New Features
35588, Create the final input array with 42000 images, each of size 75x75x3
37632, This early stopping implementation is based on the following three implementations that I found
1624, Linear Regression without regularization
25882, Histogram Plots of number of words per each class 0 or 1
13506, Train Score
37217, We can replace Quorans with Quora contributors
12099, Replace missing NA values with the most frequent value in the column, but only for columns with 2 or fewer NA values, where it does not make sense to invest heavily in analysis
4421, Make final prediction as combination of Ridge and Random Forest
11171, We use the scipy function boxcox1p, which computes the Box-Cox transformation of 1+x
10935, Dropping the columns with highest percentage of missing values
30138, TFBertModel
32051, We are all set
10307, Clean the data
35632, Our digits best friends aka Nearest Neighbors
40882, Now we have OOF predictions from the base (level 0) models, and we can build the level 1 meta model. We have 5 base models, so we expect 5 columns in S_train and S_test. S_train will be the input features to train our meta learner, and after training, predictions will be made on S_test; that prediction on S_test is actually the prediction for our test set X_test. Before we train our meta learner, we can investigate S_train and S_test
17528, Extract titles from the Name property by using regular expression
6054, Basement
31648, MODEL 4 GRU
17671, Analysis
35378, Modelling
38552, Checking the bad word features
28272, Visualizing a digit from the training data as a 28 X 28 image
23480, Checking your predictions
12378, Bar Plot of all categorical data fields
43198, Create submit file
36974, Feature importance
2750, The two lines can be merged into a single line of code as
2385, Several ROC curves in a single plot new in sklearn
32635, Formatting for submission
24986, Removing numeric variables that are highly correlated
28095, Forming Model
10812, I am not sure
4417, Ridge Regression with Cross Validation
41042, In order for the notebook to run on Kaggle scripts we subsample the training data
22926, It looks like the bulk of titles are either Mr Mrs or Miss which are the standard titles and the rest are what I call special titles
33771, Normalization is performed on the Dataset to Scale the values within a Range
21759, Missing values in Antiguedad
9126, Alley
35415, Make Predictions with Validation data
9586, Scatter plot
4442, By simply adding a log transformation, my place in the competition jumped almost 2000 spots forward
19125, Feature interaction
10556, MDS Plot
37204, Averaged base models score
6407, many Null Values
16251, Training
19548, Bag of Words Countvectorizer Features
14742, There we have it
23652, Let's quickly do EDA with just one line of code; for that we have dabl, which tries to make supervised machine learning more accessible for beginners and reduce boilerplate for common tasks
8018, Survived
35187, Visibly two major clusters are there
15111, The Cabin feature itself as it stands now doesn t really provide all that useful information
15074, Name Title
36671, let s tokenize these messages
7139, Family Size
31077, Looking at the KDE plot for GarageYrBlt and its description, we find that the data in this column is not spread enough, so we can use the mean of this column to fill its missing values
17703, PREDICTION ON TEST DATASET
28593, FireplaceQu
8414, With our best predictor we can cut only two outliers; we use it and substitute all other bath features with an existence indicator
12909, Check the shape of the datasets
11141, For r-squared calculations we need to limit the comparison to the train data only, since the test data does not have a SalePrice to compare to
24299, Reshaping and Scaling data
13748, Create Feature Engineering
15739, We can create a new feature from NAME
43210, Using CNN
302, gotta encode all the object types
13158, Label Encoding AgeBin Class
7891, I explore the effects of the other featuers independently
12013, A decision tree is a simple tree-based model which divides the data into several decision boundaries based on a set of conditions and approximates the predictions on new data
4679, begin with the most correlated feature OverallQual
20680, Evaluate the Model
6755, Checking Skewness for feature 3SsnPorch
24147, Reduce the Problem Size
502, As indicated by the Kaggle data dictionary, both SibSp and Parch relate to traveling with family
38470, Download and Preprocess
2260, SibSp and Parch
19831, Gaussian Transformation
10767, LightGBM
24011, FNN
38472, Model Build Train Predict Submit
22765, look at the cleaned text once
36746, Concatenate daysBeforeEvent feature with our main dataframe dt
7286, GaussianNB Model
18768, This is a simple 3D CNN architecture for classification
38023, What are the insincere topics that the network strongly believes to be sincere
1050, Log transform skewed numeric features
42561, Sigmoid function in plain numpy
15235, Prediction
26988, Run model
7864, The first step is to detect which columns contain invalid values
1532, Looking at the Test Data
7676, Feature engineering
21536, Decision Tree
27977, violinplot
19911, Revenues data featuring
34176, Visualizing Layer 3
9260, Moving ahead to other variables
29904, Cross Validation
38987, Training Model
31713, Fatalities
22608, xgboost
25590, RandomForest
32542, check any missing value
6673, Random Forest Classification
30389, Train Test split
4693, Here skewness is our enemy, since we're going to work with linear models
14290, Understanding Feature Importance
6789, Cabin
17899, Lets look at the Feature Importance plot
7849, I recently had to deal with multilabel data and had the same problem; to overcome this I made a fairly simple function for chain classification, and you should also be able to create your own functions and classes
32787, Binary Encoding
13873, We fit and predict on the best SVC model that we derived based on the scores
12786, Compute Predictions
20099, Christmas flag
18975, Display more than one categorical variables distribution in a parallelized view
1864, XGBoost
3971, View Model Performance
23506, Add image augmentation
27598, Adding Hidden Layers
30971, since we have the best hyperparameters we can evaluate them on our test data
6338, We split them to
11787, Name Title
17800, And let s plot the Title grouped by the survived and not survived
36796, Even more you can do things like get the definition of a word
15577, Categorical features
26686, Term total credit annuity
24456, ABSTRACT from the Paper Edge Based Color Constancy
40976, Total Daily Revenue
15366, Dropping unnecessary columns from the dataset
37444, Tensorflow roberta model font
3999, We write a function that repeatedly takes pairs from the array with errors and at the end calculates the expectation
4233, Data Model Selection
15638, Select Columns of Interest
21149, We had some fun with LASSO penalty parameter adjustment
9352, Generate new input files for the training and test data with predicted age
11672, It looks like passengers that travelled in first class were more likely to survive
24843, Model Ensemble
18057, It s time for making our prediction
24140, Logistic Regression Model
28758, Residual Plots
13344, Creating and normalizing matrices for our model
40059, use the last dev dataset to yield some insights about predictions and weaknesses of our model
7039, Type of alley access
4285, Sale Price Over Time Period
15816, from this we can inferred that the survival rate decreases with the class
37407, Since the LightGBM model does not need missing values to be imputed we can directly fit on the training data
2893, Below we apply the Box-Cox transformation to each numerical feature; when we pass the feature values to the boxcox function, it returns the lambda value we use to transform each non-Gaussian distribution into a Gaussian one
29058, Template image
8224, As understood from the target column the box plot depicts some of the outliers
21627, Pandas datetime: lots of examples
30251, Convert the dataframes into XGB readable format
25814, let s add more penalty when our argmax prediction is far away from our target
32633, or to tune hyper parameters
30631, Relationship between wealth and survival rate
12903, We do a grid search to find the best parameters
296, Cabin Age and Embarked have missing values ranging from 0
9835, Random Forest
858, Of all passengers in df train how many survived how many died
36633, let s remove skewness using BoxCox transform
37521, Parch Survived
12783, Train Test split
19574, handle item price outliers
26281, Data preparation for Machine learning
36376, For the most part our labels look to be pretty evenly distributed
34960, Plots
9746, Fare
6394, Missing Data in percentage
2657, Convert all the data into numeric form
27156, Heating Type of heating
40395, Fold 2
7711, To reduce the skewness we ll take log of SalePrice
2746, If we find that most of the values of a column are missing, we might want to go ahead and drop the column altogether
36449, Model generation
3287, Combine these models for final prediction on test set
41004, confirm our classifier block matches Pytorch s resnet implementation
37645, manager id
14762, Logistic Regression
18843, LDA Implementation via Sklearn
9166, Garage Cars
17385, We can compare the performance of the main model lin_clf and the pseudo model as follows
23742, Random Forest Classifier
23532, Here I follow The idea is that some keywords with very high probability signal about disaster tweets
25451, CatdogNet 16
21094, Model performance
28227, Below we juxtapose the original input image with the corresponding generated image from the neural network
36217, There are 259 missing values for LotFrontage; we use SimpleImputer to fill them with the mean
16456, We take log transform
16459, People travelling alone are less likely to survive
1811, Creating New Features
2347, CatBoost
8318, Performing cross validation of different models on 5 folds of training data
12044, For example let s look at scatter plot of SalePrice and Lot Frontage
797, Gini Impurity
35380, Data Preparation
1846, Significance of Discrete Numeric Features for SalePrice
26683, split categorical discrete and numerical features
17750, Converting categorical variable labels
32038, Since our dataset is not imbalanced we can use ROC curve to find the optimal threshold we compute the area under the ROC curve ROC AUC to get an idea about the skill of the classifier In case of highly imbalanced datasets it is better to use precision recall curves
8883, The 4th feature which we will be adding is the Season feature
38640, Data Gathering
30575, To make sure the function worked as intended we should compare with the aggregated dataframe we constructed by hand
9090, This leads me to believe that I should have a column for houses that have 1945 Newer styles
37401, Admit and Correct Mistakes
42851, Data visualization
21064, Defining Model
40166, As mentioned before we have a strong positive correlation between the amount of Sales and Customers of a store
22637, Model 2 Mean Model
21530, look at the connections for the first 100 rows of positive responses
17637, Age grp Fare grp
35759, Base models
9220, Train the KNN
28741, Getting the best parameters
11135, lets use a log y scale
1, Based on the correlation heatmap in the EDA section, it's very clear that LotFrontage is correlated with LotArea
10235, check one more time for missing values
33893, agregating POS CASH balance features into previous application dataset
15044, Name
30922, Real values of testdata
18696, create a ClassificationInterpretation object
42005, Filtering by multiple conditions in a column
33267, See how our prediction looks
17344, KNN
3493, Model 4 Gradient Boosted Tree Classifier
38524, We shall take the maximum length to be 150, since we shall be concatenating the text and sentiment columns. There is a very helpful function called encode_plus provided in the Tokenizer class which can prove to be real handy; it can seamlessly perform the following operations
15921, Model Training and Selection
23948, Converting all the categorical columns into numerical
19165, SVD on tf-idf on unigrams for item description
36578, On the other hand the age distribution is similar for both genders
13128, Survival by Pclass
94, Age and Survived
16537, combine the Parch and SibSp feature to form a better Family size feature
5304, Partitioning a dataset in training and test sets
15565, It is also natural to assume that Bowen Miss
19703, take a look at the distribution of the target value
26998, Some characters are unknown
10220, As there is no direct relation between the Embarked and Survived variables, we can drop this from our feature list
1834, LotFrontage NaN Values
12084, Distribution study
14640, make sure there are no null values left
18997, Make a submission
19265, LSTM for Time Series Forecasting
3981, Get target
28626, ExterCond
24546, Number of products by seniority group
31214, Target Column
28211, Understanding useful metrics
6686, Zoomed Heat Map
14650, Missing Ratio
2645, There are 177 NaN values in the Age column and 686 in the Cabin column
24246, Gender Sex
4246, We can optimize Scikit Learn hyperparameters such as the C parameter of SVC and the max depth of the RandomForestClassifier in three steps
2198, Applying outlier handling to the Age, SibSp, Parch and Fare columns
19593, item and city code
6191, Models without Hyperparameter Tuning
41209, len(prediction) == len(GT)
8065, SalePrice per square foot
23910, Table for scores
12658, Validate with KFold
1709, There are a lot of missing values, and some of the columns like Xylene and PM10 have more than 50% of their values missing
36047, Italy
36046, Classmethod and staticmethod
16724, SibSp
30077, We only used a subset of the validation set during training to save time
26880, Include only numerical columns and impute columns with missing values
977, Learning Curve
17366, we use the classifer that did the best job to make predictions on our test set
35165, Plot the model s performance
19127, Statistics
9244, Grouping the data
6416, Lets handle Skewness before moving to Bi Variate Analysis
2679, ANOVA for Regression
14602, Logistic Regression
7568, describe for categorical features
31063, Domain
18694, That s a pretty good fit
15242, Fare
1351, We can now create FareBand
43319, We have to impute the missing values
38816, Inference
13289, We tune the hyperparameters of the LGBMClassifier model using HyperOpt and 10-fold cross-validation
11393, we have two missing values in Embarked and one in Fare
36732, Importing models and other required libraries
1613, One Hot Encoding the Categorical Features
40697, Random Forest Regressor gives the lowest RMSLE, hence we use it to make predictions on Kaggle
14359, Crosstab and FactorPlot Survived vs Pclass
7301, Observation
1739, Notice that for Parch, people travelling with 3, 4, 5 or 6 parents/children aboard are very few
42172, We can obtain the number of axes and dimensions of the tensor train images from our previous example as follows
1659, Another categorical variable it looks a bit complicated
2744, Just as isnull returns the number of null values notnull helps in finding the number of non null values
18356, regression tree
1329, Correlating numerical and ordinal features
8737, Living Area
4416, Splitting Data back to Train test
18305, item cnt day features
37478, TF IDF stands for term frequency times inverse document frequency
15340, Making and Printing our predictions
39108, Sex
37711, Data Augmentation: an important step
16908, fillna Age Child
15776, ML models
10689, First visualization
5118, Data Preprocessing
25757, Those start looking really similar to each other
19465, Imports and useful functions
15291, Creating categories based on the Title of the passangers
11073, Models
26021, There are some variables that I came across are categorical values
7372, I concatenate all 3 tables and for convenience reset the index using the parameter ignore_index=True
43168, PCA
39740, Well that doesn t look promising
1702, Finding reason for missing data using a Heatmap
4820, Normality Assumption: as already mentioned, we ought to make normality assumptions when dealing with regression
8233, I divided the dataset in a 70/30 ratio so that we can test our scenarios
24763, Kernel ridge regression
16635, Missing values
15655, MLP
1223, Encode categorial features can and should be replaced
40046, Now choose a model structure
42110, Creating Prediction
41911, Making sure fnames and labels are in order
40052, If you'd like to search for optimal min and max learning rates, just choose your values and set find_lr=True
33232, As the next step we pass an image to this model and identify the features
35764, Kernel Ridge Regression
39087, Searching for optimal threshold
16119, Decision Tree
2289, ROC AUC Curve
8621, GrLivArea vs SalePrice
514, Accuracy Comparison through Plots
25877, Word Frequency
18137, Loading data
16873, Sex vs Survival
37157, Predicting the activation layer feature maps using the img tensor below
28778, look at the distribution of Meta Features
24538, create income groups
26415, These distributions for the real fare per passenger now strongly correlate with Pclass and look more natural
35153, Plot the model s performance
2824, optimizing hyperparameters of a random forest with the grid search
2833, Model with plots and accuracy
19555, In order to avoid overfitting, we need to artificially expand our handwritten digit dataset
14554, Embarked
27967, Reducing memory size by 50%
20146, find out which features are highly correlated to sale price
25000, Renaming the categorical columns in the format feature name category name
10856, Extracting the column names of the numerical and categorical features separately and filling the rest of the missing values
37319, First select the first layer filters parameter
35137, Is there an increase in promo if it is a School Holiday
12756, Scale our data
32059, This returns an array containing the F values of the variables and the p values corresponding to each F value
27080, Thus we have converted our initial small sample of headlines into a list of predicted topic categories, where each category is characterised by its most frequent words. The relative magnitudes of each of these categories can then be easily visualised through the use of a bar chart
36681, Model Training
13557, Fare mean by Pclass
24372, Max floor
18413, Training Evaluation and Prediction
11940, We are trying to remove every Null value with the best possible alternative
7754, Log Transform
16519, The number of female survivors is much higher than that of male survivors
18516, Check for null and missing values
12055, Linear Regression
30533, Exploration of Bureau Balance Data
11280, Collinearity can happen in other places too
40139, Kick off the training for the model
109, family size
34263, Normalize
35091, True Positives TP
20110, Item count mean by month city for 1 2 3 6 12 lag
21026, Modelling
546, Standard Scaler
15200, Family Size
7374, Before I start extracting surname and name codes note that in the Kaggle dataset the title of Mrs
37356, PClass vs Survived
1276, Feature Embarked 3 2 4
12651, to tune all the hyper parameters
19621, CNN MODEL
14707, OBJECTIVE 2 MACHINE LEARNING
31689, Plotting actual class with images
1725, Age Distribution
17711, One more thing: people with a family size of more than 7 didn't survive
12527, get started
19639, For now just drop duplicates
17020, Create cabin type feature
22669, Most Common Bigrams
33744, A Typical CNN structure one CNN layer
21796, Random Forest
20556, Model compilation
32102, How to make a python function that handles scalars to work on numpy arrays
10596, LightGBM
31065, File Format
36063, Predict Monthly
11304, Ensembling
32952, plot the distribution of difference between public score and private score
32571, Domain
11322, Sex
40438, Creating Callbacks
5076, How about log transforming the skewed distribution of SalePrice Will we get a better score
39258, This category covers roughly between and of all realisations in the dataset
33284, Family Survival Detector
4546, Concatinating train and test data
9218, K Nearest Neighbors based Model and Prediction
30992, First we test the cross validation score using the best model hyperparameter values from random search
15177, Update dataframe with Age
31611, RANDOM FOREST ALGORITHM
40007, Insights
410, Random Forest
29859, List out all the data elements containing the specified string
29921, We can do this for all of the hyperparameters
298, Getting the data
33157, Alternative explanation
37609, Filling Missing Values
3737, Will use a simple logistic regression that takes all the features in X and creates a regression line
9648, Looks good and well defined for different numbers of rooms except the one with 11 rooms
41784, CNN model With Batch normalization
4996, We note that the distribution is positively skewed to the right with a good number of outliers
2323, Generating a Confusion Matrix
35335, Compiling the Keras Model
6937, Visualize correlation coefficients to target
2301, Pandas Check for NA s in a column
8073, impute all incongruencies with the most likely value
34701, Averaging the extrapolated values
16976, Split them to train and test subsets
18077, The maximum area of bounding box
16245, Submitting the predictions
31999, Now that our model building is done, it might be a good idea to clean up some memory before we go to the next step
33285, Title Extractor
570, VotingClassifier
2112, It looks like we found something: the bigger the house, the more it costs, but the bigger the rooms, the less expensive the house
8433, Lot Frontage Check and Fill Nulls
42553, ROC
7830, Jointplots
33797, By itself the distribution of age does not tell us much other than that there are no outliers as all the ages are reasonable
33073, Submission
40646, Make Data Model Ready
23572, It looks like our model predicts reasonably well
14877, We can also use the x_bins argument to clean up this figure: grab the data and bin it by age, with a std attached
916, Deviate from the normal distribution
24475, One hot encode the label values digits from 0 9
8064, MSZoning
12885, Gender and Embarked
23368, Some of the Correctly Predicted Classes
12075, Avoiding Multicollinearity
14767, Our Logistic Regression is effective and easy to interpret but there are other ML techniques which could provide a more accurate prediction
26667, previous application
21816, Together
4185, Imputing missing values
18753, We can also rename a column as follows if we wish to
2470, Adding Family Size
30864, Reshape
233, Libraries and Data
38020, Finish
32397, Here we make predictions using the model we trained
9750, I decide to fill missing data in Age and Fare by median value
26045, We can also create a small function to calculate the number of trainable parameters in our model in case all of our parameters are trainable
22177, do some cross validation to check the scores
41268, Diverging Colormap
29383, DATA PREPROCESSING
15172, This is our accuracy score
23212, Evaluating Different Ensembles
560, Random Forest
2461, SelectPercentile
12437, First pipeline with xgboost
32348, Cleaning the text lowers the length of the texts and therefore allows us to build a model with fewer parameters and a shorter training time
13495, Embarked
691, Training the model and adjusting precision
20570, Family Size denotes the number of people in a passenger s family
4530, Fill the NaN values in the Age and Fare columns using Pandas fillna
8377, Drop em up
41418, Split train validation set
14834, Ensemble Modeling
24521, I check customer distribution by country
5252, Embedded Feature Selection Selecting Features From a Model
632, Family
33840, Bar Chart
7570, Barchart NaN in test and train
27830, Check nuls and missing values
11295, One Hot Encoding for categorical variables with no order
7810, Evaluate Bayesian Ridge Model
11500, Transforming some numerical variables to categorical
23746, Tuning HyperParameters
14918, Ticket
36059, Feature Hour
28674, Utilities
37906, Light GBM Regressor
37705, Looks like numbers but a little different than before because here we used much much more data
21671, Catboost
16328, SibSp Parch Feature
8667, we can get rid of empty values in cols
28377, Prepare for Submission
16931, About half of the passengers are in third class
16954, Age
26091, Adding Callbacks
19219, Naturally we have more data in the first data set, because it's possible a product had no changes
34821, A Deep Convolutional Neural Network is an artificial neural network with multiple convolutional layers
31903, Build the model
25189, Visualize how images are stored in matrix form
16706, try to create a fare band
5015, De duplication
2308, Pandas: how to add numeric columns together
34693, Mean item price over all shops and months and current deviation from its value
36669, Woah 910 characters let s use masking to find this message
8348, Survival by Sex and Age
1549, We then load the data which we have downloaded from the Kaggle website is a link to the data if you need it
23342, Modeling
14886, Data Visualization and Missing Values
19286, Write output to CSV file
12600, Random Forest using grid search
17349, Voting Hard Soft
18586, We're going to use a pre-trained model, that is, a model created by someone else to solve a different problem. Instead of building a model from scratch to solve a similar problem, we'll use a model trained on ImageNet (over a million images across many classes) as a starting point. The model is a Convolutional Neural Network (CNN), a type of neural network that builds state-of-the-art models for computer vision. We'll be learning all about CNNs during this course
17399, Title wise Survival probability
38590, Splitting the data into training and validation sets
18909, Cabin Feature
18580, Here we import the libraries we need
36436, Check out where my output files reside
15672, Gradient Boosting
43253, Importing files
7898, When dropping the columns in the train dataset, it is necessary to do the same in the test dataset
37377, Very well
33864, Backup data
8606, Lets move on to checking skewness found in the feature variables
30637, We should replace rare titles with the more common ones and create a category for high status titles
15905, EDA Relationships between features and survival
11704, Load Data
27320, Creating more custom images by using Data Augmentation
20084, Top Sales Item Category
1041, Encoding categorical features
19369, Frequency distribution of attributes
7277, Title Feature
40294, Start our CNN model
11895, Here I use the log function to transform the target SalePrice
25896, Topic Modelling
31429, Checking Formula
21857, Normalization
35168, Compile 10 times and get statistics
38202, Submitting the Test Predictions
33654, Correlation matrix
41238, Random Forest classifier for non text data
7357, EDA and preprocessing
15708, Correlation matrix
12391, Testing set
955, Creating NumPy arrays out of our train and test sets
4525, Trying XGBoost
5933, Finding skewed columns
41049, Days Prior Order
29336, With the PCA variables
2335, Train Test Split
41006, Creating Resnet34 model
3034, Submission
8373, KNeighbours
39309, TRAINING FOR PREDICTION
28486, We can reduce memory for columns only having type int and type float or columns having numeric values
31392, We can assign each Embarked value to a numerical value for training later
2021, Here we average ENet GBoost KRR and lasso
18723, let s export our model as a pickle
16534, let s plot and analyse how the Passenger Class affects survival chances of a person
3508, Construct the best model, fit it, make predictions on the training set, and produce a confusion matrix
15210, Feature selection and preprocessing
18423, save the oof predictions here as well
40257, Overall Quality
16024, Actually we can construct useful feature from Name and Ticket
24142, Multinomial Naive Bayes
36900, Investigating false predictions
990, gotta tune it
12556, Decision Tree
2089, Before getting insights from the data let s take the final step of the instructions that came with the data and have a general cleaning
1962, Now everything is almost ready; in one remaining step we convert the categorical features to numerical using dummy variables
6729, OverallQual Vs SalePrice
20907, Create checkpoint callback
8425, Identify the Most Common Electrical
39006, Function to generate random minibatches of X and Y in synch
5041, LotArea looks relevant and is one of the most skewed features
10782, Uniting the pipelines
9709, Feature selection using Lasso; this is done just for demonstration purposes
6014, The AutoML in Jcopml only requires separating the numeric and categorical data
29847, Drop the columns that have more than 20 missing values
31780, Convert categorical features
18595, Since we ve got a pretty good model at this point we might want to save it so we can load it again later without training it from scratch
4698, Among the missing features there is one that is difficult to manage
30579, The sum column records the counts and the mean column records the normalized count
14847, Name
13040, 2nd approach to treat the Age feature
10991, Below parameters are set after grid search
11765, Our score improved to
16939, The most important features are whether you are a man or not, how much you paid for your ticket, and your family size
8833, Model Creation
10098, Check the null values and fill them
11064, Filling missing values in both the test and the train data from those calculated from the training data
5011, Categorical Features
7055, Miscellaneous feature not covered in other categories
37366, we can say that fare is correlated with Passenger class
20649, Multi Channel n gram CNN Model
34386, Seasons
56, Maximum Voting ensemble and Submission
2003, That looks much better
32231, The special thing about TensorFlow is that you can define operations and computations as variables
1280, Features SibSp Parch 3 2 5
10602, Step 4 Define Model Parameters
23420, Before we begin with anything else let s check the class distribution
753, Obtain the Latent Representations
41518, Output data for Submission
37613, Creating features
2483, How many Survived
20599, Walk Through Each Feature One by One
8614, Even after doing this we still have many features filled with null values; luckily their missing percentage is low
25207, Analyze Overall Quality OverallQual highest correlation with SalePrice
5145, Separate Dataset into Train and Test
29050, A value below 1 makes the image darker, while values above 1 make the image brighter
18095, For EDA on image datasets I think one should at least examine the label distribution, the images before preprocessing, and the images after preprocessing
3677, Creating new features
27228, Extract the intermediate output from CNN
41917, Improving our model
24024, Some conclusions that we may make from the plot below
1693, Listing unique values in categorical columns UniqueValues
13900, Insights from graph
780, Models include
40710, Preparing Training and Validation data
9981, Plotting categorical values with respect to their counts
33024, CNN
4601, Pool
5880, ANOVA test
29744, Submission
8378, Add non AI knowledge
4870, Combining all area features to a single feature
41055, Cross Validation Scores for the different data sets using Gradient Boosting Regressor
11390, Cleaning and imputation
18924, Creating Submission File
36700, Model Summary
26023, Practically, the larger the area, the higher the price of the property. As in our data the garage- and area-related variables have a significant relationship with SalePrice (the target variable), it would be good to do some feature engineering on those attributes
10037, Look, there is 1 Captain, 1 Don, 2 Cols, 2 Majors and 7 Doctors in the data set
28241, Ensemble modeling is a process where multiple diverse models are created to predict an outcome either by using many different modeling algorithms or using different training data sets
37572, ID variable
19152, Check the distribution of TARGET variable
4099, Age is not an integer
10068, The number of survivors among small families is considerably higher compared to large families and singles
10626, Missed manipulation of the SibSp and Parch columns
3715, BsmtCond BsmtQual FirePlaceQu GarageType GarageCond GarageFinish GarageQual
33607, Since we are doing multi-class classification, we have to one-hot encode our labels to be able to pass them through our model
4526, Using XGBoost for stacking also
6079, Name
7285, Gradient Boosting
23260, Bivariate Analysis
5208, We can check the score of the model on the x test before predicting the output of the test set
34606, Fit the XGBoost model
26315, Regression Evaluation Metrics
35357, Creating Sample Submission File
38983, Build and evaluate Neural Network
36412, Separation of Continous numerical features and discrete numerical features
26216, Recall we are using our chosen image as an example. For convenience, I remind you of the chosen image's matrix and its bounding box coordinates below. But there is a caveat here: my bounding boxes array is of shape N x 5, where the last element is the label. When you want to use Albumentations to plot bounding boxes, it takes bboxes in the pascal_voc format, which is x_min, y_min, x_max, y_max; it also takes label_fields, the labels for each bounding box, so we still need to do some simple preprocessing below
20276, class weight 0
7547, KNN
24284, Producing the submission file for Kaggle
15261, Observation Female have higher chances of Survival
8550, Other Categorical Features
35908, Visualizing Test Set
40090, Regressors with Dimensionality Reduction
21634, See all the columns of a big df
15327, Lets convert our categorical data to numeric form
8011, Try a gradient boost with a weak random forest; we also try to find the number of estimators with early stopping
600, Interesting
1170, look at the basements
39099, Linsear Write Formula
27119, We only have half of Fireplace quality data
1237, Defining folds and score functions
27011, F1 Scores
29626, The data preprocessing steps are very exhaustive and consumed a lot of time
30844, Crime Distribution over the Month
11440, Ridge RMSE Min Alpha
19294, Model DenseNet 121
33860, Fitting a random model
19969, Interesting
23464, Splitting data into train and test sets
944, Model Data
98, This facet grid unveils a couple of interesting insights
8893, Lasso LassoCV
19001, Model fitting with HyperParameter optimisation
30101, Model Train
28877, Multi Dimensional Sliding Window
18205, Train Schedule
1341, create Age bands and determine correlations with Survived
27131, First step we separate discrete features
7698, REGULARIZATION RECAP
40739, Baseline Model
12335, Fence
23575, Define the model for hypertuning
11498, Check other features
13330, let s store in 2 new columns the information about babies
15434, Let's have a look at the probabilistic Gaussian Naive Bayes classifier's performance
41599, Train the model
7021, Rating of basement finished area
429, Alley data description says NA means no alley access
18082, let s plot the images with large areas covered by bounding boxes
24465, Original
34937, Check it
31844, We think that category and are same as
29187, Random Forest
5147, For numerical variables we are going to add an additional variable capturing the missing information and then replace the missing information in the original variable by the mode or most frequent value
17373, Embarked Q
8235, Basic Bivariate Visualization code
14403, Filling the missing Fare value in test set based on mean fare for that Pclass
5598, OverallQual causes different SalePrice even for the same GrLivArea; we have to pay strong attention to it
39097, Automated Readability Index
42881, Containment Zones in India
20106, Item count mean by month main item category for 1 lag
34856, Automatic feature selection
26477, Add ons Pytorch Implementation for Resnet Finetuning
9336, We now have a sparse matrix with dimensions of the number of rows times the number of unique neighborhoods, which is very close to what we obtain by using get_dummies, and we have a loss in interpretability given the fact that the matrix looks like this
38660, Count of Parents and Children
27319, Hyperparameter tuning using keras tuner
23519, Cut tails of longest tweets
9634, Dropping ID from the dataset
8344, We train a linear regressor for this new training matrix and predict our target. We use the Lasso GLM to avoid overfitting
18263, Apply model to test set and output predictions
41963, Submission
8814, DEALING WITH MISSING VALUES
14095, Decision Tree Output File
16679, Data Wrangling
40670, Tri gram Plots
889, Decision Tree Classifier
18302, One transaction with price 0
2434, for the Ridge regression we get a rmsle of about 0
7367, I also add the column Class to the DataFrame wiki1 and assign it to 1
33882, converting categorical features to numeric by frequencies
10024, The procedure for the training part may be described as follows
36287, PCA Principal Component Analysis
15184, Learning Curves
16786, Simple Pipeline do it in a different way
17284, You can slice the list of columns like a usual python list object
35945, XGBoost
33259, Show Image
40051, Doing the resize preprocessing in advance definitely speeds up the computation. If you are using more images than the original data, you should consider doing so as well
40318, Analyse Tweet Entities
35385, RANDOM FOREST
34408, Processing data for saving
11104, Identify Features Highly correlated with target
29528, PassiveAggressiveClassifier
7049, Exterior covering on house
4551, FireplaceQu data description says NA means no fireplace
5591, Checking the correlation between features
26888, Score for A4 17688
39783, Logerror vs nearest neighbors logerrors
31545, Since None is the most frequent value, we replace missing values with it
32828, Build a model-building function which takes the BERT layer as input and returns the model
10228, start with moving target and feature variables
41997, Sorting columns w ascending order
12339, All 157 NAs in GarageType are NA in GarageCondition GarageQuality and GarageFinish
14356, Distribution of Classes of Survived passengers
43014, RandomForest
32896, We have to do the same score calculation for words in Question2
32063, decompose the dataset using Factor Analysis
20748, As the skew value improves after regularization, we apply the log operation
36727, Train the network
9438, Correlation Heatmap
39052, Name
4714, let s handle the missing values
26868, One important thing to do here visualize some input images
42057, Making a function for drawing graphs
2314, manually use dataframe methods to split X and y
8678, Check for Missing Values
11650, Gradient Boost
27770, Improve the performance
2810, Bar Chart
17403, Model Comparison
4179, Both Age and Fare contain outliers
27215, Because of class imbalance it's better to use focal loss rather than normal binary cross-entropy; you can read more about it here
36300, Random Forest
34476, Neural Network
15967, Age Feature
45, Ticket column
6369, ENSEMBLE METHODS
6492, Transforming skewered data and dummifying categorical
11296, Modelling
20218, Model Architecture
14353, Number of Survived and NonSurvived with Gender
17695, FEATURE SELECTED
29767, We perform the same operation using the optimal model
29843, clean and standerize the numerical data
26435, This way we can easily extract the feature names after the encoding
33835, Mode: for categorical features we can choose to fill in the missing values with the most common value (the mode), as illustrated below
39244, Drop irrelevant categories
8888, One Hot Encoding
3911, LotFrontage we can fill in missing values by the median LotFrontage of the neighborhood
39032, Train
38491, Submission
9171, TotalBsmtSF
17719, To limit the number of categories in the Fam membs feature, it will be divided into 4 categories as follows
21077, Target variable
8025, Let's replace missing values with a value X, which means Unknown
5519, XGB Classifier
13166, Start by loading the required libraries that we use in this kernel
640, so we have 18 different titles but many of them only apply to a handful of people
26736, Plotting sales over the months
30644, Logistic Regression
3847, option4 replace age with title of name
40469, Output new files
2987, Decision Tree Regression
7555, Bagging And Pasting
117, As I promised before we are going to use Random forest regressor in this section to predict the missing age values
19722, Store CA 1
22363, In this case three duplicated values are in Public LB subset
39771, As I suspected questions are indeed sorted by ascending question id in our train dataset
2225, Building Characteristics
38556, Imputing missing values
31112, There are no obvious seasonal or periodic characteristics
26440, To get an overview of the model structure we can plot the specifications and the model architecture
21379, NB
31774, Prediction
37629, Here we display some images and their transformations
14413, Scaling the Data
8471, Feature Selection into the Pipeline
40702, Training and Evaluating Our Model
17252, Voting Hard Soft
33149, Predict
35657, Let's isolate both the numerical and categorical columns, since we will be applying different visualization techniques to them
1178, No incongruencies here either
4230, Data Load
25179, After we find TF IDF scores we convert each question to a weighted average of word2vec vectors by these scores
6265, This confirms that Southampton had the largest number of 3rd class passengers, but also the largest number of first class passengers
18295, Checking our synonyms
3798, For the categorical features where NA has no special meaning, fill the NA with the mode
37640, Adding variables from bedrooms
42251, Import machine learning modules
7699, define some helper functions which would be used repeatedly
31402, Train Model Submit
2181, Box Cox transformations aim to normalize variables
42096, Plotting labels and checking their frequency
38688, Sex
9743, Age
19457, The final layer of 10 neurons is fully connected to the previous 512-node layer
2211, Light GBM
6850, Ordinal Features
30339, This is a routine to evaluate the losses using different learning rates
19214, Sorting
27052, Preprocessing DICOM files
10783, Gridsearch and Crossvalidation
8793, To explore the data we start with
20043, Load Data
1088, Statistical summaries and visualisations
3857, Drop and Reordered columns
21407, Training
16902, New Feature NameLenBin
27318, Creating a simple model without any hyperparameter tuning
28107, Find correlations and combine the highly correlated columns
11850, Final Filling of Missing Data tying up loose ends
18249, Dates Transactions
20834, It is common when working with time series data to extract data that explains relationships across rows as opposed to columns, as in the lag-feature sketch below
41273, Text Annotate Patch
16026, Around 74% of females survived, but only 18% of males
22438, Counts Plot
24846, Still need to complete part of the data for dates past the 25th of March as the enriched dataset didn t go that far
34282, Cnt Along Time
13059, Importing tools data and cleaning up the data
41962, Predictions
14442, Create New Column AgeBand
25004, Item Item Category Data
24150, we calculate the mean hour for each product
12148, Creating the submission file 3
22103, Visualize some Test Images and their Predicted Labels
12107, XGBoost
20222, Submission
14909, People from Pclass 1 and Sex female are more likely to have embarked at S or C
2122, The parameter space I want to explore searching for the best configuration is
38416, Don t forget to change the model prediction from a class probability array to class value
12631, Feature Distributions
37667, Callbacks
4973, IsChild
42029, For example PC17599 PC and 113803 113083
18067, Saving the Models
11103, View sample data
11140, Then add this column from the previous investigation to the dataset: y = curve_fit_gla[2] + curve_fit_gla[1] * x_data + curve_fit_gla[0] * x_data**2
19546, Stemming and Lemmatization
20715, LotShape column
35461, Visualize the skin cancer at Upper extremity
29903, Compile network
9353, Wrangle prepare cleanse the titanic data manually
15136, Engineering
22466, Tree map
10319, Make individual model predictions
33497, Italy
30603, We can remove these columns from both the training and the testing datasets
3832, box whisker plot using plot method in pandas
4403, Building the Random Forest Model
41207, len(prediction) == len(GT)
26337, The best score is when we use MNB on the bag of words vectors
21364, Using polarity for pretraining TRANSFER LEARNING
39073, Feature Importance
40689, Now, most importantly, split the date and time, as the time of day is expected to affect the number of bikes; e.g. at office hours like early morning or evening, one would expect a greater demand for rental bikes
4436, Prediction
27183, Creat Submission file
28707, Submission
21640, 3 ways of renaming column names
38273, Here we are actually padding; that means if a sentence is not long enough, we just fill it with zeros
36409, look at filling in some of the tax related variables
31528, Exploratory Data Analysis EDA
21786, Var columns
6531, Visualize model scores
33466, StateHoliday
31640, PdDistrict
8345, we submit our prediction
38563, K Nearest Neighbors Regressor
450, Getting dummy categorical features
3672, Visually comparing data to sale prices
17687, WE WILL USE THE MEDIAN VALUE
12944, Categorical Encoding
31674, Understanding how to process data for our aim
3869, what happened
38301, Lets now prepare and clean our training dataset
7831, Rolling window estimates
29361, Fare Feature
35420, Plotting an image from each class to get insight on image data
34865, I use the following function to track our training vocabulary; it goes through all our text and counts the occurrences of the contained words
38092, FITTING THE MODEL
27575, ps ind 03 x ps ind 161718
3881, We are at the final step of applying Regressors to predict the SalePrice with our pipeline
21647, Read and write to a compressed file csv zip
33661, Put submission to csv file
7748, There are some categorical features whose categories are actually ordinal, so it can be a good idea to convert them to numerical features
33506, Albania
15852, Ticket Family Survival Rate
32483, Re run model using all training data for optimal mixing parameter
26220, Horizontal Flip
18264, Predictions class distribution
24057, We ll build a CatBoost model and find best features with SHAP Values
22066, Which words in our data are NOT in a vocab
3760, Linear Regression
2465, Forward Selection
40281, define helper function for image augmentation using albumentations library
19011, Experiment 1
26877, Create Submission File for approach 1
39998, At this stage I compare the two data sets according to age
15999, Creating the Submission File
41657, Bivariate analysis of numerical features against target
7747, Id feature can be deleted as it doesn t give us any information and it s not needed
33243, The training cycle is repeated: lr_find, freeze all except the last layer
38621, Splitting Up Train and Validation Set
8886, Skewness Check in all the columns
18501, Simple enough
31352, The report can also be exported into an interactive HTML file with the following code
10855, Filling certain categorical columns about which we have an intuition using the forward fill method
32366, A General Look With Sunburst Chart
18035, Men and Boy
16827, Feature Selection Using RFE
29855, We need to perform tokenization, the process of segmenting text into sentences or words. In the process we throw away punctuation and extra symbols too. The benefit of tokenization is that it gets the text into a format that is easier to convert to raw numbers, which can actually be used for processing
26467, In this section we try to implement a CNN architecture from scratch using the Keras library
33202, Evaluate Model on Test Data
2049, LogisiticRegression Model
27016, Training
6297, Model selection
18490, Since the competition variables CompetitionOpenSinceYear and CompetitionOpenSinceMonth have the same underlying meaning, merging them into one variable that we call CompetitionOpenSince makes it easier for the algorithm to understand the pattern and creates fewer branches and thus less complex trees; a sketch follows below
55, AdaBoost
41301, How many categorical predictors are there
14154, let s take a look at the Cabin and Ticket features
10640, Rarely Occurring Titles and their Respective Gender
12083, Correlation study
43378, The label holds the true digit and the other columns hold all 784 pixels of a 28 x 28 image
9213, Family Survival
19145, Model 4 with AdamOptimizer
11907, Embarked
28482, The standard deviation and the maximum value have dropped by quite a large margin
14539, We are given the train and test data
24351, Set the optimizer and annealer
15668, Logistic Regression
2766, Feature selection
11542, As a first step we start by importing all the libraries that we shall use in our subsequent analysis
1931, Ground Living Area w
29602, Training The Model
12187, Submission file
21121, Define variables categories
18188, Let's clean up product names a bit; we have 1000 unique names once we cleaned the weights, but there is much more work to be done
7373, Preparing Kaggle dataset for merge
3240, Importing and Understanding the Data Set
12401, Converting ordered categorical fields to numbers
6909, And the next 3 plots are just cool kdeplots for Gender and Survival Rate
31097, Data cleaning for test data done
17273, Gaussian Naive Bayes
17890, Lets visualize the new variable Title
26263, Processing of the test file
6897, Most of the passengers are between 15 and 40 years old and many infants had survived
40725, Further Training
795, You can easily compare features and their relationship with the class by grouping them and calculating some basic statistics for each group
15152, Age vs Survival
9252, A Few ML Models
31046, Caps Words
11677, It looks like those who survived either paid quite a bit for their ticket or they were young
21061, ROC Curve for healthy vs sick
34611, PCA
12669, Visualization
27553, Display interactive filter based on click over dependent chart
11130, It's safe to drop the Id column now, as we are finished deleting points; otherwise our Train ID list would be smaller than y_train
28516, GarageCars
22370, Feature Selection
40825, t SNE using Scikit Learn
35799, Find best cutoff and adjustment at low end
19826, Rare Label encoding
6010, Feed it into the Preprocessor pipeline
24499, Generating the model itself using the defined class
30094, To enable CV
2748, Using mode returns a series so to avoid errors we should use
15524, Combine train and test dataset
32112, How to import a dataset with numbers and texts keeping the text intact in python numpy
32320, Check Accuracy
28336, Analysis Based on FAMILY STATUS
41934, Generator Network
25287, 7 accuracy not bad at all
28811, Create Our Model
20036, The next step is to split the data into training and testing parts since we would like to test our accuracy of our model at the end
27173, Splitting the data in train and test datasets for model prediction
13468, Exploration of Embarked Port
4301, Inference
3484, Model 3 Random Forest
25846, Hashtags Cloud
20383, explore corpus and discover the difference between raw and clean text data
3371, In order to actually move my local files to GCS I ve copied over a few helper functions from another helpful tutorial on moving Kaggle data to GCS
1180, Looking at the min and max of each variable there are some errors in the data
6696, Lets take some examples
16105, we map Age according to AgeBand
34919, Count exclamation marks in tweets
11995, make predictions on test data for linear reg
27452, Remove Punctuation
4158, Count and Frequency Encoding
18351, ENCODING
40730, Lets understand the intermediate layers of the model
8545, LotFrontage
37060, The Ticket variable contains alphanumeric values, purely numeric values, and character-type values
29915, The random search slope is basically zero
15203, Neural Network
16406, Male is Less likely to survive than female
33697, Monthly Series
18892, Modeling
32225, First we need to define the label and feature
4865, SalePrice is skewed and it needs to be normally distributed
3806, Gradient Boost Regressor
6398, Data is skewed and dense at the bottom; checking for skewness and kurtosis
8546, Garage Features
38956, And then unpack the bbox coordinates into separate columns x, y, w, h
11017, We are going to write code which iterates over Sex and Pclass and fills the null values
12460, Skewness in other variables
5549, Build Keras Model
27979, Binary Classification
24587, Predict
25225, Modeling
22707, Prediction on Test Set
34966, Data Wrangling Feature Engineering
8945, Fixing Masonry
8353, Survival rate regarding the family members
276, Age
10848, Box Cox Transformation on Skewed Features
30576, Correlation Function
14978, Filling Cabin missing values of training set
28708, Randomize the samples from TRAIN_DIR and TEST_DIR
7073, It s a passenger from Pclass 3 so we ll fill the missing value with the median fare of Pclass 3
23962, Correlation Analysis
25765, Pclass
10482, Imports
26381, start by defining some parameters
39752, And again checking poly scaled features making sure to scale AFTER you add polynomial features
35882, Reshaping
28539, It is time for modeling
30424, Sample sentiment prediction
41051, Customer Order Count
35719, looks like we have 2 MasVnrArea and BsmtFinSF1 so use those for this test
13968, SibSp Siblings Spouse
17881, Splitting the train data
8082, Defining model scoring function
16383, Combining Pclass and Embarked
41732, Setting up validation and training dataset
9418, The model Cross validation
8918, The function plot missing identifies the percentage of missing rows for every feature in our dataset
2927, ExtraTrees
41930, let s create a dataloader to load the images in batches
23842, Taking X10 and X29
14463, Back to Evaluate the Model
7980, Its hard to select from them by eye
36491, Attention model in Keras
6802, Imports and useful functions
13342, Another piece of information is the ticket number
1726, Correlation Heatmap
7937, XGBoost Regressor
6950, Embarked
1893, Validation Data Set
32380, Correlations Between Features
26794, Visualize some examples from the dataset
13029, Fare
42979, now construct a few features like
16959, We have a clear fare distribution between 0 and 150
16087, Below we just find out how many males and females are there in each Pclass
11196, xgb reference
41581, Using a Learning Rate Annealer the Summary
803, Imports
38022, First let us define these functions which do the jobs
34342, Feature Selection and Engineering
3179, Sklearn Models
1388, Validation
6098, Transforming and Engineering Features
6753, Checking Skewness for feature LotArea
42787, Target Distribution
18607, ConvLearner
725, As aforementioned if we want to look at more traditional regression techniques we need to address the skewness that exists in our data
29628, Dataset after transformation
38017, Another handy feature is analyzing individual predictions
15258, One Record is missing for Fare Feature in test dataset
1629, Importing my functions
18222, Evaluation Functions
15713, Create one feature for Family Members on board of Titanic
9172, This relationship looks almost linear
15797, There is a large fare difference between the 1st and 2nd embarkation ports for 1st and 3rd Pclass, for both males and females, while the 2nd-class fare is similar across all embarkation ports
15209, calculate average survivability for each remaining categorical field
11623, Split train and test dataset
6383, Histograms are used to visualize interval and ratio data
6520, Numeric Variables
26915, It is often considered that if there is more than 15% of missing data then the feature should be deleted
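Applied as a rule of thumb, that threshold looks like this (a sketch assuming a DataFrame `df`):

```python
# Share of missing rows per feature
missing_share = df.isnull().mean()
# Drop every feature with more than 15% missing values
df = df.drop(columns=missing_share[missing_share > 0.15].index)
```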
28468, Remaining columns
12499, Train Untuned XGBoost
35921, View an example of features
35675, Based on the current features, the first additional feature we can add would be TotalLot, which sums up both LotFrontage and LotArea to identify the total area of land available as lot. We can also calculate the total surface area of the house, TotalSF, by adding the areas of the basement and the floors. TotalBath can be used to tell us in total how many bathrooms there are in the house. We can also add all the different types of porches around the house and generalise them into a total porch area, TotalPorch
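A sketch of those four aggregates using the usual Ames column names; exactly which components go into TotalSF and TotalPorch is an assumption on my part:

```python
df["TotalLot"] = df["LotFrontage"] + df["LotArea"]
df["TotalSF"] = df["TotalBsmtSF"] + df["1stFlrSF"] + df["2ndFlrSF"]
df["TotalBath"] = (df["FullBath"] + 0.5 * df["HalfBath"]
                   + df["BsmtFullBath"] + 0.5 * df["BsmtHalfBath"])
df["TotalPorch"] = (df["OpenPorchSF"] + df["EnclosedPorch"]
                    + df["3SsnPorch"] + df["ScreenPorch"] + df["WoodDeckSF"])
```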
7758, Another pipeline for categorical features, which first uses an imputer to replace missing values with the most frequent value and then applies a label binarizer
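A sketch of such a pipeline; OneHotEncoder stands in for the label binarizer here, since it is the idiomatic binarizing step inside modern sklearn pipelines:

```python
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

cat_pipeline = Pipeline([
    ("imputer", SimpleImputer(strategy="most_frequent")),
    ("binarizer", OneHotEncoder(handle_unknown="ignore")),
])
```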
976, tune it
4912, Analyzing the Categorical Predictors
38982, Keep only relevant numerical features and normalize
19158, Undersampling
2420, It appears that the target SalePrice is very skewed, and a transformation like a logarithm would make it more normally distributed
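The usual fix, sketched here with log1p and assuming a DataFrame `train` with a SalePrice column:

```python
import numpy as np
from scipy.stats import skew

print("skew before:", skew(train["SalePrice"]))
train["SalePrice"] = np.log1p(train["SalePrice"])  # log(1 + x)
print("skew after:", skew(train["SalePrice"]))
```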
19914, Feature Importance
32120, How to find the position of missing values in numpy array
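For example, np.isnan combined with np.where returns the row and column indices of every missing entry:

```python
import numpy as np

a = np.array([[1.0, np.nan, 3.0],
              [4.0, 5.0, np.nan]])
rows, cols = np.where(np.isnan(a))  # positions (0, 1) and (1, 2)
print(rows, cols)
```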
29531, displaying the matrix of a single image defines the first instance of the data
15546, dropping Cabin column
2768, Interesting insights
10254, Go to Contents Menu
2579, Libraries and Data
6606, XGBoost
26018, Dealing with outliers requires knowledge about the outlier the dataset and possibly domain knowledge
535, Feature Engineering
24825, change grey value from int to float
39145, Data Wrangling
14475, 18% of males survived and 74% of females survived
27554, Display interactive highlighter of time series
36835, Define calculations we need from the Neural Network
17830, also plot the classification report for the validation set
12421, After training our model we have to think about our input
42990, Word2Vec
10780, Object Columns
24101, Finding the columns that contain NaN values
37117, We get a correlation coefficient of 0
8138, Imputing Real Null Values
203, RANSAC Regressor
5962, Age and Survived
5337, Display series with high, low, open and close points with custom text
1116, 20% of entries for passenger age are missing
14409, We don't need this feature to predict the survival of a passenger
41812, Inference
6912, Check for missing data
15365, Handling Embarked Feature
887, Gaussian Naive Bayes
41077, Encoding the data and train test split
1932, SalePrice per square foot
41279, MEME xkcd theme
24996, Generating final training validation and test sets
9277, Lasso Regression Model
14513, Observations
37181, Submission
28427, Shop info
15752, Clean Data
4093, Last but not least, dummy variables
39691, Spelling Correction
7464, Making predictions and measuring accuracy
9032, In one hot encoding we made the values of categorical nominal features their own columns
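For example, with pd.get_dummies each category value becomes its own 0/1 indicator column:

```python
import pandas as pd

df = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"]})
print(pd.get_dummies(df, columns=["Embarked"]))
# -> Embarked_C, Embarked_Q, Embarked_S indicator columns
```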
8074, Zoning is also interesting
36752, Submission File Creation
12784, Training the model
14226, Binning
26341, Modelling
3489, Get the in sample and estimated out of sample accuracies
11041, Create the pipeline
29881, With KFold
11826, look at the data after dropping variables
43358, X and y
38580, There are more missing values in keyword and location, so we can drop them
13987, Age is not correlated with Sex and Fare
4990, Learning Logistic Regression
41175, Update: I changed the max pooling layer and replaced the dropouts with multi-sample dropouts
24523, How about employee index
32968, Embarked
8375, XGBoost
1761, Passenger ID
4316, Fare
34608, using the XGB models
38203, Prepare Data
6386, calculate and plot Pearson correlation coefficient using numpy for columns in the dataset and plot them using seaborn heatmap
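A sketch of that computation, assuming a DataFrame `df`; note that np.corrcoef treats rows as variables, hence the transpose:

```python
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

num = df.select_dtypes(include=np.number)
corr = np.corrcoef(num.values.T)  # Pearson correlation matrix
sns.heatmap(corr, xticklabels=num.columns, yticklabels=num.columns,
            cmap="coolwarm", center=0)
plt.show()
```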
19098, Building Machine Learning Models Train Data
13126, Visual of empty values
13702, We now use one hot encoding on the deck column
32128, How to create a new column from existing columns of a numpy array
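For example, computing a derived column and stacking it onto the array with np.hstack:

```python
import numpy as np

iris = np.array([[5.1, 3.5, 1.4, 0.2],
                 [4.9, 3.0, 1.4, 0.2]])
# New column = product of the last two existing columns
new_col = (iris[:, 2] * iris[:, 3]).reshape(-1, 1)
out = np.hstack([iris, new_col])
print(out.shape)  # (2, 5)
```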
19545, Removal of Stopwords
9425, Scatter Plot
28783, Conclusion Of EDA
6290, Support Vector Machines
27036, The number of unique patients is less than the total number of patients
13529, Encoding categorical features
34531, To test whether this function works as intended we can compare the most recent variable of CREDIT TYPE ordered by two different dates
36853, plot history loss and acc
10778, good enough for me; I'll wrap this up and make the predictions out of it
6562, Parch
43117, handle missing values in X test
23585, More insights
32533, Train the Model
27400, Load images into the data generator
6783, Feature Sex
7030, Garage Quality
37900, XGBoost Regressor
3783, XGBoost
40316, Get only those lemmas with 2 merged candidates
5719, Getting dummy categorical features
11387, The passenger class is somewhat evenly distributed in terms of survivors, but of those that perished, the odds were they were in 3rd class
31407, Let's take a look at our data; notice that the grey image is now turned to RGB with size
4532, Titanic (1997 film): Historical characters
34628, Wikipedia Definition
14101, Histogram
16591, Checking best features
9504, Libraries used for the machine learning algorithms
32245, let's split out training and testing data
20351, we unfreeze the pretrained layers and look for a learning rate slice to apply. As in the lesson: call lr_find again, look to just before the loss shoots up and go back from there, and that's what I do for the first half of my slice; for the second half of my slice I normally use whatever learning rate I used for the first part, divided by a small factor
6558, Age
1328, Analyze by visualizing data
31122, normalize and impute missing values
37992, Predict on test
4123, Machine Learning
8658, Instructions
14985, Machine Learning with different algorithms
13228, Deep Learning Model
33039, That didn't change it much
15551, Training data
10280, Random forest full dataset
6235, Stacking of base models
13523, If we want to, it's also possible to check the feature importances of the best model, in case they're easy to understand and explain
7721, Checking for NaN values in Data
32211, Add lag values for item cnt month for month shop item
42307, Probability Dataframe
25213, find the percentage of missing values for the features
38720, we finished training the GAN
28997, Dealing with Outliers
248, Library and Data
16512, Logistic Regression
15879, Leaf size
30662, For somebody storms and open wounds are the real disasters
7866, To start the exploration it is possible to group passengers by Sex and Class; these groups could give insights into whether a higher class has a better chance of survival, or whether women have a better chance than men, for example
14537, Random Forest Classifier
15882, Output Final Predictions
4769, Replacing the fare column in the test dataset with the median value of the column
4109, Transform the target variable if skewness is greater than 1 or smaller than -1
14116, As expected the survival rates are higher if they are with family
37807, Polynomial Regression
18453, Natural Language Processing
42257, Make predictions for submission
13410, Precision
16271, Execute
35576, Looking at the same data a different way we can plot a rolling 7 day total demand count by store
7594, Rotating the 3d view reveals that
32200, Clean item names
5537, Final Missing Data
9385, BsmtFinSF1 Type 1 finished square feet
19346, Prediction
41003, Classifier block
31708, Stacking Blending
11485, Electrical
17486, Stacking Ensemble
3194, Full imputation
13182, First let's take a look at the Age distribution
38951, For this particular problem we can just use one time period of 48 days
42033, Narrowing down filtering small categories using threshold
27467, Data Cleaning
18964, Display the variability of the data; used on graphs to indicate the error
4311, Survived or died
20444, previous applications
32352, Loading pre trained word vectors
459, Mean of all models' predictions
24271, k Nearest Neighbors algorithm k NN
34089, Apparently there are some price outliers we need to remove for visualization
28914, We'll back this up as well
7458, Let's take a look at the Age column
19932, SibSp
21738, When the model is finished process the test data and make your submission
10900, Imports
31056, Latitude Longitude
38828, Define model and optimizer
29992, Splitting the data
4511, Considering the number of outliers, and also the missing values, we are discarding these variables
4237, We can now evaluate how our model performed using Random Search
21778, Missing data
39774, Splitting into train and test with sklearn model selection train test split
444, there are many features that are numerical but actually categorical
40648, encoding numerical features
14271, Cross Validation
34618, Model parameters initialisation
39689, Remove punctuations
31741, Hue Saturation Brightness
1242, Submission
26044, Defining the Model
12028, one more thing we may experiment with: as we know XGBoost, LGBM, AdaBoost and Gradient Boosting fit well, let's try to combine only these four using stacking
42461, NOTE I take a sample of the train data to speed up the process
8721, Summary
5129, Model Ensembling
27650, Set data features and labels
39436, Remove Object-type features from the train and test datasets
12598, Data
38180, Finalize Model for Deployment
4267, SaleType
28826, Extract the dates of state and school holidays and format them in a DataFrame adhering to the need of the prophet
34912, Count users by
29950, The feature importances look to be relatively stable across hyperparameter values
23333, Ensemble Performance
24865, Before moving on, let's split train df into X and y to prepare the data for training
6069, MiscVal I would say drop it
5455, Stack them into a single array
26289, Implement the update rule
4975, Family Size
39437, Visualization
7153, Loading Required Libraries and datasets
13690, Age
18459, Embeddings: I'll move on to loading and using the embedding tools
7652, transform skewed features
17378, We model our testing data the same way we did the training data, with the following steps
10047, Receiver Operating Characteristics
40282, Define dataloader for tpu
21425, MasVnrArea
26502, We'll get back to convolutions and pooling in more detail below
14928, Naive Bayes
11021, Completing missing values in the Embarked feature
5966, Child feature
25342, Parameter and Model Selection
11176, our column count went from 216 to the n_components value
7216, Alley Null Values
20738, GrLivArea Column
25038, Looks like customers order once every week or once a month
27585, Now that we have an idea of what new features to construct and how they might be useful, let's add the rest of them and visualize them
11780, Missing Value Treatment
40254, 1st Floor Surface Area
11153, Look for other correlations; maybe all the basement columns correlate, like BsmtFullBath and BsmtFinSF1, and Fin vs Unf have a negative correlation
8922, In our EDA section we found the relationship between Neighborhood and LotFrontage
26992, Apply lowercasing, necessary if using Paragram
17807, And let's plot Class and Age grouped by survived and not survived
37821, Remove URLs and HTML tags
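A minimal regex-based sketch of this cleaning step:

```python
import re

def remove_urls_tags(text: str) -> str:
    """Strip http(s)/www URLs and HTML-like tags from a tweet."""
    text = re.sub(r"https?://\S+|www\.\S+", "", text)
    text = re.sub(r"<.*?>", "", text)
    return text

print(remove_urls_tags("fire near me <b>now</b> https://t.co/abc"))
```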
13288, We tune the hyperparameters of the XGBClassifier model using HyperOpt and 10-fold cross-validation
21907, Examining highly correlated variables
34650, Borrowed from
25875, Word clouds of each class
37881, Evaluation
84, apply the function to each unknown cabin
30985, Distribution of Search Values
28002, Building The Model Structure
35233, Sentiment variable is the theme of our data
6106, Don't forget to save the indexes of the primary datasets
11545, The target variable is positively skewed; we can perform a log transformation to render the target distribution more Gaussian-like
40101, Make submission
36291, Additionally you can view the y intercept and coefficients
36862, Perceptron
17958, NN
22935, I create an estimated age feature
5495, K Fold Techniques
42844, Italy
8113, Splitting Training Testing Data
26581, And what a good boy it is! Loop over dog images to view some wrong predictions
9520, Logistic Regression
19897, Bottom 10 Sales by Shop
38854, Prediction
40785, NN model with L regularization. The general methodology to build a neural network is to
13443, SibSp doesn't have a big effect on the number of people who survived
27288, Plot SEIR model and predict
26299, Predicting and saving the output
3299, MiscFeature: delete. According to the file you can use None to fill in missing values. I also explored the relationship between MiscFeature and its corresponding MiscVal and GarageType, tried some filling methods, and even determined that the example in the test set (1089) should be filled with Gar2, but eventually deleting the feature works better for my model
10554, Reverse The Labels
32629, well, the naive Bayes on TF-IDF features scores much better than the logistic regression model
25796, Some managers have entries only in one of the two datasets
6982, Discrete Variables
28721, Looking at the total amount sold by the stores
11419, Use Case 11 Funnel Chart
17005, There are a lot of missing values in Cabin column
527, Passengers embarked in S had the lowest survival rate those embarked in C the highest
41114, Target Variable Distribution Join Fips by Bokeh
2937, Make the Distribution Normal
18486, get Days from Date and delete Date since we already have its Year and Month
41741, Looks like they were moved but only a tiny bit
33546, Current best value is 14793
18075, Plot the images with many spikes
20843, It's usually a good idea to back up large tables of extracted or wrangled features before you join them onto another one; that way you can go back to it easily if you need to make changes to it
1980, apply that function
42145, Encoder
16665, Creating New Features
4749, YearBuilt, YearRemodAdd and GarageYrBlt show us that as the year increases, the price also increases
28863, Pytorch Tensors
1691, Distribution plots for the list of numerical features
32710, Defining optimizer
28221, Model Tuning
14241, Fare Features
33887, installments payments loading converting to numeric dropping
14648, Brilliant, time to test our first model
22272, Cabin
19909, Year
15837, Fare
26455, RF Modelling
32941, P(Xi | y): likelihood of each word conditional on the message class
13019, Analyzing the data
42992, Modeling
32930, Predicting with images from test dataset
21920, RMSE on test set using all features 12906
42224, Data Augmentation
445, Converting some numerical variables that are really categorical type
18209, Submit to Kaggle
16654, When I googled those names I learned that they boarded the Titanic in Southampton
16015, Normal distribution outlier not detected
19651, Benchmark predict gender age group from phone brand
40625, we are ready to build and train our PyTorch models
9960, I create this new data frame where I select the columns SalePrice and MoSold
34298, It is a 222x222 feature map with 32 channels
11424, Find
22132, The hyperparameters that can be tuned in the CatBoost model are
28927, All
22625, Visualizing Model Performance
3772, Parch
34279, sidetrack Rent cost in NYC
6157, Create a meta regressor
28407, Compile It
33323, Define callbacks and learning rate
37538, ANN KERAS MODEL
23111, Findings: nearly 58% of passengers had the title Mr (male, of course), followed by almost 20% of passengers with the title Miss (unmarried women, hence usually younger than Mrs); just over 15% of passengers were married women (Mrs)
34939, Predict submit
15395, have a look at missing values across the board
41461, Feature Embarked
23438, Making our submission
32117, How to compute the softmax score
19257, Rearrange dataset so we can apply shift methods
20815, SUBMISSION
4828, LotFrontage Linear feet of street connected to property
5840, We use log transform to make them normally distributed
35426, Compiling and fitting the data in the model
30838, Let's look at the top crimes of San Francisco
42219, The next step is to feed the last output tensor into a densely connected classifier network a stack of Dense layers
29114, train with fit_one_cycle for 5 cycles to get an idea of how accurate the model is
36585, Use more training data
37001, Number and ratio of orders from the three datasets
31302, PCA decomposition
8767, Survival by number of sibling or spouse
28459, Analyzing columns of type float
7360, Leave first 40 features
39736, SibSp and Parch
15631, Visualizing age data
17794, check the correlation between family size and survival rate
38633, write a function to display outputs in defined size and layer
5002, Numerical Features
32015, There is one Ms, so we assign Ms to the Miss category and NaN to all remaining rows; in the code block below the rule is negated
19565, Preparing the data
41533, Dimensionality reduction
38069, Bigrams Analysis
7514, Optional code cells
37361, SibSp Parch vs Survived
13568, SibSp feature
10803, presume that Fare and Class are related to Port
3616, Remove Outliers
36559, Setup
22907, Encoding and Vectorizers
8748, Validation Function
17346, Extra Trees
11326, Fare Imputation
17364, preparing submissions
6028, Auto Detect Outliers
10264, Distribution of target variable
5189, In machine learning Naive Bayes classifiers are a family of simple probabilistic classifiers based on applying Bayes theorem with strong naive independence assumptions between the features Naive Bayes classifiers are highly scalable requiring a number of parameters linear in the number of variables features in a learning problem Reference Wikipedia
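In symbols, the resulting decision rule picks the class that maximizes the prior times the product of the per-feature likelihoods:

```latex
\hat{y} = \arg\max_{y} \; P(y) \prod_{i=1}^{n} P(x_i \mid y)
```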
32654, Whenever possible additional features related to key aspects of the problem under analysis may be created to reinforce the weight of such aspects in the regression
22973, Display Influential Subtext
15139, Checking Survival rate
26500, For this problem we use zero-padded convolutions so that the output is the same size as the input; the stride (step) in this case is equal to
14120, Correlation
41405, Contact information
2542, For this part I repeat the same functions that you can find previously, adding the categorical features
7941, Submission
19401, We are going to vectorize the text while increasing its readability by removing the punctuation and count words
37233, Train
21785, Inference
30583, we can handle the one numeric column
40984, Aggregate by sum and mean
24826, format
12222, The coefficients are a bit different, but we did not solve much
9410, Conclusion
7589, Boxplot SalePrice vs OverallQual
36275, Pclass and Age had the strongest relationship in the entire set, so we are going to replace missing Age values with the median age calculated per class
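A sketch of that per-class imputation, assuming a DataFrame `df` with Age and Pclass columns:

```python
# Fill missing ages with the median age of the passenger's class
df["Age"] = df.groupby("Pclass")["Age"].transform(
    lambda s: s.fillna(s.median()))
```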
37989, flowers transfer learning 3
27419, More num iterations
29157, PoolQC Fill with None
6555, Survived: people with more than two siblings or spouses likely survived
17690, INITIALS AGE
32524, Model Refining Model
10396, In this project we use a Python package called vecstack, which we imported earlier, that helps us stack our models. It's actually very easy to use; you can have a look at the documentation for more information
13621, Pclass: it may be relevant; remember in the movie how the first-class passengers were being taken to the boats first. Hell, they took the dog too
10969, Rescaling
31516, Selecting a feature set based on ExtraTreesClassifier
12809, Missing Values
11665, XGBoost
24854, We can quickly check the quality of the predictions
28944, We can check for the best model by comparision in just one line of code
17820, We predict the validation set
6518, Plot categorical features with the target variable
4697, Normalisation
18540, Some correlation coefficients changed and others did not
26862, Here I have defined two filters
7600, define data for regression models
7466, Making Predictions on Test data
9756, HasCabin
27401, Plot some sample images using the data generator
17722, The values differ from what is expected as there are people who are in Pclass 1 but paid low to no fare
35075, Complexity graph of Solution 6
7798, Create and Store Models in Variable
7246, Explaining Instance 3 2 3 3
30960, Domain
3840, fill Embarked column values
29164, Utilities Drop Feature
31584, let's check how the target classes are distributed among the IND continuous features
41789, Digits prediction
32064, Here n_components decides the number of factors in the transformed data
37113, Embedding the GloVe vectors takes 3 minutes on average
27344, item price
6622, Ridge Regression
34404, Training
11263, Visualizations
12453, This means that the null values were the ones I've just replaced with the mode
36717, Pytorch Dataset Class
38909, Adversarial Validation
2, LotArea is a continuous feature, so it is best to use pandas' qcut method to divide it into 10 parts
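For example (assuming `df` holds the data), qcut makes ten equal-frequency bins, unlike cut, which makes equal-width ones:

```python
import pandas as pd

# Each band holds roughly 10% of the rows
df["LotAreaBand"] = pd.qcut(df["LotArea"], 10, labels=False)
```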
13297, In contrast to majority (hard) voting, soft voting returns the class label as the argmax of the sum of predicted probabilities
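A minimal soft-voting sketch with two base estimators; the estimator choice here is illustrative:

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    voting="soft",  # argmax of the summed class probabilities
)
# vote.fit(X_train, y_train); vote.predict(X_valid)
```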
40932, Iterative Folds Training
21362, Training on just the target label BASELINE
2290, ROC AUC Score
31115, Feature selection by correlation
19886, Lag Features
2406, Places Where NaN Means Something
9297, Predicting Survival based on the Random Forest model
41486, Random Forest Predicting
25359, Most frequent words and bigrams
39000, Split the data into 60% as train data, 20% as dev set and the remaining 20% as test set; alternatively, sklearn.model_selection.train_test_split can be used
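Two chained train_test_split calls give the 60/20/20 split (assuming arrays X and y):

```python
from sklearn.model_selection import train_test_split

X_train, X_tmp, y_train, y_tmp = train_test_split(
    X, y, test_size=0.4, random_state=42)          # 60% train
X_dev, X_test, y_dev, y_test = train_test_split(
    X_tmp, y_tmp, test_size=0.5, random_state=42)  # 20% dev, 20% test
```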
9363, Fare feature to ordinal values based on the FareBand
2693, look at the missing values in our data
16470, We plot feature importances of 4 classifiers excluding LDA and MLP
37669, Model training
33270, T1 is the same period as the one found in overall histogram
33230, Xception Nets
40188, Metric analysis
25446, Training dataset
15071, Gender
20736, CentralAir column
21074, To handle those documents where no word is present due to an exception, I tried a trick using the word cloud from another kernel writer
35368, Initialize Dataset
26369, Fitting the network
41615, Now that we have all of our data in the right format for all three modes of analysis, let's look at the first one
655, For a final overview before the modelling stage we have another look at the correlation matrix between all old and new features
36930, Cabin
40317, WordCloud visualization
3802, Define a cross validation strategy
25381, investigate for errors
16594, Initializing the Model and Training the Model
22639, Model 4 Lasso
24771, Stacking algorithms
5347, Display number with gauge
38888, Dataset
18510, This gives us some idea of the sort of images we're dealing with
28945, Let's use CatBoost as it performed best
38469, Doing the same for macro features
13409, Classification Error
31334, I chose to fill the missing Cabin column with 0 instead of dropping it, because Cabin may be associated with passenger class. We will have a look at a correlation matrix that includes the categorical columns once we have used one-hot encoding
29776, Predict
40275, Zoning Classification vs Sale Price
15580, Evaluating classifiers
11430, Do you remember KitchenAbvGr? It is just densely concentrated at 1
20710, Third column
34681, Taking a look at what's happening inside the top categories
19362, Within bivariate analysis using scatter plots
32944, Plot for duplicates
40953, Making the New Training Testing Sets
17747, Imputing Missing Age Values
40082, Fill in missing values
34438, Target correction
15677, Predictions based on tuned model
43271, Making predictions on the validation data
11807, Handle missing values and categorical features
9259, Predicting, formatting and prepping the submission file
20094, With no parameter tuning decreasing trend and yearly peak are correctly predicted
22395, In addition to NA there are people with very small and very high ages
40936, To tune the hyperparameters for TabNet we need to write a small wrapper; the fact is that in the TabNet PyTorch (dreamquark-ai/tabnet) implementation, TabNetClassifier does not have a get_params method for hyperparameter estimation yet
14917, Convert Title column into category variables
130, we have our confusion matrix
18252, Train test split
4145, CCA on Titanic dataset
14354, Male and female distribution in Survived Passengers
6167, Name
7610, Loop over Pipelines Linear
4413, Check that missing values are no more
30828, Getting the Libraries
6073, Conclusions
43019, Modelling
42952, Parch Feature
35888, Separate submission data and reconstruct id columns
30638, Dealing with missing age
13097, Survival among various categories
19456, Second Hidden Layer
13374, Here 0 stands for not survived and 1 stands for survived
1336, we can safely drop the Name feature from training and testing datasets
22481, Cluster plot
1677, Peek Data Setting the context view
10547, XGB Regressor
23662, Rolling Average Sales vs Time per dept
12822, Let's clarify the picture with another graph
41469, Leaving Embarked as integers implies an ordering in the values which does not exist
35509, Before the feature engineering part train and test data have been merged
1815, Modeling the Data
9510, Data Analysis
33884, Bureau balance loading converting to numeric dropping
19388, Finish data preprocessing
16634, Number of values in each category
21217, we transformed the input vectors into matrices to get images like this
20945, Data visualization
13463, Exploration of Age
25162, Plotting Questions based on their frequency
13771, For men it's better not to be alone, whereas women have a higher survival probability when they have no family on the boat. Survival probability increases for men when they have a large family, but in general having too large a family (5+ members) reduces the chances of survival. For being a child or not, it's the same conclusion as for being alone: yes for men, no for women
21123, look at missing values per variable, starting from numeric features as they usually play a decisive role in modeling
39676, Check the shape of the image we chose to paint and store that shape in a variable
20940, Evaluation
14671, Random Forest
13730, Combining SibSp and Parch to Fam
11828, Log Transform on SalePrice
24593, TESTING DATA CLASSIFICATION
13526, Import libraries
21134, Indeed our guide informs us that
34415, Number of words in a tweet
11845, Porch
29548, Punctuation is almost the same
38561, Model Functions
14895, Obviously the survival rate of females is much higher than that of males
13210, finally we can do the plot without problems
1935, Heating and AC arrangements
34179, CNN
30253, Run cross validation to find the most appropriate nround value
15508, Only one missing value
25403, ACTIVATION FUNCTIONS
9600, Group by
40452, BldgType
27775, Inspect your predictions and actual values from validation data
10770, Ticket grouping
5030, Interesting
22211, Time keeper
24832, General function
31523, Attempting to get the cross validation score
13464, The age distribution for survivors and deceased is actually very similar
39919, Data cleaning
36481, We save the private df in a CSV for further analysis; it's up to you
12505, Tune eta
9116, Check to make sure there are no null values in our new feature neighborhoodTier
32931, Find the best epoch value
27631, 181108 were sold only once, making up the vast majority of the data
27487, y hat consists of class probabilities
15193, Logistic Regression, XGBoost, SVM and Others
15260, Data Visualization
6507, Check Missing Values
33863, Fitting Xgboost model
40129, Building a custom transformer to reshape the scaled images as required by the KerasClassifier
38979, training loop
29469, Univariate analysis of feature word share
930, Optimize Random Forest Regressor
19531, Creating Flat Maps
273, Library and Data
2195, Linear Regression
32697, The competition uses the following evaluation metric
41660, From that section and the visualizations in it, two things are made evident
12302, Model Prediction
23556, Lets have a look at the year difference between the year of transaction and the year built
43284, Fits using all the data, not just the data that had been selected as the training set
5475, We now have a table of contributions: each row is a sample, and every column is a field and its contribution to the predicted sale price
14073, Name Title
2221, Features from Outside
716, Again a slight increase
31272, Rolling Average Price vs Time CA
31928, We'll gain insight from the model evaluation on the test dataset
27984, Create a sample model to calculate which features are more important
7274, Outliers
13163, CV Score
10590, XGBoost with parameters
12228, The model
16938, With this simple tree we have a way better model than the rich woman model
41060, Extract Features From Model
9018, unfortunately we are not done with the null features in the Garage Columns
1287, Dropping Unwanted Features 4 4
15871, we have all of the training data again
6849, We can replace many titles with a more common name or classify them as Rare
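A sketch of that mapping, assuming a Title column has already been extracted from Name; the exact groupings are an assumption:

```python
# Map less common variants onto common titles,
# then bucket everything else as "Rare".
df["Title"] = df["Title"].replace({"Mlle": "Miss", "Ms": "Miss", "Mme": "Mrs"})
common = ["Mr", "Mrs", "Miss", "Master"]
df["Title"] = df["Title"].where(df["Title"].isin(common), "Rare")
```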
29403, CHECK MATERIAL