14957, Use median of title group
28063, Many Cabin values are missing. Cabin is related to Pclass, so we drop this feature; no problem so far. There are 2 missing entries of Embarked; we fill them with the most repeated value, S. The age of many people is missing; the simplest way to impute age would be to fill in the average. We choose the median for Fare imputation, and we use Spark's fillna method to do that. For Age we use a more complex imputation method, discussed below. For now I am just focusing on the train data; different features can be missing in the test data. Actually, there is a missing Fare in the test data, so we calculate the median Fare as well. We come to the test data at the end of this notebook
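A minimal PySpark sketch of the simple imputations described above (the DataFrame and file names follow the usual Titanic setup and are assumptions):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
train_df = spark.read.csv("train.csv", header=True, inferSchema=True)

train_df = train_df.drop("Cabin")              # too many missing values
train_df = train_df.fillna({"Embarked": "S"})  # most frequent port

# Median Fare via approxQuantile (an exact median is expensive on large data)
median_fare = train_df.approxQuantile("Fare", [0.5], 0.01)[0]
train_df = train_df.fillna({"Fare": median_fare})
```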
20735, HeatingQC column
2826, Xgboost
27896, Compiling the Keras Model
21798, Naive Bayes Gaussian
34457, TX 2
23445, First we have to separate the individual date and time for each data point into hour, day, month, and year
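A short pandas sketch of that split, assuming a raw timestamp column named datetime:

```python
import pandas as pd

df = pd.DataFrame({"datetime": ["2011-01-01 05:00:00", "2012-06-15 18:30:00"]})
df["datetime"] = pd.to_datetime(df["datetime"])

# The .dt accessor exposes each component of the timestamp
df["hour"] = df["datetime"].dt.hour
df["day"] = df["datetime"].dt.day
df["month"] = df["datetime"].dt.month
df["year"] = df["datetime"].dt.year
```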
43371, Splitting Data for training and Validation
25427, Merge datasets metadata
23543, Data Visualization
29706, Show a sample
4924, Define the Models and Do a Scoring
31245, Prediction
36432, Scoring
33799, Exterior Sources
271, Library and Data
13257, sibsp and parch
39438, Model Prediction
1148, we need to handle missing data
11997, On applying GridSearchCV, we get the best value of alpha = 1
41672, Let's take a look at some malignant tumours from the train set
15237, Handling missing values outliers mistakes
2078, You can check the public LB score of this model yourself by submitting the file submission2
3498, Training and estimated test accuracy
16602, Numerical Variables
961, calculate the mean of all the feature importances and store it as a new column in the feature importance dataframe
42945, Evaluating the cross validation AUC score
18925, Library
34851, Outliers Detection for Top Important Variables
25996, Querying the Data
11447, Encode categorical feature columns
39821, There are two callback techniques that I tried
7662, correlation train train
38418, CNN Keras
3519, Heatmap
41020, This is the total number of passengers in the test set by WCSurvived value. We are finally ready to make predictions
28925, We use the cardinality of each variable to decide how large to make its embeddings
24369, Living area in square meters
34178, Saving the model as an HDF5 file so that we don't have to re-train the same model
15157, Separating the train and test data from the concatenated dataframe
7089, We haven't generated any features from Parch, Pclass, SibSp, or Title yet, so let's do this by using a pivot table
29853, we specify n estimators 100
28415, Visualization of a single decision tree
9077, It looks like most rows have values for both Exterior1st and Exterior2nd (only 1 null). In addition, it looks like most houses are made of the VinylSd material. Since we are feature engineering these 2 columns into multiple True/False columns indicating whether the house is made of each material, we don't need to fill the null rows, since they will be False for everything. Let's do it
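A rough sketch of that engineering step, assuming a pandas DataFrame df with the two exterior columns (toy values shown):

```python
import pandas as pd

df = pd.DataFrame({"Exterior1st": ["VinylSd", "MetalSd", None],
                   "Exterior2nd": ["VinylSd", "Wd Sdng", "MetalSd"]})

# One boolean column per material seen in either exterior column;
# nulls naturally come out False, so no imputation is needed.
materials = pd.unique(df[["Exterior1st", "Exterior2nd"]].values.ravel())
for mat in materials:
    if pd.isna(mat):
        continue
    df["Ext_" + mat] = (df["Exterior1st"] == mat) | (df["Exterior2nd"] == mat)
```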
37651, Create Model
12367, Electrical
38732, Since all of the passengers have the same ticket number we can conclude that the fare was calculated for the entire group and not each individual
11236, let s first use RandomForest to train and predict
11311, Null Value Calculation
13162, Model Selection SVC
14012, Analyze by describing data
10818, Start by rounding Age
9161, OverallQual
1878, Gender
19141, Model 2 with GradientDescentOptimmizer
34750, Building model without pretrained embeddings
20437, Go to top
14535, Logistic Regression
28450, CHECKING DATATYPE OF EACH COLUMN
5243, Transforming The Skewed Features
9203, Family Member Count
15633, Survival by Deck and Gender
19360, Distribution of numerical attributes
19627, Rather than fitting the dataset to every ML algorithm individually, PyCaret provides a feature to compare them directly
28712, Importing the Datasets
22633, Prepare final dataset for Modeling
34152, Data wrangling
9240, Whisker plot overallqual saleprice
20469, Days from birth distribution
14919, SibSp Parch
763, Train the classifier
12628, Cross Validation
10994, Import data and explore it
33804, There are 35 features with individual features raised to powers up to degree 3 and interaction terms
23241, Mean Absolute Error achieved is 1278
22696, Let's Try to Predict Other
28403, One Hot Encoder
2504, Age band
13884, Passengers Class
21023, Replacing words in a text document
11100, Percentage of null valued features in Train data
23691, Load data
6874, Combining Data Sets
12704, I stick with these bins for now
823, Conclusion from EDA on categorical columns
19166, SVD on tf idf of unigram of product name features
7689, Preprocessing
27359, I remove the negative values as they cause noise in the data
4997, Not much action in the 80s apparently
14243, Exploratory Data Analysis
32617, Exploring the location column
23794, When I started my journey of becoming a data scientist and honing my skills on Kaggle, as embarrassing as it was, I could not find where the data is
37112, Code in python
23529, Training and Evaluating
18910, Embarked feature
22918, Among the top 50 mispredicted tweets, only 4 are false positives
42404, Preparing the submission file
18011, Training set
18645, RENTA
5445, Split into X and y variables
10355, Hyper parameter Tuning
24123, Gerando Sa da
25726, make a directory for our datasets after unzipping
18000, The correlation matrix measures the linear dependence of the features, and it is desirable to have features that have little or no dependence on each other
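A quick way to inspect this, assuming a pandas DataFrame df of features (seaborn is used only for display):

```python
import seaborn as sns
import matplotlib.pyplot as plt

corr = df.corr(numeric_only=True)  # pairwise linear correlations
sns.heatmap(corr, cmap="coolwarm", center=0)
plt.show()
```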
20248, It's time to use One-Hot Encoding
24164, Top Mask Position
36482, Since we know ahead of time which files we will be loading, we will make a list of some basic information about them, then iterate through that list
39247, Format dataframes
31071, WARNING: the implementation of this cell is resource-intensive
16781, Support Vector Classifier
14406, Map values in test set also
20615, Import libraries and functions for cross validation and metrics for accuracy
10986, The Last Step is to save the file
26691, BIRTH EMPLOYED INTERVAL: the days between birth and employment
42027, Splitting the strings per alphabet
30823, Full overlapping
32096, How to stack two arrays horizontally
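Two equivalent numpy idioms, as a minimal example:

```python
import numpy as np

a = np.arange(10).reshape(2, 5)
b = np.ones((2, 5), dtype=int)

np.hstack([a, b])               # shape (2, 10)
np.concatenate([a, b], axis=1)  # same result
```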
13106, Logistic Regression
42291, Submit To Kaggle
2526, Bagged DecisionTree
42771, One Hot
5908, ANN
35850, Let's have a look at our data
17276, Decision Tree
7865, Supposing that categorical values such as the Name, the Cabin, the Ticket code, and the ID don't have any relationship to whether the passenger died or survived
9257, Random Forest Regression
10419, SHAP Values for one house
15868, Train model
5749, The algorithm decided that the attributes sex age and fare are the most important and were decisive
25268, Image Data Generator
8350, As noticed already before, the class 1 passengers had a higher survival rate
29880, Without KFold
18322, Train and Validate Different Models
41810, Visualize model performance
20478, Credit status
35085, GridSearch Cross Validation
23481, Stopword
15396, There are 2 missing values for the port that the passenger embarked on in training data
12842, Decision tree
42101, Plotting an example of Image data from the training dataset
4051, Embarked Analysis
40650, XGB with hyperparameter tuning
24830, PCA
35222, Since we have 1 NULL row we remove it from train data
20724, HouseStyle column
31111, there is no obvious increasing or decreasing trend
6614, Split the Train and Test set from df final
5581, Data Cleaning
10961, MSZoning Utilities Exteriors Electrical Functional Utilities and SaleType
23555, It looks like people start preferring larger houses
5351, Display relationship between 3 variables
39130, Labels Hot Encoding
2163, At this point our model
7252, Pre process data
18239, Backpropagation Chain rule
982, Interpretation
30875, Plotting
2018, For our models we are going to use lasso elastic net kernel ridge gradient boosting XGBoost and LightGBM regression
9056, FullBath
36206, Using the facets layer to compare all the curves
23429, we do a bigram analysis over the tweets
3243, Datatypes and their distribution
4060, Which variables are most correlated with age? We can check the correlation table to get the answer
1814, Fitting model simple approach
32424, How the schema of this example looks
19344, Using a CNN
42520, Linear Model
16647, Importing Libraries
21417, NaN imputation will be skipped in this tutorial
39098, The Coleman Liau Index
2480, Using another K
28611, We have 3 classes with high frequency; however, we have 3 with low frequency
13095, Frequency distribution Continuous Variables
25442, Confusion Matrix
4905, Looks like our Data is Skewed Towards Right
16720, sex
12199, Short note: the custom dummifier
39921, Before cleaning the data, we zoom in on the features with missing values; those missing values won't be treated equally
14517, Sex
12406, LotArea
12554, Model Evaluation
9337, But we can decide, for example, to halve the number of bins we map our feature to, obtaining
26540, Submission uncomment the lines below to submit
1651, A clean categorical feature here with 3 categories
35249, How can we utilize the Jaccard metric here?
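For reference, a word-level Jaccard similarity can be sketched like this (the 0.5 convention for two empty strings is an assumption, not from the source):

```python
def jaccard(str1: str, str2: str) -> float:
    # |A intersect B| / |A union B| over lowercased word sets
    a, b = set(str1.lower().split()), set(str2.lower().split())
    if not a and not b:
        return 0.5  # convention when both strings are empty (assumption)
    return len(a & b) / len(a | b)

print(jaccard("great day", "what a great day"))  # 0.5
```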
11495, Since the Utilities feature has almost the same value everywhere, we had better remove it
33735, There are two common situations where one might want to modify one of the available models in torchvision modelzoo
16674, Making A Submission
13658, Embarked
2034, Age
42866, Prepare a KerasTuner search space
23185, Findings
19263, MLP for Time Series Forecasting
6246, we have missing values for Age Cabin Embarked and Fare
49, Correlation of features with target
16626, Feature Selection
9328, While if we correct for the mentioned sparsity we get
34096, Symptoms
7527, Plots
10202, Let's fill the null values with the mode of the respective columns
41739, Now that the embeddings have been trained, let's check how they moved and whether the different POS versions were disambiguated
6759, Checking Skewness for feature LowQualFinSF
23664, modeling
11563, Let's check whether the residuals are normally distributed around 0
19283, Make predictions
5128, Feature Importance
40300, Number of characters distribution as well is right skewed
39864, GrLivArea GarageArea TotalBsmtSF 1stFlrSF YearBuilt YearRemodAdd Numerical columns
19770, Train test partition
4199, For some variables it's difficult to know directly whether they are ordinal or non-ordinal
11909, Let's create a new feature column by combining the sibling/spouse and parent/children columns
27843, Plot loss and accuracy
9003, Continuous Variables Distribution
22553, 0s
43028, Not that many mistakes
36342, Implement Forward Propagation
12318, Finding Missing values
6668, Feature Selection using RFE Recursive Feature Elimination
20913, Testing model
6196, Support Vector Machine using Polynomial kernel
28472, create a feature called season to know in which season transactions are high
32659, Once all numerical features have been preprocessed, it is important to verify the correlation between each numerical feature and the dependent variable, as well as the correlation among numerical features, which can lead to undesired collinearity
9170, Much better. We leave this variable transformed
82, We can use the average of the Fare column. We can use Python's groupby function to get the mean fare for each cabin letter
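A small pandas sketch of that groupby (toy values, not the real data):

```python
import pandas as pd

df = pd.DataFrame({"CabinLetter": ["C", "C", "B", "E"],
                   "Fare": [71.3, 30.0, 120.0, 46.9]})

mean_fare = df.groupby("CabinLetter")["Fare"].mean()
print(mean_fare)  # one mean fare per cabin letter
```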
5491, We could use the Imputer library to take care of missing values, but in this scenario only one value is missing in each of these columns, so we update them with the most frequent value and the mean value in GarageCars and TotalBsmtSF respectively
38574, Model Time
898, for DecisionTreeClassifier
22945, We have an easier feature to handle
35171, Experiment Replacement of large kernel layers by two smaller ones
10104, We have to get the test data into a DataFrame
7421, Remember that we had 76 variables for training and 75 for test before
11849, Modelling
30585, Putting the Functions Together
36139, One way is of course to tune the hyperparameters by hand
1208, Ensemble 1 Stacking Generalization
2881, Reading in the Test data
15564, We set them all on starboard, as they probably gathered there. They traveled with ticket PC 17608
4412, Replacing missing values
21947, ML part Random forest
11975, Fill GarageYrBlt and LotFrontage
12072, Feature Engineering
2982, Correlation values
40862, One Hot Encoding
4879, Examine Dataset
25959, Top Selling Products
41161, FEATURE 7
2196, RandomForestRegression
31411, Updated Using resnet gives a boost of performance to LB score
3409, Create new variable Deck
21896, Utility Function
6426, From the scree plot we can conclude that 60 PCs can explain around 90% of the variation in the dataset
3142, Few improvements using scalers and feature generators
28524, Fireplaces
39319, ASSEMBLE PREDICTION
11077, Feature Importances
8268, Create TotalSF Feature
38694, After Encoding
7211, Let's plot a distribution plot to get a better idea of how our SalePrices are distributed
36889, reshape for CNN
18766, Due to computational limitations, the size of each cannot be very large either
26952, Submission Dataset
32553, Union Clusters
1124, Now apply the same changes to the test data
23710, Use the next code cell to one-hot encode the data in X_train and X_valid
11653, XGBoost
19361, Find Outliers
38312, Random Forest
4217, Data Visualization
20107, Item count mean by month main item category shop for 1 lag
40727, Confusion Matrix
7391, The 5 unmatched passengers from kagg rest6 are added to the rest of the matched passengers in merg all2
20082, Top Sales Shop
15287, Creating categories based on Embarkment location
12898, Here we get a view of the dataset that we use in our predictions
12100, Dealing with missing values left
30353, Let's display predictions for future weeks
39248, Export data
10974, Linear regression L2 regularisation
35865, Normalisation
5911, XgBoost
25408, METRICS
43381, Attack methods
14247, Age Continues Fetures
19130, Rounding data
847, RandomForestRegressor
27256, Extract feature importances for our Second Level
21242, The Discriminator model
29141, Binary features inspection
37704, Let's look at our neurons
14935, Prediction on test dataset
1130, Exploration of Embarked Port
19309, Evaluation prediction and analysis
20179, Fitting Training Data
15330, Do the same thing that we did with Cabin, so that we are left with the initials and can assign them numeric values accordingly
38454, But we can't use ent type directly
34915, Count words by whitespaces
28149, we use the networkx library to create a network from this dataframe
37508, That gave a score of 18667
12924, Sex
33770, Pixel values are often stored as integers in the range 0 to 255, the range that a single 8-bit byte can offer
40278, Setting up the environment
3399, More Feature Engineering
25943, CatBoost
2658, Train a machine learning model
16749, PClass
30003, Submission File Preparation
31633, Lesson Learned
31804, Training the model on GPU
39036, But how many of them are there?
6934, Categorical features
31745, ColorJitter
12836, Predictive Modeling
34371, Mean Absolute Error 24
29526, XGBoost
32308, As discussed before, I have now categorized the Age feature into the following two categories
17274, Linear SVC
25214, 99% of the pool quality data is missing. The PoolQC column refers to pool quality; PoolQC is NaN when PoolArea is 0, i.e., there is no pool
41258, Fit The Model
7411, HouseStyle: locations matter a lot when considering house prices, but what about the characteristics of the house itself? Popular house styles are 1Story and 2Story; 1.5-story houses with a finished 2nd level can be sold at relatively higher prices, while 1.5-story houses with an unfinished 2nd level mostly sell for less. Notably, for multiple-story houses, whether the levels are finished or unfinished has an obvious relationship with house prices
10794, double check it
19262, Train validation split
17825, Model with Sex Age Pclass Fare Parch SibSp features
27046, Patient Overlap
10445, Since GrLivArea is now normally distributed we shall look into TotalBsmtSF
32636, BONUS stacking
38096, Handling Missing Values
16397, Checking the correlation between attributes and Survived
31409, Start training using standard fastai
32118, How to find the percentile scores of a numpy array
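A one-liner with np.percentile, shown on random data:

```python
import numpy as np

a = np.random.default_rng(0).normal(size=1000)
np.percentile(a, [5, 50, 95])  # 5th, 50th, and 95th percentile scores
```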
15409, The median values are close to the means
9519, Define Training and Testing datasets
14622, Station 5 Categorical features
6904, Another way to present it
36065, Predict All Months
15322, We be searching for the initials of the cabin numbers like A B C etc
23730, The most passengers boarded at Port S, while the fewest boarded at Port Q
7275, Sex Feature
32747, Comparison of 9 models including 8 new models
11858, Cross Validation Scores
9640, Most Correlated features
3413, It looks like traveling with 1 3 family members could have positively affected the probability of survival
7914, Check the missing values
33086, Dummy transformation
14967, Clearly the survival chance of males is very low compared to females
32536, Processing the Predictions
3414, Family sizes of 2 to 4 are associated with a greater than 50% chance of survival in the sample
32582, The Trials object holds everything returned from the objective function
32880, Random forest
11056, Sex versus survival
17379, Fill Missing values in testing data
42861, Model evaluation
28206, Examine the class label imbalance
24870, Now that we have finished preparing the data, we are ready to split it into train and validation sets using train_test_split from sklearn
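A minimal sketch (the toy X and y here stand in for whatever features and labels were prepared above):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=200, random_state=0)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)
```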
21662, Filter a df by multiple conditions with isin, and the inverse using ~
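For example:

```python
import pandas as pd

df = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"], "Fare": [7, 71, 8, 13]})

mask = df["Embarked"].isin(["S", "C"])
subset = df[mask]    # rows matching any of the listed values
inverse = df[~mask]  # everything else, via the ~ operator
```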
16940, Random forest
34721, Training the LGBM models on 5 separate folds and using their average prediction for the final submission
1913, This is not great; we will try some polynomial expressions, like the square root
32337, Tax amount
22095, Train our Model with simultaneous Validation
40185, Using my notebook
7552, Extra Tree
10576, After handling missing values we do some simple feature engineering
1595, Fare
32852, To mimic the real behavior of the data, we have to create the missing records from the loaded dataset. So for each month we need to create the missing records for each shop and item; since we don't have data for them, I'll replace them with
13982, Correlation between columns
31405, from df pass path filename to get image file we need to properly open the img when fn is passed
36608, How did our model do
41008, We now assign the label noGroup to every man and count the frequency for each Group id element of the dataframe in the new WC count column
16862, this is a classification problem
33731, Splitting the box dimensions into the format xmin, ymin, w, h
12134, Training the model 2
1859, Linear
23557, It seems there are some houses that are too small
7007, Porch areas in square feet
14357, Distribution of Classes of Non Survived passengers
9092, I am also interested in comparing PUD homes versus not
14174, Before deleting the columns with prefix Deck and the AgeGroup column I check the accuracy score with and without these columns in 3 classifiers as a test to make sure removing them is beneficial
487, Correlation in Data
30676, I have preprocessed the text and just load it, to speed up re-iteration
1954, Creating Dummy Variables
21141, All the data analysis will be done using only the train data
21510, Try adding some gaussian noise
688, Scaling the numerical features below is important for convergence in some machine learning algorithms
7916, Check remaining missing values
6384, are used to visualize the main statistical features of the data: mean value, mode, and
36260, Looking at some statistical data
8698, In this section of the notebook I have handled the missing values in the columns
25037, It seems Saturday evenings and Sunday mornings are the prime time for orders
32264, Relationship between variables with respective to time
3694, Here we first use the numpy module namely the mean function
5606, Find
13175, First let s start visualizing missing values percentage proportion in each variable
6583, Fare per Person
40918, Functions to deal with Almon Lags
32996, Compare Ordinal and PCA
39243, Analysis of item categories
16752, Embarked
16392, Creating Submission File
33856, Distribution of the token sort ratio
2946, Drop features with a correlation value of more than
38900, Imputing Missing Values
40804, Data Analyze by pivoting features
34759, Example
25859, Baseline Model Naive Bayes
12430, Without regex and using apply
12116, Predictions for submission
7725, Imputing LotFrontage with median values
15544, Time to train the test data
17919, Exploring the data further
8152, Lasso Model
38647, Age
20756, MiscVal column
23431, Removing urls
19146, Handling null values
10741, Use heatmap to check the correlation between all numeric variables
8959, Training model on training set
27432, Dropping first e
24233, Saving a model in Keras is as simple as this
5462, Define our Metric
18098, After preprocessing we have managed to enhance the distinctive features in the images
35402, Training dataset
18949, Relationship between variables with respective to time with custom date range
25730, use torchvision datasets
39675, Define the optimizer to use giving it a learning rate and specifying what loss function it should minimize
19006, Tune the weights of unbalanced classes
19148, Modeling
20476, Go to top
9802, Stack Model 2 After Manual Multicollinearity Check
32153, How to convert numpy's datetime64 object to datetime's datetime object
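One way to do it, as a tiny example:

```python
import numpy as np
from datetime import datetime

dt64 = np.datetime64("2018-02-25 22:10:10")
dt = dt64.astype(datetime)  # a plain datetime.datetime object
print(type(dt), dt)
```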
13408, Classification Accuracy
21401, Number of Kfolds
15438, And finally the Gradient Boosting Classifier
14387, Create a combined group of both datasets
22234, Prediction verification with a confusion matrix
26672, House Type
219, Library and Data
38545, dot -Tpng tree.dot -o tree.png
33235, Feature Extraction using pre trained models resnet50
9414, Feature eng Bins font
16375, Exploring Embarked vs Survival
1347, We can also create an artificial feature combining Pclass and Age
25910, Model Evaluation StratifiedKFold
39009, Now write out the submission CSV file
571, First Voting
71, Datasets in the real world are often messy. However, this dataset is almost clean
15162, Data Formatting, i.e. Discretization and Datatype Conversion
28318, identifying the missing value in bureau balance
43119, make predictions
40411, Latitude Longitude
11768, Exploratory Data Analysis
18527, Training and validation curves
5682, Check if there are any unexpected values
3546, What if we use this function to visualize more precisely what's happening?
24942, Q Q plot after MinMaxScaler
27240, Confirmed Cases
29735, The validation AUC for parameters is 0
36253, Extracting Features
9365, Predict Survived with Kears based on wrangled input data
18403, Trigrams
39747, Logistic Regression
33075, The two dots on the right of the graphs might be some outliers
4575, Work out the numerical features
12645, k Nearest Neighbours
14383, Feature Age
1886, This IsAlone feature may also work well with the data we're dealing with, telling us whether or not the passenger was alone on the ship
15588, Well
40858, Since these variables are highly associated with SalePrice, the mean SalePrice should differ across the classes of these categorical variables
17572, K Nearest Neighbours
37620, Numerical Variables
37347, Compile Your Model Fit The Model
27571, Well
23574, Optimizing Neural networks through KerasTuner
27213, I'll do some very basic preprocessing, like
32109, How to pretty-print a numpy array by suppressing scientific notation like 1e10
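For instance:

```python
import numpy as np

np.set_printoptions(suppress=True, precision=6)
print(np.array([5.4e-8, 3.0, 1500.25]))  # printed without scientific notation
```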
29367, KERNEL SVM
3585, Changing OverallCond and MSSubClass into categorical variables
21408, Show AUC performace of best pairs of features
20394, Bernoulli Naive Bayes Model
19322, Output Visualizations
9482, Decision Boundary
37404, Read in Full Dataset
18136, If you can afford it, 10 folds and 5 runs per fold would be my recommendation. Be warned that it may take a day or two, even if you have a GPU
17571, Random Tree Classifier
39313, Import and process training set
36427, Dummy Encoding
12932, Correlation Heatmap
5144, Setting Seed
21387, Model evaluation
8338, See how the important features are related to our target, SalePrice
4234, let s try the same but using data with PCA applied
4464, Name Title mapping
6879, Reading in data from the competition page
29724, Submission
34527, Seed Features
7266, I wondered if the free (zero-fare) passengers could have been ship personnel
14183, Cleaning the dataset
9371, BsmtQual Evaluates the height of the basement
7772, Voting
38631, That s it make the submission
2440, Trying out keras
32309, Find relation among different features and survival
33816, Model Interpretation Feature Importances
1051, The last thing to do before machine learning is to log-transform the target as well, as we did with the skewed features
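A sketch of that transform and its inverse at prediction time (y, model, and X_test are hypothetical placeholders):

```python
import numpy as np

y_log = np.log1p(y)  # train the model against log(1 + SalePrice)

# ...fit model on (X, y_log)...

preds = np.expm1(model.predict(X_test))  # back-transform to prices
```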
27162, SaleType Type of sale
25279, We have to try and get the dataset into a folder format from the existing format, which makes it easier to use fastai's functions
35837, First create training data
42843, Germany
27754, Removing URLs
33666, Not sure about timezones
3392, Simple Tutorial for Seaborn
40313, Data Profiling
10382, the test set also needs attention
14412, I'll use K-fold cross-validation to evaluate the model; the scoring method will be accuracy, so let's initialize the K-folds first
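A minimal scikit-learn sketch (the estimator and toy data are stand-ins, not the notebook's own):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = make_classification(n_samples=200, random_state=0)
kf = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=kf, scoring="accuracy")
print(scores.mean())
```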
6222, For numerical columns, fill NaN with the median value
318, Prepare for model
12927, Embarked
36836, Train the Neural Network
28996, Features with multicollinearity
6077, Fare
7350, We need to adapt the test dataset so that it can be used by our model
16680, There are null values in the Age, Cabin, and Embarked columns in the training data, and in Fare as well in the testing data
40949, We now feed the training and test data into our 4 base regressors and use the Out of Fold prediction function we defined earlier to generate our first level predictions
6130, Missing sale type
4987, Fare analysis
13596, Final Predictions
32010, So in the Ticket column there are 681 different entries in 891 rows. This situation is called high cardinality. We can drop the Ticket column from both train X and test
18087, The most green images
1527, Pclass Feature
39116, RandomForestClassifier
12529, Encoding Our Data
5137, Continous Variables
31599, EDA ON THE CLASS DISTRIBUTION OF THE TARGET VARIABLE
42618, Training the network
15471, Correlation of Categorical Features with Survived
3876, This is unexpected: why does building a garage after the house is built reduce its price? I know that this assumption is a stretch, but let me know if there is a reason behind it
8292, Fit model
43116, build model on oh train oh valid
9075, There are a lot. I next wondered if the order of the covering types mattered
38722, so now we are going to create the generator for the DCGAN
38569, Variance Threshold
4478, We first sort out our training and test set
23003, Compared to other states, TX stores show a similar tendency regarding registered entries
32518, Compiling the Model
12695, Well, that's far from an ideal view. There is quite a severe left-side skew, which probably won't pair up all that well with machine learning algorithms
19393, Train Word2Vec model
37143, Inference Code
23892, Bedroom count
5685, Ticket
509, Decision Tree
40842, Importing Packages and Collecting Data
7800, Stack Models
4098, Interpolation for Age
18830, add the previous averaged models here
2050, KNeighbors Model
9310, Ordinal variables and measurement errors
26459, GB Predict training data for further evaluation
11993, Let's check the R-squared value, which measures the percentage of variance explained by the model
20853, We're ready to put together our models
42010, Sorting with conditions and getting the percentage
31085, WORKING WITH TEST DATA
14815, Pclass Survived Age
20085, Monthly Aggregation
30873, The confusion matrix can be very helpful to evaluate the model
5854, Following the direction of the author, we look at the plot of SalePrice vs GrLivArea, identify the outliers with high leverage, and remove the ones with over 4,000 square feet of GrLivArea from the training set
27234, Create pipeline
24841, Simple CNN
7538, Let's choose our feature attributes. Name is not giving us any proper info, so let's drop it. The Cabin column has many missing values and filling them may affect our prediction, so drop it too. Ticket is also not needed, so drop it
16899, Evidence proves that the title Master indicates a boy
16540, The Name feature is not really useful, but we can use the title of a person as a feature, so let's do it
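The usual extraction trick, as a sketch (the regex assumes titles end with a period, as in the Titanic Name column):

```python
import pandas as pd

df = pd.DataFrame({"Name": ["Braund, Mr. Owen Harris",
                            "Heikkinen, Miss. Laina"]})
df["Title"] = df["Name"].str.extract(r" ([A-Za-z]+)\.", expand=False)
print(df["Title"].tolist())  # ['Mr', 'Miss']
```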
21601, Fixing SettingWithCopyWarning when creating a new column
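The typical fix is to take an explicit copy before assigning, for example:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})

sub = df[df["a"] > 1].copy()           # explicit copy, not a view
sub.loc[:, "c"] = sub["a"] + sub["b"]  # assignment no longer warns
```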
18549, AGE
20611, Fill the missing value by grouping by Pclass since cabins are related to class of booking
16346, Check correlation with Survived
8095, Sex vs Survival
15920, Data Transformation
15317, Checking out Embarked Attribute
5689, Divide the Train and Test data
29519, Let's have a look at the important data from the DataFrame
21439, Bathroom Count
40021, Perhaps we can do both: exploring the images and building up datasets and dataloaders for modelling
4785, Pair the 5 most important variables according to our matrix with SalePrice
32425, Training on all Images
22602, Special features
11144, use the value as the improvement below this value is minimal
20235, I extracted only first letters of the tickets because I thought that they would indicate the ticket type
30647, Grid Search for random forest
18017, Family size new feature
29862, data element
10368, Numerical Values
38489, Predicted images font
13333, Let's split the Name feature in order to extract the titles, and create a new column Title filled with them
7279, Family Size
11546, Visualizing Categorical Variables
5161, Decision Tree Regressor
29538, We're going to reshape our image to let the model know that we're dealing with a greyscale image, hence 1 color channel
10616, Fare is strongly correlated with Pclass
4313, Passenger Class PClass
25445, Testing Dataset
1057, Lasso regression
24807, Outliers
11282, In an earlier step we manually used the logit coefficients to select the most relevant features. An alternative method is to use one of scikit-learn's built-in feature selection classes. We'll be using the feature selection RFECV class, which performs recursive feature elimination with cross-validation
37633, Instead of writing all of our code for training and validation in one cell it can be helpful to break the different parts into functions
33283, Family Assembler
13340, We are using the get_dummies function to convert the Ticket Letter column into dummy columns that can be used by our future models
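For example (the exact column name and values are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"Ticket_Letter": ["A", "P", "S", "A"]})
dummies = pd.get_dummies(df["Ticket_Letter"], prefix="Ticket")
df = pd.concat([df.drop(columns="Ticket_Letter"), dummies], axis=1)
```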
12507, Finished Submit Your Tuned Model
30870, Compiling the model
41931, We ll also create a device which can be used to move the data and models to a GPU if one is available
10684, Train test split
31607, Feature X contains 784 pixels (28 x 28)
15596, Statistical Overview of the data
27076, Part of Speech Tagging for questions Corpus
9514, Fare
32373, Getting Image Attributes
12561, Let's ask some questions and get to know basic trends from the data
16357, Choose and submit test predictions
14775, Survival by Pclass Socio economic status
13465, Considering the survival rate of passengers under 16, I'll also include another categorical variable in my dataset: Minor
6396, Sales Price Analysis
35512, In this part I would like to create new features
11123, FEATURE ENGINEERING IN COMBINED TRAIN AND TEST DATA
6024, Understand the Target SalePrice distribution
6844, identifying the missing values
5513, Logistic regression
40264, Total Rooms Above Ground
43275, Evaluating our model's performance on the training data
39091, Question length
10946, Again check the size of the data sets
40934, Tabular Modeling
2292, Making a pandas dataframe from either a list or a dictionary
29954, Price outliers are generated by some specific brands
6811, Neural networks are more complex and more powerful algorithms than standard machine learning; they belong to deep learning models. To build a neural network we are going to use Keras. Keras is a high-level API for TensorFlow, which is a tensor manipulation framework made by Google. Keras allows you to build neural networks by assembling blocks, which are the layers of our neural network. For more details, the tutorial Deep Learning in Python is a great Keras tutorial
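A minimal sketch of assembling such blocks (layer sizes and the input width n_features are illustrative assumptions):

```python
from tensorflow import keras
from tensorflow.keras import layers

n_features = 10  # hypothetical number of input columns

model = keras.Sequential([
    layers.Dense(64, activation="relu", input_shape=(n_features,)),
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```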
34352, Besides the embedding 3 fully connected layers
32152, How to find the index of n th repetition of an item in an array
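One common numpy solution, as a small example:

```python
import numpy as np

x = np.array([1, 2, 1, 1, 3, 4, 3, 1, 1, 2, 1])
n = 5
idx = np.where(x == 1)[0][n - 1]  # index of the 5th occurrence of 1
print(idx)  # 8
```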
8150, Decision Tree Regressor Model
28928, Test
14445, Update Fare with ordinal values from the FareBand table
10858, Creating new features as per our intuition and dropping the other columns
5618, Code Output
2216, Blend
13255, Age
4847, Lasso
37005, Let's combine them in a single dataframe
4058, It's necessary to normalize the data
37092, Statistical Functions
9368, Build model and predit
36571, Fine tune
8362, Pclass: because there is only one missing value in Fare, we fill it with the median of the corresponding Pclass
14623, If we predicted one survival probability for a male and another for a female, we would expect gradient contributions that point in opposite directions. But actually we obtain
7424, To address the multicollinearity problem, I apply PCA to decrease the number of variables
18410, Metric
26562, take another look at the scatter plot
27297, Check the total US trend
32203, Create a test set for month 34
8672, Torch model
28399, PairPlot
15457, Title
30394, Oversampling
32684, Cross Validation
30459, Split datas in train and test set
17570, Decision Tree Classifier
39109, Fare
6992, Original construction date
3844, Option 1: replace all missing age values with the mean
30370, Test PyVips
32383, Loading Modelling Tools
26879, Score for A1 17688
9935, Therefore the missing age values are handled
18184, Blending
28420, Item id
13461, I ll also create categorical variables for Passenger Class Gender and Port Embarked
212, Library and Data
41582, When using transfer learning with a ConvNet, we basically have 3 main approaches
13151, Apply the title-wise age filling transformation to the test data too
6727, YearBuilt Vs SalePrice
29581, As standard, we'll load the dataset with our transforms
25022, Anyway, when you want to use this mask, remember to first apply a dilation morphological operation on it (e.g., with a circular kernel). This expands the mask in all directions. The air structures in the lung alone will not contain all nodules; in particular, it will miss those that are stuck to the side of the lung, where they often appear, so expand the mask a little
1566, Fare
21513, Let's check out the message length depending on whether the SMS is spam or not
42046, factorize
1111, Evaluation
23062, we need to create few additional parameters for our model
7987, Linearing And Removing Outliers
37066, Impute Age
8122, Perceptron
10830, It is time to update the ticket number with the new category
32511, Pre processing the predictions
3786, Correlation Matrix of SalePrice
3536, PointPlot
26259, Training the Neural Network
26262, Reading test file
3161, The next step is to transform numeric variables to produce better distributed data
17743, Unfortunately it does not look like tickets were issued in this manner
10270, Neighborhood: there are quite a few neighborhoods. Surely these can be reduced to a few classes
4951, Modeling
32567, Diff Common marking
24177, Things look good use that to train on the whole data set
43211, Model training visualization
7032, Type of Paved driveway
42248, Transforming data to reduce skew
33552, How to win
40055, Let's try to find good learning rate boundaries
35053, Train the model
26406, The survival chance of a passenger with 1 or 2 siblings/spouses is significantly higher than for a single passenger or a passenger with 3 or more siblings/spouses
42134, Quadratic Weighted Kappa
13280, Support Vector Machines are supervised learning models with associated learning algorithms that analyze data used for classification and regression analysis Given a set of training samples each marked as belonging to one or the other of two categories an SVM training algorithm builds a model that assigns new test samples to one category or the other making it a non probabilistic binary linear classifier Reference Wikipedia
42097, Dividing the train data set into dependent labels and independent pixel features
22233, Validation loss across epochs
32165, Generate test predictions
15519, we can look at Ticket again
36194, The date on which the customer became the first holder of a contract with the bank cannot help fix the issue with missing data in the new customer index
43043, Any help on this is appreciated
35826, To learn about boxplot you may follow the links
35480, ORB Oriented FAST and Rotated BRIEF
25787, There are 2 missing values here, so let's replace them with the most frequent value
35818, Add sales for the last three months for similar item item with id item id 1
9651, Missing Data Percentage Visualization for Clarity
12104, Evaluation
10525, We have removed co linearity from our dataset manually examine each feature and remove non linear features from the dataset
17014, PassengerID
27875, Average daily sales
11683, Feature Engineering
8147, Defining Training Test Sets
4270, These were Sal, but in order to have the same structure I put them in Sev
541, New Feature Family size
21626, One hot encoding get dummies
21843, RNN Architecture for MNIST Classification
28865, Attention
7478, 3d ParCh
7628, list of models
15574, Update: after trying a few classifiers, I realize that this really happens. The remedy could be to subtract one from TicketGroupSurvivors where a singleton survived
26068, Another experiment we can do is try and generate fake digits
23598, I first went for merging the two sentence sets into one, so that the system is trained over all the words, sentences, and document vectors; I would not be able to do that without concatenating both of these
43347, We fit our model to the X train and y train datasets to train it
15830, KNN Classifier
9879, Correlation Between Parch Survived
7479, 3e Fare
9145, Great. Since there are only 2 and 3 missing values in these columns respectively, and their values have the same range, I just set the null values to the value of the other column
17688, AGE SURVIVAL
34289, Classifier
37799, Model Evaluation
9700, Concatenating numeric and categorical features
18500, Kaggle Submission
79, Combine train and test data first, and for now assign all the null values as N
16282, Feature Importance
12697, The full names as they are will not be helpful to us, although there's probably something useful within the title
679, The easiest method to combine different classifiers is through a Voting Classifier. It does exactly what the name suggests: each individual classifier makes a certain prediction, and then the majority vote is used for each row. This majority process can either give all individual votes the same importance or assign different weights to make some classifiers have more impact than others
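A minimal scikit-learn sketch of a weighted voting ensemble (the base estimators and weights here are illustrative, not the ones from the source):

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

voting = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier()),
                ("svc", SVC(probability=True))],
    voting="soft",      # average predicted probabilities
    weights=[1, 2, 1],  # optional per-classifier importance
)
# voting.fit(X_train, y_train); voting.predict(X_valid)
```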
6733, GarageArea Vs SalePrice
16122, Perceptron
1109, Logistic Regression
11245, Multicolinearity
31049, Get ID
7298, The plot of SalePrice is skewed in nature
11465, Support Vector Machine SVM
38989, Training for negative sentiment
2107, This residual plot is fairly problematic and we address it later on
19626, Loading and normalizing the dataset into Pycaret
4554, Utilities: for this categorical feature, all records are AllPub except for one NoSeWa and 2 NA
3746, LotFrontage
12303, Random Forest
33655, Define Gini Metric
35337, Fitting on the Training set and making predcitons on the Validation set
27601, Submission
11223, Show both adjustments
6873, Categorical Nominal
31107, One-hot encoding a column with sklearn
19702, check if we have NaN values in our dataframe
25774, There were more males than females on the Titanic
22912, Evaluate and Improve Our Model
17388, Dendrogram
28692, Target Variable
43127, Predicting actual test set
13460, According to the Kaggle data dictionary both SibSp and Parch relate to traveling with family
42000, Sorting a certain column with a certain condition
23121, Findings
21151, First, let's prepare the data set containing predictions of our future models
19517, Checking Number of Partitions
14098, Support Vector Machine SVM
15103, We were able to fill in the gaps for Age and now have 1309 records
2243, Pipelines
2033, Class
7229, Miscellaneous features missing values
18947, Relationship between variables with respective to time
40429, Examine the variable importance of the metalearner algorithm in the ensemble
24188, For example here are some mistakes and their frequency
40871, Optimize Elastic Net
15168, We can do even more things with these values
40195, Split data into train and validation
9614, Box Plot
29993, Building a scoring function
40284, Focal Loss
12785, Define the Training as a Function
1248, take a look at the distribution of the SalePrice
8933, Label Encoding
21486, The difference is very significant, and surprisingly so. While spawning more threads than there are CPUs available isn't helpful and causes multiple threads to be multiplexed on a single CPU, it is unclear why that overhead causes xgboost to perform slower by several multiples
26851, Bigram Insights
39084, Embeddings
8756, See some start data
38959, It's always strongly recommended to check the pictures that go into your model after augmentation, to ensure that nothing strange happens
5314, Preparation for creating functions
16365, Sex = male, Title = Mr, Pclass = 3: the lowest chances of survival
19321, Training the Model
2419, First let s take a look at our target
40800, Data Pre processing
21663, Reverse order of a df
1553, Pclass
43018, Feature assembly
35071, Below is the definition of the standardizer function, which preprocesses each of the images as they are fed to the CNN
39947, Cross validation
16362, Name could be one of the interesting features: directly it doesn't provide any value, but it opens up the scope for feature engineering
9009, Set Fireplace Quality to 0 if there is no fireplace
24539, Total number of products by income groups
41762, Not surprisingly this search does not produce a great score because of 3 fold validation and limited parameter sampling
39194, Modeling
37222, Interesting. Very few vocabulary words are in common with the paragram vocabulary
1008, How about having another look at our features
40980, review How to count rows in each group by using
15407, Let's create a Married column and assign the value 1 to titles that indicate the woman is married, and 0 otherwise
38977, LSTM model with Pytorch utilizes GPU
27757, Removing Punctuation
11737, Here we are looking mostly linear again although it looks like it might be following a little bit of an exponential path
1429, Here I do some hyperparameter tuning with the n_estimators, max_depth, and learning_rate parameters
4571, NA for PoolQC means No Pool
6500, all necessary features are present in both train and test
34265, Compare one example again to verify that the normalization was done properly
8963, data engineering
21485, try different submissions with different thresholds
3254, Number of Houses Sold Every Month
5527, Create Pclass Categories
9852, XGBoost
35545, Use a pipeline to chain multiple estimators into one
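For example:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([("scaler", StandardScaler()),
                 ("clf", LogisticRegression())])
# pipe.fit(X_train, y_train) scales and then fits in a single call
```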
7692, Linear regression
2827, optimizing hyperparameters of a random forest with the grid search
41369, Sale Price HLS Low Lvl Bnk
24658, Select only March and April
32724, Clipping some outlier prices
35930, SibSp Parch
22695, Binary Classifier s
8381, Looking for missing values
42254, Use scikit-learn's function to grid-search for the best combination of hyperparameters
25939, Below is a comparison of the correlation without PCA and with PCA; after the PCA transformation it looks much better in terms of highly correlated variables
32680, The commented code below enables generating and saving a submission to the competition
33669, Year font
33849, Analysis on few extracted features
43263, Import the DecisionTreeRegressor class from scikit-learn
28024, Training Word2Vec
33444, ExtraTreesClassifier
9707, Ridge Regression
26101, Random Forest Cassifier
28460, First, I would like to collect all columns of type float in a list
13760, Model Data
7894, and drop the respective columns
8992, We expect that if MasVnrArea is 0, then MasVnrType is None
19353, Reading input file
12779, Submitting
27019, Training in progress Please do not disturb
35186, The highest correlation is between cont11 and cont12
32094, How to reshape an array
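A small example:

```python
import numpy as np

a = np.arange(12)
a.reshape(3, 4)   # 3 rows, 4 columns
a.reshape(3, -1)  # -1 lets numpy infer the remaining dimension
```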
27644, Now that all of our data is ready, we can go on to training the classifier
18130, Get the RandomForestRegression model's assessment of the top 5 most important features
26505, Now that the image size is reduced to 7x7, we add a fully connected layer with 1024 neurons to allow processing of the entire image; each neuron of the fully connected layer is connected to all the activations (outputs) of the previous layer
40328, Locations
29108, To use the fastai library we need to feed our data into their ImageDataBunch function. However
20261, Steps of Linear Regression
29365, K NEAREST NEIGHBORS
3560, Run on test data
8661, Instructions
5166, Download datasets
37234, Evaluation
39134, Input Layer
23552, Submission
37921, Lasso
13427, Define custom Scorer function
23739, Correlation and Feature Importance
26316, Gradient Boosting Regression
4115, Exploratory Data Analysis
36756, A training set of 42000 images
28759, Stepwise Model Selection
34859, RF
39959, Averaging Regressors
2116, It looks like the target encoding with the mean is very powerful and not very correlated with the other features
25857, Splitting dataset
25278, To get a set of transforms with default values that work pretty well in a wide range of tasks it s often easiest to use get transforms
19553, CNN Model
39146, All folders are created. The next step is to create the images inside the folders from train
36584, Visualizing and Analysis
24887, convert non numeric features to categorical values
2804, get_cat_feats: returns the categorical features in a data set
13972, Embarked vs Pclass
19779, Let s take a look at the architecture
17339, XGB
22151, BIMODAL DISTRIBUTIONS
1578, Because we are using scikit-learn, we must convert our categorical columns into dummy variables
22087, Normalization
30427, Load Models
7921, Combine features
25245, Prepare submission
6112, Ah what a pity mistake
11766, We are using a stacked model of our ElasticNet, Kernel Ridge, and Gradient Boosting regressors, with our Lasso model as the meta-model, in order to predict our outcome
579, Quantitative 1stFlrSF 2ndFlrSF 3SsnPorch BedroomAbvGr BsmtFinSF1 BsmtFinSF2 BsmtFullBath BsmtHalfBath BsmtUnfSF EnclosedPorch Fireplaces FullBath GarageArea GarageCars GarageYrBlt GrLivArea HalfBath KitchenAbvGr LotArea LotFrontage LowQualFinSF MSSubClass MasVnrArea MiscVal MoSold OpenPorchSF OverallCond OverallQual PoolArea ScreenPorch TotRmsAbvGrd TotalBsmtSF WoodDeckSF YearBuilt YearRemodAdd YrSold
15362, pclass: a proxy for socio-economic status
30591, Insert Computed Features into Training Data
30396, Callbacks
26389, Evaluation
6420, We can drop MSSubClass YrSold MoSold as they have no impact on SalePrice
39023, Machine Learning Data Analysis
16336, Linear SVM
35176, Compile 10 times and get statistics
22052, First orientation get some hints on what to look for in our EDA
3440, Since our Jonkheer is 38 years old, let's include him in the list of titles we change to Mr
42815, Configuration
42141, Autoencoder Model
52, Random Forest
23336, BanglaLekha Isolated
19151, Write the data to file to save it for a new session
15610, Gender Feature
19977, MLP Sigmoid activation ADAM
32148, How to drop all missing values from a numpy array
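Boolean masking does it in one line:

```python
import numpy as np

a = np.array([1.0, np.nan, 3.0, np.nan, 5.0])
a[~np.isnan(a)]  # array([1., 3., 5.])
```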
3890, Range
11289, Replace missing values with most common value
3403, Normality
35421, Checking shape of training data and labels
15904, Exploring the Data Distributions
6478, Relationship with categorical features
35665, Features with a lot of missing values
15748, Embarked is likely S for both
3723, Hyperparameter Tuning
38780, Express X axis of training set frequency distribution as logarithms and save standard deviation to help adjust frequencies for time trend
42356, Removing twitter handles
22772, We build the iterators
12602, EDA
32204, Concatenate train and test sets
12938, Deck: where exactly were passengers on the ship?
43404, These are the parameters and their ranges that will be used during optimization
15898, SHAP values for selected rows
42549, Question character length correlations by duplication label
17269, First import all required machine learning libraries
24309, accuracy vs
4591, We first convert GarageQual GarageCond and GarageFinish to ordinal numerical variables
11507, LASSO Regression
1416, TypeOfTicket vs Survived
24225, Sort the training set. Use 1300 images each of cats and dogs, instead of all 25000, to speed up the learning process
6717, Relation between Continous numerical Features and Labels
25201, Evaluating our model
10405, Drop the Id column since it isn't worth keeping
1984, Great! Our data is ready for our model
14802, Submission
22980, Promotion impact per store week
40385, According to the tensorflow website The tf
33144, To help the convergence of the map, it is a good idea to limit the number of points on which the training will be done
42218, A second max pooling / dropout layer will now be added
4055, Defining Transformations
13824, Now let's write the model
10230, Try the XGBoost classifier model as well
40655, I'm happy to lose 5 of the features and not have to worry about a proper imputation strategy
21644, Split a string column into multiple columns
404, XGB
20788, From these regplots we have confirmed there are outliers so we decide to remove them
30566, We can test this function using the EXT SOURCE 3 variable, which we found to be among the most important according to a Random Forest and a Gradient Boosting Machine
6817, There are lots of ways to deal with missing values
29102, LSTM Classic
27507, Compiling and Fitting the Model
7293, Random Forest
42550, Model starter
26670, Occupation Type
29699, Let's take them through one of the kernels in the next convolutional layer
12665, Now that we have trained the classifier and made some predictions for the cross-validation set, we can test the accuracy of the predictions using the following metrics: recall, precision, F1 score, and accuracy
15133, Replacing Rare value with one of clear titles
21070, Testing data comes in
3290, we create an object of this class and use it to fit and transform the train data
19581, Aggregate Sale
11544, EDA and Visualization
8087, Blend with Top Kernals submissions
39757, Here we'll try to create meta features to better understand the structure of the questions
34663, Item counts versus item prices
42609, Set Up Progress Tracking
35066, Complexity graph of Solution 4 1
407, Gaussian Process
24886, drop useless features
24357, investigate for errors
34329, Adding augmented data did not improve the accuracy on the validation set
14853, Family type
39456, checking missing data in bureau
29587, A solution to this is to renormalize the images so each pixel is between
34052, XGBOOST Training Cross Validation
12455, This means there's no garage
31547, Replacing each missing value with the most frequent category
20064, Adding column item main sub category
37196, Skewed features
8891, LightGBM
1293, Prediction 5 5
27493, SUBMIT PREDICTIONS TO KAGGLE
36341, Define Neural Network Parameters
31081, UNIVARIATE SELECTION
13447, Data Cleaning
38008, All info about a single department
23806, Drop Low Info Variables
33357, Question 2 Create a decomposition plot for a city of weekly sales
18296, Data Augmentation Oversampling
41484, Random Forest Training
3471, For this model precision can be thought of as the ability of the model to pick out the true survivors and not label passengers as survivors when they in fact perished
3634, Pclass Survived Vs Age
17466, Age
40183, Long selected texts are not predicted correctly
13112, Linear SVC
27463, Missing Values
449, Box Cox Transformation of highly skewed features
39408, Pclass
27316, Submission
3701, we have a model
33467, CompetitionDistance
27033, Use this method to predict test.csv
31578, I could be wrong, but in my opinion this is too imbalanced
38045, Central Limit Theorem
33753, Numerical data processing
33617, Cleaning of test data for prediction purpose in the same manner as train data
7060, Scaling with RobustScaler
10787, I would like to check what the score would be
4569, SalePrice is not normally distributed
13472, Nearly all variables are significant at the 0
15563, Feature engineering
3968, LightGBM
36817, Now that we've downloaded our datasets, we can load them into pandas dataframes. First, the test data
19706, Resizing the photos
17477, Voting Model
42142, simply visualize our graphs
13948, Categorical Variable
24847, As I was using an enriched dataset during the Week 2 competition I have to add the new countries to my dataframe and fill the missing data with median values
11397, Looks like these passengers share the same ticket information and were both singletons travelling first class
35143, Nice, we got an impressive 98
32849, Data leakages
5809, Combine train and test
38406, Look at the digits
42249, Considering highly correlated features
42957, I think the idea here is that people with recorded cabin numbers are of higher socioeconomic class and thus more likely to survive
36141, First we have to define a dictionary of hyperparameters we would like to check
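A typical pattern, sketched with an illustrative search space:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {  # hypothetical hyperparameters to check
    "n_estimators": [100, 300],
    "max_depth": [4, 8, None],
}
search = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
# search.fit(X_train, y_train); search.best_params_
```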
31727, Anatomy and Target
7035, Here we analyze correlation with the boxplots and missing values
24950, Dataset information Pandas Profiling
11137, Transformation of the target variable
8430, Correct masonry veneer types
9349, Create a pipeline
38892, Forward Propagation with Dropout
38095, Missing Values
33465, SchoolHoliday
33749, Using the learned model to predict the test set
21079, The correlation coefficients for the ps calc features are 0, so we drop them from our dataset
42625, Fatalities percentage
25178, Featurizing text data
3295, SalePrice distribution
22404, Fill in missing values with the more common status
2521, Confusion Matrix
8712, The Gradient Boosting gives the best performance on the validation set and so I am using it to make predictions to Kaggle on the test set
7992, Encoding Label
31060, Negative lookahead: succeeds if the passed non-consuming expression does not match the forthcoming input
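A tiny regex illustration:

```python
import re

# 'foo' only when NOT immediately followed by 'bar'
print(re.findall(r"foo(?!bar)", "foobar foobaz foo"))  # ['foo', 'foo']
```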
32107, How to create a 2D array containing random floats between 5 and 10
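For example, with the modern Generator API:

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.uniform(5, 10, size=(5, 3))  # 2D array of floats in [5, 10)
```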
22782, Histogram Type
39820, It's all set; now let's build our model, shall we?
31779, Loading datasets
4027, Interesting columns
15452, Change Survived to an integer
38967, Word2idx and embedding dicts for LSTM model
29805, Implementation of GloVe via glove-python
5077, How about our cleaned data and some other classifiers
155, Import libraries
32250, Visually inspecting our network against unlabeled data
1808, Dealing with Missing Values
12790, We thus export the predictions as a Comma Separated Values file
26668, installments payments
2809, Heatmap
5414, Except for those with more than 4 family members, basically the bigger the family size, the more likely they were to survive
14097, Random forest output file
27135, We ll have a look at the correlation between all the features with the help of Heatmap
15861, Model definition
12975, We cannot make a prediction about survival by using passengers' names directly, but there might be a relationship between survival rate and titles
21416, Label encoding: making it machine-readable
42004, isin filtering by conditions
33783, Examine Missing Values
8803, Replace the NaN in categorical features with NA, and with 0 in numerical data
69, This is a sample of the train and test datasets
19370, Missing null values in categorical columns
10923, To find the degree of a graph, figure out all of the vertex degrees
20578, Encoding Categorical Data
36677, stop_words: string ('english'), list, or None (default)
12133, XGBRegressor 2
22232, Fit the model
17409, Feature importances generated from the different classifiers
32339, latitude and longitude
26462, In our data formatting, we generate the respective labels for dogs (1) and cats (0) for our training data. The file path will also be collected as a column of our dataframe, so that it can be used to load and train on our images
4705, Features coefficients
5948, create a submission csv of prediction
15075, Age Group
22065, Findings for large spaCy vocab
42620, Import Libraries
9761, Let's check TitleGroup; I saw that there are so many Mr
21227, Data augmentation
20061, Import libraries
10367, Missing Values
3451, For the sake of modeling, let's add Mr
10426, Linear Regression
37686, Plot Predictions
36396, Feature engineering (FE)
37798, Linear Regression
38781, Adjust frequencies for time trend
21436, Make LotArea = log(LotArea)
9826, Fare Passenger Fare
17602, Support Vector Machines
41941, Save a batch of real images that we can use for visual comparison while looking at the generated images
7647, Generally, if a variable is numerical, we replace nulls with 0, the mean, or the median
12827, In the same way, we can check the family size of travellers who did not survive
3583, LotFrontage Linear feet of street connected to property
31011, We have used categorical cross-entropy as the cost function for that model, but what do we mean by cost function?
2319, Fitting a Linear Model
18314, Explore Items Dataset
2344, Gradient Boosting Classifier
32642, Warning: PyCaret setup works only for the last imported PyCaret library
15710, Gender per Passenger Class vs Survived
24716, Eigenvector 4
15584, Only 102
9050, Garage Cars
21230, Run the following cell to visualize how many images we have in each of our datasets
7783, value counts in ascending order
21560, Binning
23882, check the dtypes of different types of variable
11138, Here is the difference between Box-Cox and log values; the difference is substantial at the value of a house
32701, Here I am concatenating the data to apply cleaning on both the train and test sets
5832, we have 18 object-type features and 12 numeric
10948, again size of full data set
23247, We fill the Embarked s missing values with S I prefer it because it is the most frequently used
35438, The best part
33107, Submission
2263, Age
14103, Most of the children survived
21656, Reduce memory usage of a df while importing (Trick 83)
32588, Continue Optimization
4955, ElasticNet regression model which basically combines Ridge regression and Lasso regression
27905, Simple Model Catch Failures Decision Trees
38515, Distribution of top unigrams
485, Defining a basic XGBoost model
6115, I think there should be a strong correlation between Garage Area and number of places for cars
35406, Applying decision tree
41323, The data consists of a tabular format where every pixel is a feature
3004, Ridge Regression
38549, The Meta Features Based On Word Character
24099, Limiting the outliers value
22948, visualize the Cabin Number distribution of the data
21918, Ridge model
42043, Inserting the average age per Initial for NaN values of Age
1915, MasVnrType and MasVnrArea
13738, Creating O/P (output) file
40066, Data Processing Reduce Skew on x
14891, Apply the pairs plot
22953, Show some image
26197, We use a different set of data augmentations for this dataset we also allow vertical flips since we don t expect vertical orientation of satellite images to change our classifications
34479, Preprocessing Data
16945, XGBoost
14151, to examine the data types of our features
22518, Categorical encoding
25273, Display Validation Accuracy Loss
28122, Exploring Target Column
616, These are two women that travelled together in 1st class were 38 and 62 years old and had no family on board
32502, Generating csv file for submission
3680, Key Takeaways
25002, Run predictions for given test data and submit the output file in required format
25379, Evaluate the model
1705, Listwise Deletion Dropping rows
30908, We get only around 140 active features, which is far fewer than the total number of features
1920, All categorical variables contain NaN, whereas continuous ones have 0
19048, There are a number of NaN values within each column that we want to get rid of
27857, Handling categorical variables
15857, Create final feature set
18304, append test data to matrix with next month s date block num
1857, Lasso
22688, Prepare submission file
18526, For the data augmentation I chose to
36352, Preprocess the Inputs
5521, Extract Title data
813, Filling missing values
7700, We try other kind of regressors such as XGBRegressor and ExtraTreesRegressor
32723, Calculating different aggregations
32720, Predicting on a random sentence
26843, DAE
15154, Filling Age null Values with random numbers generated between its mean and standard deviation
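A minimal sketch of this imputation, assuming a DataFrame df with an Age column and reading "between its mean and standard deviation" as the interval [mean - std, mean + std]:

```python
import numpy as np

# Assumed: df is a DataFrame with a partially missing "Age" column.
age_mean, age_std = df["Age"].mean(), df["Age"].std()
n_missing = df["Age"].isnull().sum()

# One random integer age per missing entry, drawn from [mean - std, mean + std)
random_ages = np.random.randint(int(age_mean - age_std), int(age_mean + age_std), size=n_missing)
df.loc[df["Age"].isnull(), "Age"] = random_ages
```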
39100, Dale Chall Readability Score
33081, Another way to visualize this lack of data is to display the whole dataset as follows
28653, Neighborhood
17438, make some graphics by type
16272, Shape
10353, Principal Components Analysis
30347, Get data from
7793, reload the data so we can have a fresh start
21521, Lets visualize the Accuracy over the Epochs
41193, check if there are any numerical columns left with missing values
36266, Looking at Port of embarkation
16100, Embarked Feature
13998, Split train data into training data and validation data
20208, Standardize data
40466, Other Features
27327, here we optimize the n_estimators size for RandomForest
3495, Parameters for the best model
40266, Total Basement Surface Area vs 1st Floor Surface Area
31684, Training Autoencoder
2995, Handling Missing Values
10124, Boys, girls and women have a higher chance of survival, while men have the lowest
13388, 0 indicates that the person is travelling with family and 1 indicates that they are travelling alone
40038, Predict on whatever you like
26207, First we check whether all images in the train folder are in jpg format. It is better to check because if there is a mixture of image types we may face trouble later on
29844, Transforming some numerical variables that are really categorical
20314, Another interesting piece of information may come from looking at the 10th to 15th most bought products for each cluster, which do not include the generic products bought by everyone
32402, Submission
14840, Another interesting thing to look at is the relation between Age Pclass and Survived
7955, Tuning on input dropout
551, RFC features not scaled
35586, Data Imbalance
492, visualize missing values using seaborn
11022, Lets fill the two blanks in Embarkation attribute with S
36070, Prediction
20521, remove the outliers
3003, Ridge Regression
27970, Data set fields
39250, EXTEND DATAFRAME TO PRODUCT SHOP x ITEMS EVERY MONTH
29777, Required Imports
24972, Dataloader model
1991, use test dataset
12881, Parch and SibSp
10636, Start with
23463, Training set
21751, Weighted Averaging
12690, Fare
28235, Added one more layer with filter 128
42390, Price should be a good proxy for item id and gives it a numerical value instead of a categorical value
1928, MiscFeature
32626, TFIDF Features
41847, Moscow Price per square meter
17541, Find best parameters for XGBClassifier using GridSearchCV
38266, now we remove all the URLs and the HTML tags
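A hedged sketch of such a cleaning step with two regular expressions (the exact patterns used in the notebook are not shown here):

```python
import re

def clean_text(text):
    """Strip URLs and HTML tags from a tweet (a minimal sketch)."""
    text = re.sub(r"https?://\S+|www\.\S+", "", text)  # remove URLs
    text = re.sub(r"<.*?>", "", text)                  # remove HTML tags
    return text

print(clean_text("Fire near <b>LA</b> http://t.co/x3Kq"))  # 'Fire near LA '
```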
37415, Test Small Dataset
7454, Missing values in Cabin
21669, Submission
5165, Import libraries
946, Tune Model with Hyper Parameters
28500, Importing Libraries
32223, Seems like a pretty uniform distribution of the digits 1-9; take a look at what the numbers themselves actually look like
2283, Test new parameters
26469, Lets fit our model with the training data and evaluate its performance
36703, True Positives TP True Negatives TN False Positives FP and False Negatives FN are the four different possible outcomes of a single prediction for a two class case with classes 1 positive and 0 negative
13448, now no missing value in Embarked
40975, size method counts rows in each group
12077, Modelling
13067, Decision Tree
33870, XGBoost Model
25651, Generate test predictions
11533, Discrete Variables
9943, Creating a feature which tells us whether the person is travelling with someone or not, according to matching ticket numbers
19364, with reference to the target the top correlated attributes are
43062, check the distribution of the standard deviation of values per columns in the train and test datasets
15052, The same LastName doesn't imply the same family, but the same family usually shares the same LastName
22145, Model definition and generating predictions
4758, Correlation coefficients
9680, Finalize Model
38833, Run inference on the test data
36573, Make predictions
33472, COVID 19 tendency in China
37432, Which are the most common words
5306, The issue is that after your scaling step the labels are float-valued, which is not a valid label type; convert y_train and y_test to int or str to make them work
34166, we can translate the new dataset
32688, First we create generators for augmentation of training data and for normalization of validation data
10171, Count plot to compare two features
29549, Bigrams
16619, CABIN EMBARKED Age
27375, adding the lags to the test data
25947, Outlier Removal
14217, The Chart confirms 1st class more likely survivied than other classes
6515, As the continuous variables are all skewed we use logarithmic transformation to visualize
21953, Data augmentation
10951, Check the structure of the full data set
15612, Title Feature
31624, the test set
13161, Unscaled Features
33319, Logistic Regression
35099, clicks train csv
9301, OLS experiment multicollinearity
16757, Missing data and Combine
15097, Neural Network
37172, The training error looks pretty good
41084, Albert Base
1125, Exploration of Age
7066, GridSearch for XGBoost
3846, Option 3: replace values with the median Age of the Pclass
2813, Saving the Id column for later use
27376, testing the lags impact
13605, When handle_missing='return_nan', the Embarked column is replaced by two columns, Embarked_1 and Embarked_2
25795, Age and Fare are also included in the categorical data
26972, Define Train Model
26203, This is a serious problem one can run into: when you normalize the bounding box it may exceed 1, and this causes an error, especially if you decide to augment the images as well
27761, Removing stopwords
6201, Random Forest Classifier
33836, Pros
20264, Artificial Neural Network ANN
39076, Training
38034, take a look at these features and plot them on a box-and-whiskers chart
21819, Correlations
39995, Submission
31248, Title
5547, Specify Architecture
32092, How to replace items that satisfy a condition with another value in numpy array
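For example, with np.where (out-of-place) or boolean indexing (in-place):

```python
import numpy as np

arr = np.arange(10)

# Replace all odd values with -1, keep the rest (out-of-place)
replaced = np.where(arr % 2 == 1, -1, arr)

# Or modify in place with boolean indexing
arr[arr % 2 == 1] = -1
print(replaced, arr)
```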
5709, KitchenQual: only one NA value; as with Electrical, we set the missing value in KitchenQual to TA, which is the most frequent
20790, Highly Correlated Features
39867, Feature Engineering
166, To be able to make a valid submission a good habit is to check the sample submission file provided by Kaggle to become familiar with the needed format
29818, Vector Averaging With pretrained Embeddings
10562, Merge Numeric and Categorical Datasets and Create Training and Testing Data
8333, let s have a look at the target distribution
26003, Log loss, or logarithmic loss, is a classification metric based on probabilities: it quantifies the accuracy of a classifier by penalising confident false classifications. In other words, minimising the log loss means maximising the accuracy, and a lower log loss value means better predictions
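A minimal sketch of the binary form of the metric, clipping probabilities to avoid log(0):

```python
import numpy as np

def log_loss(y_true, y_prob, eps=1e-15):
    """Binary log loss: -mean(y*log(p) + (1-y)*log(1-p))."""
    p = np.clip(y_prob, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

print(log_loss(np.array([1, 0, 1]), np.array([0.9, 0.1, 0.8])))  # ~0.145
```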
40409, Price
4986, Sex Age and Fare analysis
19094, Having more siblings can be correlated with a lower survival rate
12263, Data
21458, Split the Training dataset into Train and Valid
8944, Fixing Exterior1st Exterior2nd
37152, To find the images that were wrongly predicted, I followed these steps
2717, Kernel ridge regression
19439, Defining hyper parameters
14983, we can replace a few titles: Ms can be replaced with Miss, as they refer to the same group, and a few like Major, Capt, etc. can be replaced with Others
7260, Sex Feature
9208, Salutation Distribution
26548, Evaluate the model
32995, Train with PCA reduced data
12235, The residual sum of squares is the top term in the R2 metric
42950, Pclass Feature
34477, Removing the id column (to delete a row use axis=0, a column use axis=1)
20800, Create TotalSF feature
1091, further explore the relationship between the features and survival of passengers
38814, Combinations of TTA
11986, plotting the correlation maps of some features say first 30 features on SalePrice
5121, Train and Validation sets split
25048, Looks like the products that are added to the cart first are more likely to be reordered than the ones added later. This makes sense to me, since we tend to first order all the products we buy frequently and then look out for the new products available
24845, Preparing the training data
38424, Convolutional networks
34605, Extract train features from CNN
17977, Name
31823, Random over sampling
21944, Spark
10251, Go to Contents Menu
27392, Tuning n estimators based on small learning rate
36053, Fill missing timestamps
39411, SibSp
42158, Merge train and test data
9323, For example, this is how Foundation looks
7883, Following the graphics below, the age can be grouped into fewer classes
36110, Lets concat Train and Test datasets
13327, Age: completing the feature with its mean value
33658, Set parameters
20921, model
19807, to encode a categorical variable with k labels we need k-1 dummy variables (see the sketch below)
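For example, pandas' get_dummies with drop_first=True produces exactly k-1 columns, with the dropped level acting as the baseline:

```python
import pandas as pd

df = pd.DataFrame({"Embarked": ["S", "C", "Q", "S"]})

# drop_first=True keeps k-1 dummies: the dropped level is the implicit baseline
dummies = pd.get_dummies(df["Embarked"], prefix="Embarked", drop_first=True)
print(dummies)  # columns: Embarked_Q, Embarked_S (C is the baseline)
```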
35629, so our digits are in a space in which every pixel is a variable or a feature
35475, Use Noise Reduction
36743, Here a dataframe is created to store whether an event exists on the next day
20026, Boxplot gives even better insights
22134, select only top 10 correlated features
42926, Dendrogram
19377, Cross validation
40116, seqstatd function returns numeric data
28527, WoodDeckSF
21517, Creating Our Model
36284, Whoa, that's a lot of titles. So let's bundle them
37044, y_test consists of class probabilities; I now select the class with the highest probability
10114, Again, people with a larger number of children/parents on board didn't survive, and neither did people who were travelling alone
8960, Creating output files
34351, There s no easy way of using the fast
27139, Category 1 Type of dwellings
9253, Linear Regression
28132, Will Implement It soon
19334, The training accuracy steadily increased and plateaued while validation accuracy is also consistent
30154, randomforest and gradientboostingregressor
15283, Survival by Age and Fare
42972, Reading data and basic stats
4646, Swarm Plot
1793, In probability theory and statistics, kurtosis is the measure of the tailedness of the probability distribution of a real-valued random variable. In other words, it is a measure of the extreme values (outliers) present in the distribution
2277, Which is the best Model
20635, let's apply this vocab to our train and test datasets: we keep only those words in the training and testing datasets which appear in the vocabulary
29458, Linear SVM
6706, Categorical Features
20856, We use the cardinality of each variable to decide how large to make its embeddings
30776, Train linear regression model and make prediction
18506, DICOMs
21011, Exploratory Data analysis with dabl
20617, Decision Tree Classifier
13885, Passengers Age
9105, Cool
33025, Compiling the model with the right Optimizer loss and metric
3904, Discover your data: what it looks like
11710, Visualization
29982, Plot the cost and accuracy
35466, Visualize the skin cancer melanoma
32332, correlation map
15649, Gaussian Naive Bayes
9128, Lot frontage
2306, Python how to make a continuous variable categorical 2 ways
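Two common ways in pandas are fixed-width bins (pd.cut) and quantile bins (pd.qcut); a small sketch:

```python
import pandas as pd

ages = pd.Series([2, 15, 28, 45, 63, 80])

# Way 1: fixed-width bins with pd.cut
print(pd.cut(ages, bins=[0, 18, 60, 100], labels=["child", "adult", "senior"]))

# Way 2: quantile-based bins with pd.qcut (roughly equal-sized groups)
print(pd.qcut(ages, q=3, labels=["low", "mid", "high"]))
```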
998, corr(Fare, Pclass) is the highest correlation in absolute value for Fare, so we'll use Pclass again to impute the missing values
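A minimal sketch of a class-wise median imputation, assuming a DataFrame df with Fare and Pclass columns:

```python
# Fill each missing Fare with the median Fare of the same passenger class.
df["Fare"] = df["Fare"].fillna(
    df.groupby("Pclass")["Fare"].transform("median")
)
```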
13623, Before we start we need to divide the training data into a training set and a testing set
2415, These are all the categorical features in our data
23068, Command Center feature engineering pipeline classifier
35869, Training
43389, we choose one of these successful examples and plot its related perturbed image over a range of epsilons
33828, You can also select to drop the rows only if all of the values in the row are missing
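For example, dropna(how='all') keeps any row that still has at least one non-missing value:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": [1, np.nan], "b": [np.nan, np.nan]})

# how='all' drops a row only when every value in it is missing
print(df.dropna(how="all"))  # keeps row 0, drops row 1
```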
15211, Several features requires imputation and normalization
33333, FEATURE 6 of MINIMUM PAYMENTS MISSED
38428, Improving the CNN architecture vol 2
3249, Distribution of the Target variable Sales Price
7984, Merge Porchs
11378, Lasso
37185, Importing important libraries
8753, Estimator
16878, Age VS Survival
33879, TotalBsmtSF Total square feet of basement area
19389, Final model to train and predict
37428, Distribution of the number of punctuation marks in tweets
38152, we get to define a space of multiple configurations
1402, Embarked vs Survived
14259, Converting String values numeric
30366, Test cv2 without conversion
19606, 7 columns have only one value 0 and can be removed
41421, Deploy models on validation set choose the best one
1558, Women and children first goes the famous saying
10531, Outliers
8101, Feature Engineering
3028, We use the k-folds cross-validation function with K=10, which is the number of holdout sets
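A hedged sketch with scikit-learn, assuming a feature matrix X and target y (the Ridge estimator here is illustrative):

```python
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_score

# 10-fold CV: each of the 10 folds serves once as the holdout set
kfolds = KFold(n_splits=10, shuffle=True, random_state=42)
scores = cross_val_score(Ridge(), X, y, cv=kfolds,
                         scoring="neg_root_mean_squared_error")
print(-scores.mean())  # average RMSE across the 10 holdout sets
```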
17963, Dedicated datatypes for strings
9399, Once finished or even if you interrupt the process in the middle you find the best parameters stored in your Trial object
1673, use our logistic regression model to make predictions Remember we are still in our local validation scheme e we do predict Survival but without touching the testing dataset yet
20245, Extract the titles from the names
19527, Performing Reduce
21137, We want to apply one-hot encoding, so we need to check the number of levels in our data set
2549, Age: creating bins; as we know the young were last to be rescued, so let's explore this relation
6988, Number of fireplaces
30630, Correlation analysis
32031, Do the same prediction and replacement also in test X
1427, SVM
36897, Adadelta
31762, Prediction
36758, we can randomly visualize some images from the training set by plottimg them
43133, Fatalities 3 Best Predicted
8121, AdaBoost
2697, start by imputing features with less than five missing values
39765, Here I also use a CountVectorizer and a TruncatedSVD with 9 components to identify the nine main topics of insincere questions, but with the ngram_range parameter set for the CountVectorizer
38630, use 10 of the train data as validation for early stopping
40443, MSSubClass
23316, Fix category
22139, TotalBsmtSF Total square feet of basement area
11642, Artificial Neural Network
28926, Sample
23290, All
41212, We are pretty close to the final prediction. If we now apply Naive Bayes, we can get the final prediction by multiplying these 201 terms as follows
39263, Import data
40652, use validation fold models to predict on test set
23819, separate the dataframe to study only the categorical features, their mutual relationships, and also their relationship with the target column y
3007, Isolation Forest
32682, MultiClass Classification
31733, One Hot Encoding
20694, How to Use Advanced Model Features
39403, Binary Features Exploration
5415, Without a doubt, the proportion of women who survived is much higher than that of men
15870, Cross validation
31513, Class distribution
35553, Bagging
6679, Gradient Boosting
43269, Instantiating the rf object from the RandomForestRegressor class
8930, Masonry veneer Features
28508, This is the code for the GUI application that I made for predicting numbers
16239, The fourth classification method is DecisionTreeClassifier
645, The 1 means that our boundaries are slightly shifted in terms of the real Fare
26090, Building new model and using batch normalization
9847, Continuous variables
41239, Ridge and SVM classifier for text data
38462, create a method to process any input string
15994, Searching the best params for Logistic Regression
29886, Showing Confusion Matrices
12171, Feature Engineering
8851, Submission
13870, We split the dataframe to get our original passenger data from the training set and the test set
29023, How about a nice boxplot for our numerics
1767, Do the same for test data
7534, Most of the people paid a fare of 0-80. Fare varies based on Pclass and survival: survivors paid a higher fare than people who died, so we need to utilise the Fare column. Since Fare as an integer column will not be useful, let's make it categorical
13214, Evaluating our model with ROC curve and AUC
12528, Before doing it, let's dig deeply into the data description
42820, Model
4690, Ordered
5256, Forecasting Model Experiments
36791, Secondly we try the Porter Stemmer
27342, Predicting labels and saving in csv file
19143, Model 3 with AdamOptimizer
40440, Training The Model
28512, Look at correations with SalePrice
8437, Kitchen Quality Miss Values Treatment
18634, the first holder date starts from January 1995
28815, Loading our Model
7335, Encoding sex feamle 0 and male 1
2362, Probability Predictions
15751, Clean Data
23315, Load data
10619, Looking at outliers in Class 1, it is obvious that the mean of Fare is highly affected by these values
17704, DROP THE UNNECESSARY COLUMNS
32029, We predict age for all rows in train_X, feeding all columns except Age as input
1076, Pre processing
35185, Correlation Analysis
23534, Please upvote if you like
36618, Using Sci kit Learn library
9896, give a threshold value for family size
8044, Cabin
25046, Aisle Reorder ratio
2469, Feature Importance
24111, Creating Model
13830, Title
5167, FE based on my kernel Titanic: Comparison of popular models
40952, Correlation Heatmap of the Second Level Training set
3031, we use 10-fold stacking: we first split the training data into 10 folds
12180, There is high percentage of values missing in the age feature so it makes no sense to fill these gaps with the same value as we did before
6025, We have a skewed target, so we need a transformation
23085, Train your model with your trained dataset
27777, Using External Data From Keras Datasets to use that data as training data
20453, Go to top
7404, take a look at our response variable SalePrice
25507, IMPORTING MODULES
11763, Stacking Models
20741, GarageType column
12817, Out of 342 survived travellers there are 233 female and 109 male
9120, If we took the log of the LotArea would this make its distribution more normal
15290, Creating categories based on Fare of passangers
17640, As some algorithms such as KNN SVM are sensitive to the scaling of the data here we also apply standard scaling to the data
38662, Passenger Class
34641, Multinomial NaiveBayes Classifier
18343, There is an order present in the variable values (Excellent, Average, Typical, Fair, Poor); these were encoded to preserve that order, with the best group having the highest number
15095, Stacked Model Learning Curve
35478, Calculating Area and Parameter of cancerous part of cell
30902, deal with regionidzip first
2638, This tells 3 value occurs 491 times 1 value occurs 207 times etc
14448, go to top of section engr2
26182, Feature Engineering
2751, Another way to fill categorical values is to use ffill or bfill
5544, Create Hyper tuned model
4635, Groupby and aggregation
3531, With qualitative variables we can check distribution of SalePrice with respect to variable values and enumerate them
2297, When is it an array When is it a Dataframe note the difference vs
22017, The most important feature for XGBoost is var. According to a Kaggle forum post (customer satisfaction forums, data dictionary thread)
18717, After eyeballing the graph let s choose a maximum learning rate
17889, Mapping rare titles to rare
3456, To tune the parameters of the tree model we ll use a grid search
6449, Hyperparameter Tuning
19046, We can now explore the distribution of the data
11385, Engine s status analysis
13528, Manual FE
22691, Train and predict
30465, Confusion Matrix
36851, print Classification Report and Accuracy
22490, Bonus6 Chord diagram in Python
35751, Some ML algorithms need scaled data such as those based on gradient boosting
12373, Separating categorical and continuous data fields
27928, Before going on to form the data pipeline I ll have a look at some of the images in the dataset and visualise them with their labels
29701, let's take them through one of the kernels in the first max-pooling layer
5590, Factorized 2 of the columns, which are Sex and Embarked
42454, Handling imbalanced classes
8563, Finding Unique Values
2492, Observations
6979, Special cases
23408, Keras Model Starts Here
42656, In order to understand better the predictive power of single features we compare the univariate distributions of the most important features
13065, Before we try various models the data needs some additional pre-processing. Specifically, we should convert the categorical features (Sex, Embarked) to numerical values
4553, BsmtQual BsmtCond BsmtExposure BsmtFinType1 and BsmtFinType2 For all these categorical basement related features NaN means that there is no basement
19530, Creating RDD from File
14637, work on filling the missing values for Age
10949, First five variables
17941, Name
5861, TODO Intro for K NN model with PCA
3165, Almost done with the data preparation but we need to express SalePrice as a logarithm since that s what the official error metric uses
27111, We have more missing values in test dataset than train dataset
6109, We fill missing LotFrontage values with square root of LotArea
7000, Unfinished square feet of basement area
32327, log error
29603, Examining the Model
14495, going for KNN
5930, Missing value function
36242, First sale: there are multiple items sold for the first time, which the shift features do not cover, so mean features grouped by category, type, subtype, shop and city are created
19009, Plot feature importance
7233, Ridge Regression
9178, It is confusing to me that the minimum of this is 2
5057, On average the properties were 37 years old at the time of sale
11703, Scenario 1 Weighted Samples
15418, let's have a look at whether fares are a good predictor of survival rates
37769, To demonstrate the capabilities of this feature, let us define a method which estimates pi using randomly generated data points, and then look for ways to optimize it
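A minimal sketch of such a pi estimator via Monte Carlo sampling (the notebook's own method is not shown here):

```python
import random

def estimate_pi(n_samples=1_000_000):
    """Monte Carlo estimate of pi: the fraction of random points falling
    inside the unit quarter-circle, times 4."""
    inside = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4 * inside / n_samples

print(estimate_pi())  # ~3.14
```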
14711, Fitting here is a bit more involved than usual
32222, Looks like a lot of zeros
15628, Embarked categories
42113, That's pretty good: just two classes, but the positive class makes up just over 6% of the total
32561, Diff Source
14285, Model evaluation with tuned parameters using cross validation
15321, Let's work with the Cabin numbers
16044, Getting the data
23018, Event Pattern Analysis
18700, we ll use fastai s ImageCleaner Jupyter widget to re label delete images which are mislabeled noisy irrelevant
37156, Below I have extracted out the top 8 layer outputs of the model
40694, now the demand is highest for bins 6 and 7, which is about temperature 30-35 (bin 6) and 35-40 (bin 7)
37985, However, trying too many combinations might blow up the Kaggle kernel
14808, Find Missing Value
40859, Creating New Features
20220, Plot the validation loss and training loss
21511, Playing with brightness and constrast
35771, xgb reference
18533, Investigate the target variable SalePrice
37654, Visualize accuracies and losses
30698, Train an Autoencoder Model
6260, Both males and females have decreasing chances of survival the lower class they were in
5953, Transforming Sex
35208, We started checking with a forest model made of 300 trees
22340, Corpus
8979, Given our new features we can drop Exterior1st and Exterior2nd
40623, Add a list of columns to be dropped and id columns to be used by our data processing pipeline
15716, It looks similar to the train dataset and includes all same columns except Survived which we need to predict
39875, Neighborhood
20817, Feature Space
32249, WELL WELL WELL
16334, Logistic Regression
7947, Check the model loss score here evolution RMSE
11391, Looks like first class passengers are older than the second and third class passengers
8949, Fixing Functional
35739, Do some PCA for the dataset to remove some of the collinearity
36699, Defining the model
7496, Use different models to predict y
22003, Run the next code cell to get the MAE for this approach
27981, Imbalanced dataset is relevant primarily in the context of supervised machine learning involving two or more classes
1251, The SalePrice is now normally distributed excellent
38478, Convolution
4660, The target variable is right-skewed, and linear models love normally distributed data
31048, Length of characters
43344, from here our Neural Network part starts
7451, Distribution of survival rate class wise
1204, Gridsearching gave me optimal parameters for XGBoost
13299, This solution generates a tuned DecisionTreeClassifier via GridSearchCV, based on kernels
20905, Plot images
25285, we can figure out what ideal learning rates are
9326, Let a model help you
38228, The second diagram presents the average number of incidents per hour for five of the crimes categories
36724, Define train function
660, Perceptron
9011, There are 3 rows in the testing set which have null PoolQC but contain a pool
33083, check if it worked
19314, Evaluation prediction and analysis
1122, According to the Kaggle data dictionary both SibSp and Parch relate to traveling with family
522, Seaborn Distplots
37751, Now that we got the random sample rows, let us fetch them from the csv file
32646, Along with traditional libraries imported for tensor manipulation, mathematical operations and graphics development, specific machine learning modules are used in this exercise: regressors (ElasticNet, LassoCV, RidgeCV, GradientBoostingRegressor, SVR, XGBoost, StackingCVRegressor), cross-validation engines, a scaler, and metrics methods. Comments on the choice of regressors are provided in a later section
14585, Embarked Missing info
7989, Work With Labels
9373, count the houses which have no basement in train/test
34541, Below is the plot of occurrences of n-grams in the duplicate and non-duplicate sentences
18112, Quick EDA
36125, Categorical features are now encoded; we concat the categorical and numerical features and make the final clean prepared dataset
29810, Displaying FastText WordVector of given word
36249, very simple term frequency and document frequency extractor
13478, Our Logistic Regression is effective and easy to interpret but there are other ML techniques which could provide a more accurate prediction
17447, Check the Values
40465, Garage
38053, We can find the group of a var using the following functions
113, calculated fare
34697, Linear extrapolation to the next month
13953, Fill Missing Value
33092, Lasso regressor
19253, Overall daily sales
10677, most of the features are correlated with each other like Garage Cars and Garage Area
43307, Split training data into train and validation sets
32985, Decision tree regression
6561, Survival rate vs Siblings or Spouse on Board
39343, Covariance matrix of standardized data
37207, Final Training and Prediction
23605, finally for the similarity
37371, K Nearest Neighbor
42427, Max Floor Vs Price Doc
8604, Null in test data
1220, Uniqe
19929, check for null and missing values
6287, we take the training dataset and target column that we have just built and we create samples of this
31231, Features with max value between 10 20 and min values between 10 20
18370, Transforming Demand to Normal Distribution
29320, Predicting using VGG16
34483, When using TensorFlow as a backend with Keras, the Keras CNNs require a 4D array (tensor) with the following shape
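That shape is (n_samples, height, width, channels); a minimal sketch for 28x28 grayscale digits stored as flat 784-pixel rows, assuming X_train is a DataFrame:

```python
# Reshape flat rows of 784 pixels into a 4D tensor: (n_samples, 28, 28, 1).
# The trailing 1 is the single grayscale channel.
X_train = X_train.values.reshape(-1, 28, 28, 1)
```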
21464, The earlier layers have more general purpose features
42987, Word Clouds generated from non-duplicate pair questions' text
132, Using Cross validation
11117, Fit and Evaluate Random Forest Model
31901, To verify that the data is in the correct format and that you re ready to build and train the network let s display the first 25 images from the training set and display the class name below each image
40444, LotFrontage
22154, Besides that, with a simple linear transformation we could superpose two distributions, making them more readable to our human eyes. Take a look at var and var before and after we transform var
39842, Visualizing the sunspot in 2019
24575, For Google Colab
22712, Training different PCA Models
17015, Ticket
981, whole thing
11183, Compare train to test data: verify the distributions look similar; maybe add probability plots per feature with train and test on the same chart
17593, we ll merge our data
7022, Quality of second finished area
5476, Check to make sure the addition of the components equal the prediction below
18598, Another trick we ve used here is adding the cycle mult parameter
2394, KNNImputer
37814, Target Distribution
30940, Visuallizing Interest Level Vs Bathroom
24392, Modeling
7426, Identify the most important features with names on the first 25 principle components
38061, We have to clean some special words inside the cleaning text process
11076, Out of fold predictions
34796, Lasso Regression
15971, Parch Feature
12314, Outlier removal
34635, We are going to convert categorical data to categorical columns
28246, you can also check missing values like this, without the need for a function
1002, This looks much better Now we convert the categories to numeric ones to feed them into our model
18819, This model needs improvement run cross validate on it
6006, Data Numeric
12643, Handling Categorical Text Data
17009, Maximum age is 80 and minimum age is 0
38304, To handle null values, we take a random value between the minimum and maximum of each column and use that random value for imputation
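A minimal sketch, assuming a numeric DataFrame df (one random draw per column):

```python
import numpy as np

# For each column, fill its NaNs with a single random value drawn
# uniformly between that column's observed min and max.
for col in df.columns:
    lo, hi = df[col].min(), df[col].max()
    df[col] = df[col].fillna(np.random.uniform(lo, hi))
```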
2991, Missing Data Assessment
1547, Creating Submission File
30586, Counts of Bureau Dataframe
27904, MSSubClass, LotFrontage, OverallQual and OverallCond look like categorical values
12986, Simple Logistic Regression Model
7501, Proceed with rest of data using jointplots
40544, LinearDiscriminantAnalysis
20684, Functional Model API Advanced
41869, XGBoost using Scikit Learn API
10936, Numerical distributions before data imputation
6403, Linear Regression
37546, KERAS MODEL
17756, Training Final Classifier
7499, OverallQual and Fireplaces look fine since the better the quality and number of fireplaces the more expensive the house
3260, Identifying the total percentage of missing values in both data sets, excluding the target variable
36888, dense 3
669, Ada Boost
27355, Starting with a base model
11474, Comparing Models
4292, Create an intercept
575, submission scores
22627, Preparing Test Data for Prediction
26822, check now the distribution of the mean values per columns in the train and test datasets
41734, Load the pretrained embeddings
6175, The number of passengers is highest for 3rd class, then 1st class, and 2nd class is the lowest
7556, Gradient Boost
18545, The training part contains information about 891 passengers described by 12 variables including 1 target variable
21608, Accessing the groups of a groupby object with get_group
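For example:

```python
import pandas as pd

df = pd.DataFrame({"Pclass": [1, 2, 1, 3], "Fare": [71.3, 13.0, 53.1, 7.9]})
grouped = df.groupby("Pclass")

# get_group returns the rows belonging to one group as a DataFrame
print(grouped.get_group(1))
```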
36766, Data augmentation is a really powerful technique
42809, Features importance
21426, BsmtFinSF1
29046, Subtracting the median blur image from the original
8821, for these rare titles we'll convert them to Others, except Mme will be converted to Mrs, and Ms and Mlle to Miss
39884, Prediction with Linear Model Ridge
8392, Seting the type of some categorical features
19421, Few more things to notice
4271, Electrical
25007, Adding Target Lags
32786, let s run the frequency encoding function
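The notebook's own function is not shown here; a hypothetical freq_encode with the same intent maps each category to its relative frequency:

```python
# Hypothetical sketch of a frequency encoding function: replace each
# category by the share of rows in which it appears.
def freq_encode(df, col):
    freqs = df[col].value_counts(normalize=True)
    return df[col].map(freqs)

df["Ticket_freq"] = freq_encode(df, "Ticket")  # df and "Ticket" are assumed
```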
24915, Well, we have to appreciate India for maintaining constant cases
22875, Prediciting the Outputs
7470, Loading the csv files
12777, Machine Learing
28664, LotConfig
1918, All missing values indicate that the particular house doesn't have alley access
7543, let's convert categorical values into dummy variables and apply scaling
4969, Title
40262, 1st Floor Surface Area
28151, Well this is not exactly what we were hoping for
30608, Test One
5032, Import libraries and set globals
38250, Correlation Analysis
6392, Columns
33681, Difference Hours
39104, Applying Latent Dirichlet Allocation LDA models
163, Our second step is to check for missing values
30939, Visualizing Interest Level Vs Price
7794, We do not log the data since a neural network is quite good at working with non-linear data
3817, sample mean population mean standard error
18577, Fare
6231, Final feature selection
14362, Most of the Passengers came without any sibling or spouse
6078, Age
25813, Output
734, I chose a p value of less than 0
37293, Exploring Target Column
22144, Data Preprocessing
3850, Feature Fare
22289, Identify categorical variables
14727, Normalize the Data
34858, Prediction
18831, XGBoost
14836, Pclass
1136, Model evaluation based on simple train test split using train test split function
40404, Training using Stratified GroupKFold
15144, We can check the correlation between Ticket number length and Sex
2108, Feature engineering and Feature selection
15622, Deck feature
405, XGB
13864, we extract the numeric portion of the ticket We create extra features such as
2733, Loading the libraries and data
24338, Average base models according to their weights
4889, Did you recognize something? Yes, we can get the first letter by running a regular expression
33348, Question 1: create a plot with the 7-day moving average of total sales and the variation on the second axis
21399, Distribution of target variable
12841, Random Forest
15484, Test Against Cross Validation Set and Test Set and Analyse Performance Metrics F1 Score
10579, Evaluating ROC metric
7907, th model Gradient Boosting Classifier
9697, Observations
36012, Vocab
34654, Renaming and merging some of the types
8985, This may be a useful feature so I am going to keep it
6223, for categorical columns filling NaN with most frequent value of the column
5374, Embedded Methods
8359, FEATURE SELECTION AND ENGINEERING
5228, ICA
7603, sklearn pipeline Pipeline
4039, now look at the SalePrice variation on different categories of categorical variables columns
31003, Lets use XGBoost to assess importance
26721, For CA
17848, n_estimators: number of trees in the forest
119, age group
22521, Split the full sample into train test 80 20
26557, Compile the Model
7781, value counts with default parameters
4737, Missing values of each column
15706, Fare vs Survived
40287, Prediction
35670, Ordinal features
38743, Exploratory Data Analysis
14184, Treating missing values
5248, Setting Model Data and Log Transforming the Target
21256, Get Ratings By Sale Count
22324, Convert to Lower Case
3679, Sales price corr with new features
16433, DecisionTreeRegressor
2495, The chances for survival for Port C is highest around 0
5034, We have 81 columns SalePrice is the target variable that we want to predict Id is just an index that we can ignore we have 79 features to predict from
7837, You already know and use mean imputing mode imputing
12517, Checking the NaN values
14078, Pclass
37337, Save model weights and load model weights
22903, Most frequent words and bigrams
24514, We can now create a callback which can be passed to the Learner
26183, In this notebook I am just going to scale the data not making any new columns
14368, Analyzing Feature Fare
22841, Voila! That worked
8452, Identify and treat multicollinearity
31227, var_12, var_15, var_25, var_34, var_43, var_108 and var_125 have a very low range of values, further elaborated by the histogram below
8570, Grouping Rows by Values
19600, Last Preparation
2355, Support Vector Classifier
38111, The last 28 days are used for validation
33861, Machine Learning models
798, use our Sex and Title features as an example and calculate how much each split decreases the overall weighted Gini impurity
41481, Drop the following columns
7912, We're going to remove the principal outliers in the scatter plots of GrLivArea, GarageArea, TotalBsmtSF, 1stFlrSF, MasVnrArea, TotRmsAbvGrd vs SalePrice (TODO)
9002, I want to create a column that tells what type of Tier neighborhood the house is located in
18983, Display more than one plot of different types and arrange by row and columns
12264, Scikit learn implementation
16584, Fill missing Age
13382, Outlier Detection
36434, See which directories have you got
20926, reshaping into image shape images vertical height horizontal width colors
26659, Try to do the same model in Scikit learn
15975, The same thing happens with this feature as with the Age feature
18518, Reshape
34850, Feature Importance
17921, The columns SibSp, Parch, Sex, Embarked and Pclass contain categorical data, which machine learning algorithms cannot process. So one-hot encoding is applied to these columns in order to convert the data into numbers; it is done below using pandas get_dummies
16924, eXtreme Gradient Boosting XGBoost
3595, Lasso
34687, Adding all the zero values
42953, Fare Feature
5395, Here I put the training set and testing set together so that I can preprocess them at the same time; after data imputation I copy the training set so that I can do EDA with it
42347, We first have to transform the dataset into the ideal form in order to make XGboost running
14467, go to top of document top
453, Cross Validation
21664, Add a prefix or suffix to all columns
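pandas has dedicated helpers for this:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2], "b": [3, 4]})

print(df.add_prefix("col_"))   # columns: col_a, col_b
print(df.add_suffix("_raw"))   # columns: a_raw, b_raw
```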
32012, Convert Categorical Features
29682, Data normalization in case of CNN and images helps because it makes convergence faster
30765, Score feature removal for different thresholds
14858, Since, from the EDA, I remember that we have missing values in both the train and test data and multiple categorical variables to deal with, I decided to use pipelines to simplify all the work
14186, Feature Engineering
17842, We check the features importance
11294, Label encoding for categorical variables with ordered values ordinal
4745, Numerical Features
43132, Confirmed Cases 3 Worst Predicted
22868, Pooling Layer
32503, Model 2 Using Transfer Learning Extracted VGG 19 features
16150, filling missing values
561, submission for random forest
42760, Fare Distribution
12279, For the evaluation metric of feature importance I used (MSE of perturbed data - MSE of original data) / MSE of original data
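A minimal sketch of this permutation-based importance, assuming a fitted model and validation data X_val (a DataFrame) and y_val:

```python
import numpy as np
from sklearn.metrics import mean_squared_error

# Relative MSE increase when one feature is shuffled:
# (perturbed MSE - original MSE) / original MSE
base_mse = mean_squared_error(y_val, model.predict(X_val))
importances = {}
for col in X_val.columns:
    X_perm = X_val.copy()
    X_perm[col] = np.random.permutation(X_perm[col].values)
    perm_mse = mean_squared_error(y_val, model.predict(X_perm))
    importances[col] = (perm_mse - base_mse) / base_mse
```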
33101, Stacking models
39883, Importance
34281, view distribution of continuous features using boxplot
18311, Mean Encoding for shop id
4429, Features that have too low a variance can negatively impact the model, so we need to remove them based on the number of repeated equal values
22499, Lets start import tensorflow
40662, in just base implementations XGBoost is the winner
19293, Data Generator
33838, Matrix
14670, KNN
10541, Merging numerical and categorical data
28064, The basic idea for age imputation is to take the title of the people from the name column and impute with the average age of the group of people with that title
3263, Visualising the missing values along with their respective columns
8515, Creating Dummy Variables
9623, Model Building
40391, Defining our Learning Rate Scheduler function
38963, In this section we do the actual training comining the previous defined functions to a full pipeline
12989, Prediction
9739, Throughout this notebook I pretend that the testing set does not exist until the model is trained, to simulate the real-life scenario where the data to be predicted comes later
26684, convert categorical using LabelEncoder
26726, Plotting monthly sales time series across the 3 states
1189, Creating Dummies
643, remind ourselves of the distribution of Fare with respect to Pclass
29700, let's take them through one of the kernels in the next convolutional layer
12239, If statements
10902, Grid Search for SVM
30392, Targets
23486, TFIDF can be generated at word character or even N gram level
13089, Since PassengerId Name and Ticket columns do not provide any relevant information in predicting the survival of a passenger we can delete the columns
3016, These visualisation helps determine which values need to be imputed We impute them by proceeding sequentially through features with missing values
27922, Find the tree size where mean absolute error is minimum
26800, Confusion matrix
4978, Exploring Tickets
13509, Curve
19297, Data statistics
7377, To deal with the duplicates I define the function dupl_drop, which removes duplicates in both the PassengerId and WikiId columns
2115, Since we already have GarageCars to describe the garage, and this new feature is very correlated with the basement SF, we could consider whether it is better to use it and drop the original two
43037, CNN Model
8998, I am just interested in the features where 90% of the values are the same
4506, Magnifying Further
26574, Setting up the data pipeline
24414, Housing Internal Characteristics
2916, I print a heatmap with the correlation coefficients between the features
40033, take a look at some example images and their augmented couterparts in train
41994, iloc: to read a single row
36943, Title IsMarried
26078, Split data
12383, Removing outliers from continuous data fields
20490, Contract status
33296, Ticket Cleaner
18084, Plot the darkest images
10352, We use lasso regression
8841, Prepare categorical
41414, Basic modelling LGB
2735, Taking a first look at our data gives us a rough idea about the variables and the kind of data it holds
15411, leave the missing Cabin values unfilled and let s separate out the training and test data again
18247, Identify the missing values
13745, XGBoost
37002, check the total number of unique customers in the three datasets
16502, Linear Support Vector Machine
26639, Test data prediction
32540, Extra Analysis
9205, Gender Distribution
36529, I looked at every pair of images here and only the last pair was a pair of images different in an important way
36466, Images from ARVALIS Plant Institute 2
19256, Sub sample train set to get only the last year of data and reduce training time
36448, Modeling
14787, Embarked
26955, Check missing value from data
6702, EDA is a way of visualizing, summarizing and interpreting the information that is hidden in row-and-column format
10615, Embarked column in training data
11679, There are two numerical variables that have missing values namely Age and Fare columns
13050, Cross Validation Strategy
28649, Fence
25386, preprocess of data
11126, Prepare Submission file
22442, Pairplot
35828, Feature Engnieering
16969, Outliers detection
2922, K NearestNeighbors
804, Settings and switches
43322, Normalizing Images
11255, The train data will now be split into two parts, which we call training and validation
35105, Creating multilabels
39965, Categorical Encoding
34482, Original version takes Gb of memory
35886, Ordinal encoding of remaining categoricals
17947, Encode Age
24935, Feature transformations
650, We might even be at a stage now where we can investigate the few outliers more in detail
4255, There are 19 columns with nulls in the training set
10247, Go to Contents Menu
19246, Albumentations
35207, Random Forest
19956, The first letter of the cabin indicates the deck. I chose to keep only this information, since it indicates the probable location of the passenger on the Titanic
18605, tfms stands for transformations: tfms_from_model takes care of resizing, image cropping, initial normalization (creating data with mean/stdev of 0/1) and more
15230, LGBM Model
42519, Reshape to Match Keras's Expectations
22002, Drop columns with categorical data
15344, GRADIENT BOOSTING
22718, Creating the labels matrix for the plot
12770, It is time to make submission
20295, Correlation Between The Features
12604, use chi square test to understand relationship between categorical variables and target variable
1257, Feature transformations
13480, try another method a decision tree
14556, Heatmap
14482, Correlation Analysis
16372, Exploring Fare
38567, Dimensionality Reduction Techniques
31667, Evaluation of model with the best classifier
41774, Show network
35467, Visualize the skin cancer seborrheic keratosis
22090, Build Train and Test Data loaders
27249, Obtain the training set
6030, Check skewness and fit transformations if needed
13901, Graph on passenger survival Pclass wise
21440, Overall Rating
36494, Save folds
23209, Findings: the boosting method can't beat the best boosting base learner (gbc), though it could have if we had optimized xgbc. If you have time and infrastructure, you can tune xgbc's hyperparameters and compare the boosting accuracy with its base models' accuracy
3923, Decision Tree Classifier
3713, LonFrontage
17256, Looking into Training and Testing Data
33098, XGBoost regressor
3887, Mean
8929, Basement Features
38175, PyCaret needs an input to be entered after the next command, and since Kaggle doesn't support that, it is commented out
6919, Learning Validation Curves Jtrain vs Jcv to help identify under or overfitting
4788, Handling missing values
12432, Using regex
24321, I want to do a long list of value mapping
34664, Around 90% of all daily sales include only one item
19712, We have an equal number of cat and dog photos to train from
38635, the pooling layer
7959, Submission
6833, Feature Interactions
6444, Numerical variable which are actually categorical
16234, we apply the transform function of a Transformer. A Transformer is an abstraction that includes feature transformers and learned models; technically, a Transformer implements a method transform() which converts one DataFrame into another, generally by appending one or more columns
21128, Now we look at categorical variables. For this the idea will be very similar, so we start from the NaNs
20758, SaleCondtion column
28362, Importing Libraries
29193, Extreme Gradient Boosting
36417, Correlation Matrix
30708, now do the same with COCO bounding box format
33020, As we know the images come in grayscale format where all the values are between 0 and 255; a good thing to do is standardize the data, which makes it easier for the model to converge
19185, We know which features are the most significant for our model so we check the distribution of those features with respect to the target variable in bar plot scatter plot with linear fit and finally box plots to know the statistics
6169, Embarked
41830, Visualization of learning process for single model
35468, Visualize the skin cancer lentigo NOS
4869, Label Encoding of some categorical features
38851, Feature Engineering
9900, Pclass
17660, Observations
8622, GarageCars vs SalePrice
24657, I use only the dynamics of new cases and new deaths to make the prediction
30597, At this point we save both the training and testing data
30997, Feature primitives
5025, Ordinary Least Squares OLS Linear Regression No Regularization
14164, we can work out some meaningful interpretation from the Cabin column
27179, Similarly we can tune other parameters for better accuracy
37502, the correlation between year sold and sale price does not vary much
26409, Most passengers have neither children nor parents on board
28332, Types Of Features
4851, XGBoost
751, Before training let s perform min max scaling
37802, Regression Evaluation Metrics
35592, Freezing Layers
13882, This is the information we have in the training data set
17915, SCATTERPLOT ANALYSIS OF PASSENGERS AGES IN EACH CLASS
16576, If you are a beginner you can leave this portion of creating FamilySurv and come back later when you start understanding it
34540, n-grams are contiguous sequences of words: single words if n equals 1, contiguous sequences of two words if n equals 2, and so on
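For example, scikit-learn's CountVectorizer extracts unigrams and bigrams together with ngram_range=(1, 2):

```python
from sklearn.feature_extraction.text import CountVectorizer

# ngram_range=(1, 2) produces both unigrams and bigrams
vec = CountVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(["the quick brown fox"])
print(vec.get_feature_names_out())  # sklearn >= 1.0
# ['brown' 'brown fox' 'fox' 'quick' 'quick brown' 'the' 'the quick']
```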
36375, In percentage
24542, Again let s exclude the dominant product
16395, Boolean indexing (see the pandas docs on indexing)
39344, Finding the top 2 eigenvalues and corresponding eigenvectors for projection in 2D
36116, Lets fill the missing values
19156, Feature preselection
38038, Light Gradient Boosting Method
28268, NorthRidge and Northridge Heights comprise moderate to most expensive houses
24805, explore data
34109, Hospital beds for covid patients across state
14899, Visualize Age Fare Family size and Survived
13688, Embarked: I decided to drop the passengers who didn't embark, since modeling based on their data would act like noise; in my opinion they can't reliably tell us about the survived/not-survived output
34288, normalize data with min max
10870, 3D visualization
35485, CutMix data augmentation
32357, Tweaking threshold
10263, Restore training and test sets
6807, Logistic regression is the hello world of machine learning algorithms; it is very simple to understand how it works. logistic regression 9b02c2aec102 is a good article which covers the theory of this algorithm
31237, Before concluding let s do a check of whether feature values in test and train comes from the same sampling distribution
17591, We use heatmap for plotting the missimg values
25576, GrLivArea
16961, Feature engineering
32331, Univariate Analysis
32593, We can also find the best scores for plotting the best hyperparameter values
58, Making the dataset ready for the model
18494, Developing The Model Define a Performance Metric
25802, isolate the entries relative to these 100 managers with the interest level column as well
21636, Access numpy within pandas without importing numpy as np
26472, Logistic Regression
14250, The chances for survival for Port C is highest around 0
4653, Frequency Distribution
25994, Creating a SparkSession
19391, Extract the ordered products in each order
31264, Sample sales snippets
11629, Evaluate the model
39208, Research question: is there a significant difference between the text length of real and not-real tweets?
38275, train our model on the padded data (pad_docs) and labels, specifying epochs and batch size
27943, Extract features and make predictions on the test data
28940, We can even infer that the passengers who embarked at Southampton had a lower chance of survival
14609, Linear Discrimination
36728, Actually I noticed the discussion about
35063, Complexity graph of Solution 3
3681, Explore the Target Variable
38844, Most of the house built area is less than 20K Sq Ft
38055, Using GridSearchCV to find the best parameters for SVM
12387, One hot encoding of all purely categorical data columns
21317, Target varibale l logerror
4022, Garage Columns
20835, We ll be applying this to a subset of columns
32920, Wavenet Model
31793, First let us define the DenseNet backbone
41357, BsmtFinType2 is not useful for us in terms of correlation with the price
5893, Some Missing values in test data
26507, we add a softmax layer the same one if we use just a simple
15977, We impute the missing value with the mode
21804, SHAP values
11908, People with destination C (Cherbourg) survived at the highest rate
35324, Trying to use an ensemble method, the simplest being bagging
23397, Test Images
13272, Statement The boys from the small families Family of the third class cabins who were sitting in Southampton all survived
1903, Submission
39317, XGBRegressor training for predictions
21570, Save memory by fixing your date
10500, Sex
39720, Playing with the trained Word2Vec model
13904, Lets try XGB Classifier to train and predict the data
23559, There are also records with floor values greater than the maximum floors in the building
6289, Ensemble
11042, Score the models with crossvalidation
32126, How to find the count of unique values in a numpy array
15860, The following cell uses Baian hyperparameter optimization to search for the best parameters for a Random Forest model
11670, It looks like females are more most likely to survive than male
13093, Cramér's V is more appropriate than Pearson correlation for finding the correlation between two nominal variables
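A minimal sketch of Cramér's V (the basic, non-bias-corrected form) built on a chi-squared test of the contingency table:

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x, y):
    """Cramér's V for two nominal variables: sqrt(chi2 / (n * (min(r, k) - 1)))."""
    confusion = pd.crosstab(x, y)
    chi2 = chi2_contingency(confusion)[0]
    n = confusion.values.sum()
    r, k = confusion.shape
    return np.sqrt(chi2 / (n * (min(r, k) - 1)))
```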
28329, Identifying the categorical and numerical variables
7814, NA value
1152, Model Evaluation
15585, On the leaderboard this scores 0
29091, Comparison to the Original weights
23932, We can then replace the pre trained head
16134, Final Predictions for Competition
95, Combined Feature Relations
41267, Another way is to use gridspec
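A small matplotlib sketch: GridSpec lets panels have unequal widths and span rows or columns:

```python
import matplotlib.pyplot as plt
from matplotlib import gridspec

fig = plt.figure(figsize=(8, 4))
gs = gridspec.GridSpec(2, 2, width_ratios=[2, 1])

ax1 = fig.add_subplot(gs[0, 0])  # top-left, wider
ax2 = fig.add_subplot(gs[0, 1])  # top-right
ax3 = fig.add_subplot(gs[1, :])  # bottom row spans both columns
plt.tight_layout()
```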
23173, Encoding Categorical Variables
25794, We can create bands for Age and Fare; let's create them
22351, XGBoost
20833, Same process for Promo dates
4018, MasVnrType and MasVnrArea Imputation
12266, Download datasets
29220, Thus we are going to use 9 principal components to preserve 96% of the variance
31933, visualize some of the false predictions to try to get more of an understanding of the model's misclassifications
9431, Boxen Plot
37436, Wordcloud for selected text
10454, LightGBM
23695, Fine-tuning
11011, We are going to create a feature called Salutation depending on the title the passengers had in their name
38490, Errors font
33200, Defining a Small Convnet
1877, Nice No more missing values
21450, Categorical Numeric
11348, Logistic Regression
25169, Simple function to perform stemming while removing stopwords
11637, Random forest
23508, There are 17 elements in the equivalence class
14111, CatPlot
9965, Distribution of Sale Price
21251, Model Building
9808, The missingno library offers a very nice way to visualize the distribution of NaN values; it is a Python library compatible with pandas
8009, Train Random Forest Regression
15693, Gender vs Survived
43020, RandomForest
33834, MEDIAN: suitable for continuous data with outliers
21200, Merge all functions into a model
17530, Create Age bands
23236, Find the best parameter value
5677, Parch and SibSp
33486, Linear Regression for one country
30337, We create the actual validation object
1520, We base this part of the exploration on the
25950, Feature Selection
1588, We can also make feature transformation For example we could transform the Age feature in order to simplify it We could distinguish the youngsters age less than 18 years from the adults
15055, Prepare Training
8145, Feature Engineering
8109, Fare
7715, Outliers affect the mean value of the data but have little effect on the median or mode of a given set of data
26991, Starting point
41188, let s print missing value counts for numerical and categorical columns for merged data
22835, Well then this means there are certain items that are present in the test set but completely absent in the training set Can we put a number to this to get an intuition
16137, import python lib for visualization
1951, We remove the columns which have more than 15% missing data
23717, LotArea is a highly skewed variable, having a skewness value around 10
42619, Making Predictions
1102, Create datasets
7683, We do the same thing with the SalePrice (target values) column: we localize those outliers and make sure they are the right outliers to remove
18342, DATATYPE CORRECTION
20682, Sequential Model API Simple
4259, Non null unique values
23631, get the validation sample predictions and also get the best threshold for F1 score
23446, Season
5205, Now check again the dimensions of our training set after engineering the categorical features using the get_dummies function
11817, SalePrice and Overall Condition
32310, Relation between Survival and Passenger Class
5295, I can split the data into training set and test set
27126, Garage
18551, The mean age of surviving passengers is 28
26114, We can drop PoolQC MiscFeature Alley and Fence features because they have more than 80 of missing values
19163, Brand name label features
38463, Custom testcases
29800, Fetch most similar words wrt any given word
1798, SalePrice vs 1stFlrSF
35164, Compile 10 times and get statistics
32083, Check for Missing Values
40098, Add missing income
33268, Create submission file
28726, Descriptions of the top 5k most expensive sales
11120, Plot Number of Features vs Model Performance
41761, You can actually follow along as the search goes on
18502, Naive Submission
1380, To finish our exploration, I create a new column with family size
27927, Cross Validation
4132, Label Encoding Categorical Data
23201, Looking at this plot, one thing we can say is that a linear decision boundary would not be a good choice to separate these two classes; we will train our models on these 2D-transformed samples to visualize the decision regions they create
36777, Here format_sentence changes a piece of text (in this case a tweet) into a dictionary of words mapped to True booleans
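A plausible reconstruction of such a helper; this word-to-True dictionary is the input shape NLTK's NaiveBayesClassifier expects:

```python
from nltk.tokenize import word_tokenize

def format_sentence(text):
    # e.g. "good flight" -> {'good': True, 'flight': True}
    return {word: True for word in word_tokenize(text)}
```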
6977, Concatenate train and test sets into one so we can analyze everything and fill NaNs based on the full dataset
41593, priors has multiple instances of the orders; that is, each product in an order is a separate row
42784, Training the model
1868, Refit with Important Interactions
2931, Plot Learning Curves of standard Algorithms
42880, Time plot
5833, Firstly lets convert string type values of object features to categorical values
24352, In order to make the optimizer converge faster and get closer to the global minimum of the loss function, I used an annealing schedule for the learning rate
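A sketch of one common annealing setup in Keras using ReduceLROnPlateau (the model and data names are assumed):

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau

# Halve the learning rate whenever validation accuracy stalls for 3 epochs
lr_schedule = ReduceLROnPlateau(monitor='val_accuracy', factor=0.5,
                                patience=3, min_lr=1e-5, verbose=1)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=30, callbacks=[lr_schedule])
```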
6, focus on some of the categorical features with major count of missing values and replace them with None
42686, I want to draw two count bar plots for each object and int feature
28501, Each image is 28 pixels in height and 28 pixels in width, for a total of 784 pixels
10424, XGB
23418, we can predict the test values and save them
3766, Magic Weapon 1 Hyperparameter Tuning
3633, Age
17267, Data Visualization
38073, Stratification
32464, Models Data Prep
40303, Leaky Features Exploration
29732, Let's define the key space we are going to optimize
6517, Categorical Variables
26690, AMT_REQ_CREDIT_BUREAU_TOTAL: the total number of enquiries
5929, coordinates of point on scatter plot seaborn
43266, Import the draw_tree function, which we use to visualize the decision tree
20713, Alley column
41572, Making the functions to get the training and validation set from the Images
1531, Cabin Feature
12468, In general barplot is used for categorical variables while histogram density and boxplot are used for numerical data
7077, All the Title belongs to one kind of gender except for Dr
23474, Reading the test file
26978, Load libraries
23690, Take a look at some pictures is everything alright
41530, The digits in this dataset are more uniform than those of the Kannada dataset
14743, we have a 6 dimensional DataFrame and our KNN classifier should perform better
18338, In this section I am going to zoom in on variables which are correlated with the target variable SalePrice
41242, Tensorflow model
19180, EDA on predictors
37568, Integer Columns Analysis
8099, Parch vs Survival
16732, concat datasets
5244, Dropping Unnecessary Features
10403, Confirm column overlap between train and test
9971, I created the area_util feature; my idea is that LotArea is the house's total area and the other fields are sub-areas, so I sum the other areas and subtract them from the total area
8856, Calculate Shap values example 1
26418, The passengers have different titels which give information about their sex about their social status or about their age
22022, var36 is most of the times 99 or
33732, Splitting the data into train and validation set
6607, Decision Tree
11964, Prediction
31857, Producing lags brings a lot of nulls
18519, Label encoding
807, shape info head and describe
36126, Different Models
17936, Passengers who boarded in Cherbourg were more likely to survive, while those who boarded in Southampton were more likely to die
40479, Gaussian Naive Bayes
42026, Split strings into numbers and words
36389, When using such an encoding method, there might be some category values in the test data that are missing from the train data
7954, Tuning on activation function
21235, Set class weights
31564, transform is None
11974, In the next step we deal with the numerical features
3721, PCA (Principal Component Analysis)
43402, The real code starts here
21569, Count the missing values
20092, Data Preparation
28668, Access
11620, Additional exploration
2511, Predictive Modeling
9646, Pretty much clustered in the 0-1000 GarageArea range
20491, Payment type
14783, Ticket
12945, Age Column
3477, Model 2 ElasticNet
14509, Observations
10988, Build Our XGBoost Model
40070, Multi colinearity Numerical
29611, Plot random images with the labels
35927, Create submission file for Kaggle
12382, Box plot of all continuous data fields
40874, Retrain and Predict Using Best Hyperparameters
26086, The number of errors for the each digit
41459, The majority of females survived whereas the majority of males did not
34706, Cumulative shop revenue
39770, I was wondering whether the test dataset was an extract from the train one or completely different
31327, Process Test Data
12915, we are right that Age Cabin and Embarked contain missing values in training set
18313, assuming month of year plays a big role in number of items sold
21385, Fit the model
28263, Discrete features
30995, An EntitySet is a collection of entities and the relationships between them
43082, Look at the most correlated feature pairs, excluding the trivial same-feature pairs
5830, After dropping Id and other columns with a high number of missing values
42687, Sometimes null data itself can be important feature
31897, Fit Model Using All Data
10658, Classification
36550, What do the scatter plots look like?
31384, We will first import the data and create copies
32908, Model
12471, Outlier Treatment
2124, After this search the best configuration is given by
16106, Fare Feature
39164, We don't want that; it would confuse our CNN
17357, A decision tree is a predictive model which maps features to conclusions about the target value
12487, Deployment
20561, simple cleaning the math tags
26933, This method helps obtain a bag of means by vectorizing the messages
12040, I'm going to put 0 in MiscVal for houses which don't have any MiscFeature, and None for houses with 0 in MiscVal but some value in MiscFeature
23873, Above are some of the most important features
19296, Find best threshold
4811, Exporting our submission dataset to a csv file
26819, Distribution for Standard Deviation pre
42143, Fashion MNIST
28864, The Encoder structure is very similar to an LSTM RNN network
5901, Feature Importance
36670, Text Pre processing
966, Second level learning model via XGBoost
5336, Display series with high, low, open and close points
39677, Initialize all the variables we created
34485, Create and Compile the model
36925, When I googled "Stone, Mrs George Nelson (Martha Evelyn)", I found that she embarked from S (Southampton) with her maid Amelie Icard, per this page: Martha Evelyn Stone Titanic Survivor, titanica org titanic survivor martha evelyn stone html
32942, Prepare the submission csv
37291, Reading Dataset
23027, Though the item is the same, the selling price differs slightly across stores and seasons
24676, Data Normalization
34835, Analysing the correlation between the features and the sale price, and selecting only the most correlated features
37818, Merge the train and test sets to process everything at once
34466, Expanding Window Statistics
27235, Calculate importance of each feature to iterate feature engineering
9848, I replace them by 0
23366, Training The Model
27471, Here we just take a list of texts and combine them into one large chunk of text
257, Checking for number of clusters
36010, Datasets
29868, The images are actually quite big
42466, Checking the correlations between ordinal variables
28395, Null Value Management
8083, Setting Up Models
21639, Formatting different columns of a df using dictionaries
23956, Applying Random Forest Regressor Algorithm
30969, I think we're going to need a better approach. Before we discuss alternatives, let's walk through how we would actually use this grid and evaluate all the hyperparameters
40990, For example applying a lambda function
30410, LSTM models
8951, Fixing Garage
6859, Understanding Variables
12, Principal Component Analysis PCA
9953, Data Modeling
43038, Compile the Model
16388, Removing Less Important Features
12408, OverallQual
3898, Target Encoding
27293, Intervention by after days for SEIR model
15646, Bagging
31906, EVALUATING THE MODEL
26835, Hourly Rental Change
3011, Imputation
12448, it follows a linear correlation
25437, Split Training and Validation Set
21782, we would transform categorical value to their one hot encoding version
29117, check the the top 6 images with highest losses
9891, We can calculate the survival probability of passengers by looking at their gender
32870, Mean encoding
7110, we can evaluate all the models we just used
7026, Kitchen quality
33999, Right skew
11113, EDA Relation between each feature and saleprice
6885, Distribution of survived 1 is for survived and 0 is for not
35779, Averaged base models class
29974, Applying tokenization process
25750, Here is a function that crops a centered image to look like train size test size 0
23999, K Folds cross validator
24561, Distribution of products by activity index
38270, As we have applied all the cleaning steps, it's now time to separate our data back
37096, Missing values
42359, Adding location to this sparse matrix
28426, In our test set we have 5100 sales in really new shops and none in outdated shops, but it is still a good feature for the future
13571, Plotting Family Size
18547, Check data for NA
29378, Transfer Learning
6270, These age bands look like suitable bins for natural categorisation
20321, the Python gods are really not happy with me for that hacky solution
23613, Work with missing values
14479, Now we focus on which groups within each passenger class survived more or less
1172, Zoning is also interesting
31813, Get new image size and augment the image
150, Extra Trees Classifier
11016, We are going to guess missing values in Age feature using other features like Pclass and Gender
4379, Observing the plot, there are homes with 3 fireplaces whose SalePrice is not particularly high
6160, Remove the features which give us less than 0
7824, We have two models to fit the data
40142, Create submission output
15493, Hidden neurons
8704, LabelEncode the Categorical Features
14580, PassengerId, Name and Ticket do not play any role in Titanic survival chances
30932, How about longest questions
35359, Raw
18159, Checking the number of rows and columns present in the input file
20032, We would like to remove some of this redundancy
8839, Extract target variable
30340, These are the parameters that we are going to input to the previous function
14553, SibSp (Siblings/Spouses)
13205, let s separate our data for train and test
4888, corr(Fare, Pclass) is the highest correlation in absolute numbers for Fare, so we'll use Pclass again to impute the missing values
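A minimal sketch of that class-conditional imputation:

```python
# Fill each missing Fare with the median fare of its passenger class
test['Fare'] = test['Fare'].fillna(
    test.groupby('Pclass')['Fare'].transform('median'))
```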
23804, Modelling with Generalized Additive Model GAM Whitebox Model
10511, create a New Attribute Alone which would be True if he she is travelling Alone
8112, MACHINE LEARNING
16469, Here I m not using HPT for LDA
19962, Hyperparameter tunning for best models
40253, Our Q-Q plot tells us that there is a single outlier over 4000 which is causing major problems for the distribution
42399, Is there seasonality to the sales
16881, NewFeature FamilySize
13744, Comparison
18021, New dataframe Woman Child Group by Ticket
22153, WHAT I WOULD LIKE TO HAVE DONE IF I HAD MORE TIME
20288, Embarked
26783, How many combinations of GPS latitude and longitude
20678, Fit the Model
20927, Model
24262, Correlation analysis with histograms and pivot tables
41437, Pre processing tokenization
18692, After eyeballing the graph let s choose a maximum learning rate
26871, In the first layer we have 60 parameters
30368, Test SKImage
15312, Let's find out the percentage of women and men
25960, Top Selling Product Departments
23510, There are 2 elements in the class
29546, The main words are articles
5053, look at the time related features like building year year and month of sale
26073, Importing
27169, It is important to convert the categorical text data into model understandable numerical data
7214, Plotting some graphs for insights
38019, At its best, what kind of predictions is the network trying to make?
32812, Level 3 XGB
33857, Visualization using t SNE
36431, Model Training
8027, Let's replace the missing values with a variable X, which means Unknown
5855, The two outliers we wish to remove are those with very low SalePrice relative to their GrLivArea square footage that are inconsistent with the trend of the rest of the data
38058, Checking target distribution
13854, Feature engineering
11819, We have Outliers in our data
22287, Take a look at your submission object now by calling
11752, we are going to take care of the skew that we noticed in some of our predictor variables earlier
35572, Lets look at a lot of different items
17878, Outlier Treatment
15391, Have a look at the test data
20357, This submission gets on the public leaderboard
22411, for gross income aka renta
8277, Importing Libraries for Modelling
31669, Preprocess Data
41527, The number 4
1372, cross our Pclass with the Age cat
20569, It helps in determining whether higher-class passengers had a higher survival rate than the lower-class ones, or vice versa
10415, Score pipeline
16742, modeling
28404, Neural Network
28305, Finding root mean squared error for DecisionTreeRegressor
28684, MoSold
4520, RandomForestRegressor
8374, Ensemble
732, removing these two
24741, Outliers
38999, Now shuffle the dataset randomly. We also have to make sure that x (the inputs) and y (the output labels) remain in sync during the shuffling
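One standard way to keep them aligned is a single shared permutation (x and y assumed to be NumPy arrays of equal length):

```python
import numpy as np

perm = np.random.permutation(len(x))  # one index shuffle for both arrays
x, y = x[perm], y[perm]
```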
39246, Remove outliers in price data
2186, Validation curve
38778, Select investment sales from training set and generate frequency distribution
999, We also impute the Cabin missing values; we use U for Undefined
18851, Visualize ROC on the Validation Set
4358, Feature
21372, Check missing data
6699, Lets create some swarm plots between Survived and Fare
21577, Aggregate your datetime by day and filter weekends
9427, stripplot
19592, city code
28199, We can finish up this part-of-speech tagging script by creating a function that runs through and tags all of the parts of speech per sentence, like so
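A sketch of such a function with NLTK (assumes the punkt tokenizer and POS tagger models have been downloaded):

```python
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

def tag_text(text):
    # POS-tag every sentence: [[('The', 'DT'), ('ship', 'NN'), ...], ...]
    return [nltk.pos_tag(word_tokenize(sent)) for sent in sent_tokenize(text)]
```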
14798, Random Forest Model
13305, Ensemble modeling
11214, Ensemble prediction
34254, Predict on Entire Data Set
33260, Data Preparation
40797, Checking date
26495, The corresponding labels are numbers between 0 and 9 describing which digit a given image is of
37416, PCA Example
4067, we can finally chain all of these transformations in a pipeline that be applied over our dataset before training testing and validating our models
37574, Important Variables
27552, Display interactive filter based on click over legend
24294, now we can plot that image
26069, Looking at the best probability achieved we have a digit that the model is 100 confident is a three
28937, About 65% of the passengers were male and 35% were female
40741, Large CNN
39224, Remove the duplicate columns from the training set and test set
20293, Parch
12966, Embarked Sex Pclass and Survived
23792, Prediction and Submission
39848, Auto-Correlation and Partial Correlation Graphs
30568, Aggregating Numeric Columns
40398, Fold 5
6456, PCA
10410, Run outlier detection on Overall Quality of 10
25389, Defining the model architecture Using ConVnets
36800, we create the chunk parser with the nltk RegexpParser class
17623, Fare per person
6678, Ada Boost Classifier
21365, Warmup Look at the Lonely Characters in selected text
8251, Univariate Analysis
41447, K Means Clustering
4771, 101 passengers from third class embarked from C, 113 embarked from Q, while 495 embarked from S
7523, Model 4: Exponential fit y = A·exp(B·x)
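A sketch of fitting y = A·exp(B·x) with scipy (x_data and y_data are assumed arrays):

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_model(x, A, B):
    return A * np.exp(B * x)

# p0 gives the optimizer a reasonable starting point for A and B
(A, B), _ = curve_fit(exp_model, x_data, y_data, p0=(1.0, 0.1))
```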
40323, add obtained user components as cluster features
11643, Cross validation
5906, Types of scoring in grid search
17856, Create the ensamble framework
2516, K Nearest Neighbours KNN
4151, Random Sample imputation on Titanic dataset
20855, Some categorical variables have a lot more levels than others
15267, Split into Train and Test data
9769, Dummy Variables Encoding
4154, End of Distribution Imputation on Titanic dataset
11660, Decision Tree
41969, Convert Date Column data type from object to Date
16449, Cherbourg port is safer compared to the others
30143, Preparing Submission File
5302, The rest of the data processing is from the kernel by Boris Klyus
19003, Use test subset for early stopping criterion
197, LightGBM
22025, num var5
7915, Replace NaN values
9675, Tune leaves
2838, Libraries and Data
33603, Convert the data frames to numpy values
20267, Analysis of loss feature
10411, Remove the outliers from our data set
8989, the number of stories does not have a linear relationship with price
42727, draw the heatmap of float features
30635, Relationship between family members and survival rate
28375, Creating Training Testing Set
42575, The nested for loops below calculate every possible score our predicted values can produce, and keep track of whether each sum is built out of a correct or a wrong prediction
13180, First let s take a look into Fare distribution
12962, Parch and Survived
21582, Correct the data types while importing the df
20073, Insights
42250, Perform feature selection and encoding of categorical columns
30590, Aggregated Stats of Bureau Balance by Client
15102, As expected those passengers with a title of Miss tend to be younger than those titled Mrs
12759, I explore the size of the dataset to compare it with the number of NANs in the dataset
32829, We are here at the data cleaning part
38248, find the top 5 RGB distributions in each of the 10 test images
33698, Yearly Series
29870, Training
34844, Save Models
23267, Fare
30888, Final training and test predictions
16684, Analyze by pivoting features
42934, Saving the list of original features in a new list original features
32628, Naive Bayes Classifier
4465, We then assign social status titles to them for more in depth analysis
2357, Pipeline and Principal Components Analysis and Support Vector Classifier
32174, FLAT FEATURES EXAMPLE
26866, look at the effect of these two new filters
17726, pandas
39454, checking missing data in installments payments
42115, For our second model we ll use TF IDF with a logistic regression
39249, Import data
38651, Age
22005, If you now write code to
6307, Linear Support Vector Machine
97, This is another compelling facet grid illustrating the relationship of four features at once: Embarked, Age, Survived and Sex
17035, Rare values Pclass Sex Embarked
10963, KitchenQual
27054, Defining and fitting the Model
7228, LotFrontage
31536, MasVnrArea
38519, Pre processed selected text columns
5746, Checking survivors by sex
29060, Merging multiple outputs
18648, There are quite a few missing values present in this field
29759, Train the model
31283, Exponential smoothing
23601, Like I said, I am compiling my understanding of gensim from a lot of sources, and one of them used multiprocessing, stating that training might be painfully slow otherwise
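A hedged sketch of that multiprocessing setup with gensim 4.x (sentences assumed to be an iterable of token lists; vector_size was called size in older gensim):

```python
import multiprocessing
from gensim.models import Word2Vec

model = Word2Vec(sentences, vector_size=100, window=5, min_count=2,
                 workers=multiprocessing.cpu_count())  # parallel training
```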
38090, Activation Function
36778, Training Data
2276, Building Machine Learning Models
8407, Garage areas and parking
40731, Try to compare the layer 1 output
28767, We create a function that plots the accuracies and losses of the model output
38988, Training for positive sentiment
11663, Extra Trees
11033, Linear SVC
13747, Gradient Boosting
13899, Around 300 passengers survived and more than 500 died
7615, For some linear models QuantileTransformer gives best score for C
37023, Top 15 second levels categories with lowest prices
19725, For each item
30086, Naive Bayes Algorithm
21674, Memory reducer
8341, Moving on to neural networks, we simply use Keras for an easy implementation of a multi-layer perceptron
4586, We also test the predictive power of the following features during model evaluation
6206, Logistic Regression
15734, ROC AUC Score
30282, Mortality Count 5
6799, Model prediction
41744, Some fancy pants code my teammate Alex made for the toxic comment challenge that I ve expanded on and adapted to this challenge
6019, Save Model
33324, Training and validation data generator
5604, Season
21106, Split training, validation and test datasets
29373, Create Train Test
3645, Feature Engineering continued
23225, Here all the values which are True are the duplicate ones
250, Library and Data
11612, Embarked: I fill missing values with the most frequently occurring value
8128, Correlation between values and SalePrice
4948, use the Box Cox transformation to deal with the highly skewed values
18377, R Square Error
3320, Version Added new plots from Seaborn release
37488, Multilayer Perceptron
21955, Confusion matrix
29540, We're going to take 10% of the training data and use it for validation
967, Producing the Submission file
15265, Convert Features into Numerical Values
28492, Model fitting
16466, Pearson s
8842, For some categories the order is quite important like OverallQual
22295, Analysis and Submission
22474, Plotting with different scales using secondary Y axis
34847, Finding Duplicated columns
6661, Split the data into train and test set for classifcation predictions
41785, Optimization
10333, We conclude that
12525, so these are the columns which have missing value and have numeric data
16339, Random Forest
43137, Plot Loss Trend
41205, load our submission format and fill SalePrice columns with our predictions data
1289, Cross Validation 5 1
4874, Function for Scoring Training and Testing
37924, Final Prediction
37046, Stemming is the process of producing morphological variants of a root (base) word. Stemming programs are commonly referred to as stemming algorithms or stemmers. A stemming algorithm reduces the words chocolates, chocolatey, choco to the root word chocolate, and retrieval, retrieved, retrieves to the stem retrieve
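For illustration, the Porter stemmer in NLTK; note it trims suffixes mechanically, so actual stems are often truncated forms (e.g. retriev) rather than dictionary words:

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
words = ['chocolates', 'chocolatey', 'retrieval', 'retrieved', 'retrieves']
print({w: stemmer.stem(w) for w in words})
```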
14710, SVC Parameter Tuning with GridSearch
35770, XGBoost
35508, Outliers
6599, Visualize default tree optional
22944, This may not be the best feature for us to use because most of the data is considered null
35462, Visualize the skin cancer at head/neck
25809, Fit the Model with GPU
15033, S is the most common value of Embarked, so set the nulls in Embarked to S
37109, Code in python
21335, Normalize the data
35879, Selling prices
39160, The last step is creating the submission file
43026, Model 3 Overfitting
12003, let s check the R Squared for lasso
2151, Imports
27565, ps ind 01
4639, Count of distinct categories in our variable, but this time we don't want to count any NaN values
15637, Re train the model on new features
36737, Comparing MAE values of all models
22642, Model 6 XGBoost
16108, Map Fare according to FareBand
13224, GridSearchCV for SVC
32945, Remove all special characters, split by spaces, and remove stopword entries in the lists for train and test
28140, Submitting the predicted labels as a csv file
20770, Tf idf space
5939, There are many columns that contain character values
23249, We do the same for test data as we do on train data
1386, Predicting X test
9023, There are
30717, One hot encoding
8409, Total Basement Area Vs 1st Flor Area
16668, Feature Transformation Categoric Variables
29963, Inference
1409, I need to replace a few titles with other values because these titles are not as popular and have a low frequency of occurrence in this dataset
6056, BsmtQual missing values for No Basement
27973, Mean Frequency
32713, Bidirectional LSTM
2663, Constant Features
9214, Family Survivor by Familymember
3778, Classifiers
38281, Imputing data in Basement related columns
15825, Train and test data split: 70% for training and 30% for testing
29467, Basic Feature Extraction
21239, Visualize what our predictions look like
25752, Surprisingly it still can in some cases
20737, Electrical column
1625, Linear Regression with Ridge regularization L2 penalty
37978, Build the CNN model
13963, Pclass Survival probability
28327, Identifying the missing values in previous_application
5172, Each model is built using cross-validation, except LGBM. The parameters of each model are selected to ensure the closest match of accuracy on the training and validation data. For this purpose a learning-curve plot is built with learning_curve from the sklearn library (see the sklearn.model_selection.learning_curve documentation)
9910, Get Model Score from Dropping Columns with Missing Values
18817, Elastic Net Regression
23386, One issue with yolo is that it is likely to contain more cells in its label grid that contain no objects than cells that do contain objects
12211, I hope it is now evident why I kept implementing a get features name method in the previous classes
1166, Most of these can be filled with None
6538, Here I have merged some columns just to reduce complexity. I tried with all the columns, but I didn't get as much accuracy as I am getting right now
5257, First of all we are going to train a baseline RF model
24059, We ll drop features with less than e importance you can change this threshold
18140, Here we average all the predictions and provide the final summary
21176, Displaying output of layer 4
31182, Before Scaling
24483, Training 10 folds for 10 epochs each strategy 3 improves validation score to 0
38925, Keras NN Model
8357, Survival rate by cabin
35939, Model evaluation
11816, SalePrice and Overall Quality
12616, Fare
12533, Cross validation of these models
19139, Model 2: input 784 -> ReLU 512 -> ReLU 128 -> sigmoid output 10
26288, Backward propagation with dropout
22774, Setting up the LSTM model
27175, Linear Regression
10140, let s try the same but using data with PCA applied
38893, Cost Function
34181, Interpreting CNN Model 2
6041, This code block finds best combinations for you
11399, time to deal with categorical features
13458, There are only 2 missing values for Embarked, so we can just impute with the port where most people boarded
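A one-liner sketch of that imputation:

```python
# mode()[0] is 'S', the most common embarkation port
train['Embarked'] = train['Embarked'].fillna(train['Embarked'].mode()[0])
```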
16739, embarked
36366, Train model on the full data and make final predictions
22655, Performance
2411, Numerical Imputing
36830, Accuracy
12532, Creating models
31529, Separating data based on data types
840, Model tuning and selection with GridSearchCV
13852, Numerical features
4299, Inference
15979, This feature is like the Name feature
11923, Feature Importance
28249, Little advanced visualization
4063, Clustering over the 3 most important principal components gives us 80% of explained variance
33313, Plotting Decision Boundaries
38466, Columns with missing values either in train or in test
42837, We achieved good AUC on both training and cross validation
24471, Distribution regarding to target
18517, We perform a grayscale normalization to reduce the effect of illumination differences
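The usual rescaling, assuming pixel arrays in [0, 255]:

```python
X_train = X_train.astype('float32') / 255.0  # map intensities to [0, 1]
X_test = X_test.astype('float32') / 255.0
```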
3982, Take the logarithm of the house value
32024, we can apply one hot encoding
33302, Model Selection
35396, Preprocessing by BoxCox
16379, Fare Band
28957, Some of those look quite good
26935, That's almost all; now we can train the classifier and evaluate its performance
41343, Categorical Features
6226, saving files
2534, Now that we have checked the available devices, we test them with a simple computation
34054, Play with the parameters and do Cross Validation
37098, Outliers
25683, System with SOCIAL DISTANCING
6317, Multi Layer Perceptron
40622, Just a technical point add caching for the data pipeline
5326, Display spots of latitude and longitude
4184, Date and Time Engineering
35881, Sales data
36712, Different Tokenizers
26730, Plotting sales time series accross categories
27847, Top 20 2 gram words in sincere and insincere questions
41454, Feature Passenger Classes
35565, An important thing to note here is that this weird relationship between meta features and target does NOT extend to the test data; we generate predictions to demonstrate this
14204, Checking with some cross validation using stratified k fold
13301, We can now rank our evaluation of all the models to choose the best one for our problem
25848, Dealing with username
43270, Training the model with the training data
17450, RandomForestClassifier
23558, These 37 records have living area greater than their total area
23196, Correlation among base models' predictions: how are the base models' predictions correlated? If the base models' predictions are weakly correlated with each other, the ensemble is likely to perform better; with strongly correlated predictions, it is unlikely to. To summarize, diversity of predictions among the base models improves ensemble accuracy. Make predictions for the test set
5485, For TotalBsmtSF there was only one outlier, at index 1298, and it was already deleted along with the GrLivArea outliers
11680, get_dummies creates a new column for each of the options in Sex: a new column for female called Sex_female and a new column for male called Sex_male, which encode whether that row was female or male
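A minimal sketch (drop_first=True would keep only one of the two redundant columns):

```python
import pandas as pd

dummies = pd.get_dummies(train['Sex'], prefix='Sex')  # Sex_female, Sex_male
train = pd.concat([train.drop('Sex', axis=1), dummies], axis=1)
```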
37110, Code in python
1569, Cabin Number
2004, Again with the bottom two data points
16853, split
7299, Observation: the distribution of SalePrice is unimodal, with a peak around 150,000 dollars
7853, Random Forests A Slight Improvement
12767, I normalize the test set as I did before and fill null values in the dataset as well
9027, Since there is only 1 null row with BsmtFinType2 and BsmtFinType2 is highly correlated with BsmtFinSF2 I got the BsmtFinSF2 value of the null row
24244, Name length
27909, Simple Imputer
34761, Fitting model on Training data
38715, Great so now the GPU is working and should speed up our computations
16874, Pclass Vs Survival
36597, The first step is loading the data, doing pre-processing, and visualising it
1292, Ensembling 5 4
20571, Fare denotes the fare paid by a passenger
6070, MiscFeature is uninfluential; drop it
19982, There is a lot of hyperparameter tuning when it comes to Keras, such as
20473, Comparison of interval values with TARGET 1 and TARGET 0
13793, XGBoost
26345, We predict SalePrice column
37545, Data Preparation for Pytorch Model
26076, Chart of the number of digits in the data
41913, We can't really balance the size of our dataset by down-sampling because almost all images are very large; because of this, we are going to resize our images instead
36775, Making Submission to Kaggle
35503, Evaluate the model
10452, Gradient Boosting Regression
35084, Comparing the images before and after applying the PCA
26399, There are twice as many men on board as women
7016, Evaluates the quality of the material on the exterior
3709, Lasso Regression
32797, Generate level 1 OOF predictions
15388, load training and test data
27097, The intermediate feature vector (the output of pool-3 or pool-4) and the global feature vector are fed as input to the attention layer
7218, Basement Null values
17855, we prepare the submission dataset and save it to the submission file
6229, Convert and create new features
14740, Dimensionality Reduction
14954, Extract title from name
40681, Show the distribution of distances of data samples from the most frequent template
2942, Separate Numerical and Categorical Features
34014, Monday 0
20078, Insights
25883, Histogram Plots of number of characters per each class 0 or 1
16622, Bayesian Optimization
21846, Training
23791, Modeling and Training
14334, One Hot Encoding
11443, Load and Read DataSets
8052, Advanced Regression Techniques
10660, Compare Model
37134, cross validation
23584, Submission
16666, This feature is from Konstantin s kernel
33876, Gradient Boosting Model
39975, Check the seasonality by hour, if we assume the time feature is in minutes
19892, Getting the new train and test sets
10868, Transform the Name feature
3983, Check some Null s
14873, Quite a few more people died than survived
4976, We can easily create a new feature called family size: Parch + SibSp + 1 (himself/herself)
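A sketch of that feature on Titanic-style frames:

```python
# +1 counts the passenger himself/herself
for df in (train, test):
    df['FamilySize'] = df['SibSp'] + df['Parch'] + 1
```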
15073, Embarked
12985, Train and test split